Overview
OpenAI models are available in LiveKit Agents through LiveKit Inference and the OpenAI plugin. Pricing for LiveKit Inference is available on the pricing page.
| Model name | Model ID | Providers |
|---|---|---|
| GPT-4o | openai/gpt-4o | azure, openai |
| GPT-4o mini | openai/gpt-4o-mini | azure, openai |
| GPT-4.1 | openai/gpt-4.1 | azure, openai |
| GPT-4.1 mini | openai/gpt-4.1-mini | azure, openai |
| GPT-4.1 nano | openai/gpt-4.1-nano | azure, openai |
| GPT-5 | openai/gpt-5 | azure, openai |
| GPT-5 mini | openai/gpt-5-mini | azure, openai |
| GPT-5 nano | openai/gpt-5-nano | azure, openai |
| GPT-5.1 | openai/gpt-5.1 | azure, openai |
| GPT-5.1 Chat Latest | openai/gpt-5.1-chat-latest | azure, openai |
| GPT-5.2 | openai/gpt-5.2 | azure, openai |
| GPT-5.2 Chat Latest | openai/gpt-5.2-chat-latest | azure, openai |
| GPT OSS 120B | openai/gpt-oss-120b | baseten, groq (Cerebras coming soon) |
LiveKit Inference
Use LiveKit Inference to access OpenAI models without a separate OpenAI API key.
Usage
To use OpenAI, use the LLM class from the inference module:
```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    llm=inference.LLM(
        model="openai/gpt-5-mini",
        provider="openai",
        extra_kwargs={"reasoning_effort": "low"},
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  llm: new inference.LLM({
    model: "openai/gpt-5-mini",
    provider: "openai",
    modelOptions: { reasoning_effort: "low" },
  }),
  // ... tts, stt, vad, turn_detection, etc.
});
```
Parameters
- `model` (string, Required): The model to use for the LLM. Must be a model from OpenAI.
- `provider` (string, Required): The provider to use for the LLM. Must be `openai` to use OpenAI models and other parameters.
- `extra_kwargs` (dict, Optional): Additional parameters to pass to the provider's Chat Completions API, such as `reasoning_effort` or `max_completion_tokens`. In Node.js this parameter is called `modelOptions`.
String descriptors
As a shortcut, you can also pass a model descriptor string directly to the llm argument in your AgentSession:
```python
from livekit.agents import AgentSession

session = AgentSession(
    llm="openai/gpt-4.1-mini",
    # ... tts, stt, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  llm: "openai/gpt-4.1-mini",
  // ... tts, stt, vad, turn_detection, etc.
});
```
Plugin
Use the OpenAI plugin to connect directly to OpenAI's API with your own API key.
The OpenAI plugin supports the Responses API, which provides support for OpenAI's provider tools (WebSearch, FileSearch, CodeInterpreter) and is the recommended approach for direct OpenAI usage. Use openai.responses.LLM() to access the Responses API. The Chat Completions API is available via openai.LLM() and is used for OpenAI-compatible endpoints (like openai.LLM.with_cerebras()). See API modes for more information.
Installation
Install the plugin from PyPI (Python) or npm (Node.js):

```shell
uv add "livekit-agents[openai]~=1.4"
```

```shell
pnpm add @livekit/agents-plugin-openai@1.x
```
Authentication
The OpenAI plugin requires an OpenAI API key.
Set OPENAI_API_KEY in your .env file.
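Your `.env` file should contain a line like the following (the key value is a placeholder):

```shell
OPENAI_API_KEY=<your-openai-api-key>
```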
Usage
Use OpenAI within an AgentSession or as a standalone LLM service. For example, you can use this LLM in the Voice AI quickstart.
```python
from livekit.plugins import openai

# Use Responses API (recommended)
session = AgentSession(
    llm=openai.responses.LLM(model="gpt-4.1"),
    # ... tts, stt, vad, turn_detection, etc.
)
```
```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

const session = new voice.AgentSession({
  llm: new openai.responses.LLM({ model: "gpt-4.1" }),
  // ... tts, stt, vad, turn_detection, etc.
});
```
API modes
The OpenAI plugin supports two API modes: Responses API and Chat Completions API.
Responses API (Recommended)
The Responses API is the recommended mode. It provides:
- Support for OpenAI's provider tools (WebSearch, FileSearch, CodeInterpreter)
- Better performance and features
- Access to the latest OpenAI capabilities
- Lower costs
Use openai.responses.LLM() to access the Responses API:
```python
from livekit.plugins import openai

# Use Responses API (recommended)
session = AgentSession(
    llm=openai.responses.LLM(model="gpt-4.1"),
    # ... tts, stt, vad, turn_detection, etc.
)
```
Chat Completions API
The Chat Completions API is available via openai.LLM(). This API mode is used for:

- OpenAI-compatible endpoints: Providers like Cerebras, Fireworks, Groq, etc. use openai.LLM.with_*() methods, which rely on the Chat Completions API format (see OpenAI-compatible endpoints)
- Legacy code compatibility: Existing code that uses openai.LLM() directly
For direct OpenAI platform usage, use openai.responses.LLM() instead of openai.LLM(). The Responses API provides better features and performance.
To use Chat Completions mode directly with OpenAI (not recommended for new projects):
```python
from livekit.plugins import openai

# Chat Completions API (use openai.responses.LLM() for new projects)
session = AgentSession(
    llm=openai.LLM(model="gpt-4.1"),
    # ... tts, stt, vad, turn_detection, etc.
)
```
```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

// Chat Completions API (use openai.responses.LLM() for new projects)
const session = new voice.AgentSession({
  llm: new openai.LLM({ model: "gpt-4o-mini" }),
  // ... tts, stt, vad, turn_detection, etc.
});
OpenAI-compatible endpoints
To use an OpenAI-compatible endpoint in Chat Completions mode, call the provider's with_*() method on openai.LLM. These providers include:
- Cerebras: openai.LLM.with_cerebras()
- Fireworks: openai.LLM.with_fireworks()
- Groq: openai.LLM.with_groq()
- Perplexity: openai.LLM.with_perplexity()
- Telnyx: openai.LLM.with_telnyx()
- Together AI: openai.LLM.with_together()
- xAI: openai.LLM.with_x_ai()
- DeepSeek: openai.LLM.with_deepseek()
These providers are built on the Chat Completions API format, so they use openai.LLM() (not openai.responses.LLM()). The with_*() methods automatically configure the correct API mode. See the individual provider documentation for specific usage examples.
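As a minimal sketch, a Groq-backed session might look like the following. The model name is illustrative, not prescriptive; check the provider's documentation for currently available models.

```python
from livekit.agents import AgentSession
from livekit.plugins import openai

# with_groq() targets Groq's OpenAI-compatible Chat Completions endpoint.
# The model name below is an assumption for illustration only.
session = AgentSession(
    llm=openai.LLM.with_groq(model="llama-3.3-70b-versatile"),
    # ... tts, stt, vad, turn_detection, etc.
)
```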
Parameters
This section describes some of the available parameters. See the plugin reference links in the Additional resources section for a complete list of all available parameters.
- `model` (string, Optional, default: `gpt-4.1`): The model to use for the LLM. For more information, see the OpenAI documentation.
- `temperature` (float, Optional, default: `0.8`): Controls the randomness of the model's output. Higher values, for example 0.8, make the output more random, while lower values, for example 0.2, make it more focused and deterministic. Valid values are between 0 and 2.
- `tool_choice` (ToolChoice | Literal['auto', 'required', 'none'], Optional, default: `auto`): Controls how the model uses tools. Set to 'auto' to let the model decide, 'required' to force tool usage, or 'none' to disable tool usage.
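A sketch combining these parameters, assuming the Responses API constructor accepts them as keyword arguments (the values shown are illustrative):

```python
from livekit.plugins import openai

# Illustrative values only; see the plugin reference for the full parameter list.
llm = openai.responses.LLM(
    model="gpt-4.1",
    temperature=0.6,      # slightly more deterministic than the default 0.8
    tool_choice="auto",   # let the model decide when to call tools
)
```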
Additional resources
The following resources provide more information about using OpenAI with LiveKit Agents.