
OpenAI LLM models

How to use OpenAI models with LiveKit Agents.


Overview

OpenAI models are available in LiveKit Agents through LiveKit Inference and the OpenAI plugin. With LiveKit Inference, your agent runs on LiveKit's infrastructure to minimize latency. No separate provider API key is required, and usage and rate limits are managed through LiveKit Cloud. Use the plugin instead if you want to manage your own billing and rate limits. Pricing for LiveKit Inference is available on the pricing page.

LiveKit Inference

Use LiveKit Inference to access OpenAI models without a separate OpenAI API key.

| Model name | Model ID | Providers |
| --- | --- | --- |
| GPT-4.1 | `openai/gpt-4.1` | azure, openai |
| GPT-4.1 mini | `openai/gpt-4.1-mini` | azure, openai |
| GPT-4.1 nano | `openai/gpt-4.1-nano` | azure, openai |
| GPT-4o | `openai/gpt-4o` | azure, openai |
| GPT-4o mini | `openai/gpt-4o-mini` | azure, openai |
| GPT-5 | `openai/gpt-5` | azure, openai |
| GPT-5 mini | `openai/gpt-5-mini` | azure, openai |
| GPT-5 nano | `openai/gpt-5-nano` | azure, openai |
| GPT-5.1 | `openai/gpt-5.1` | azure, openai |
| GPT-5.1 Chat | `openai/gpt-5.1-chat-latest` | azure, openai |
| GPT-5.2 | `openai/gpt-5.2` | azure, openai |
| GPT-5.2 Chat | `openai/gpt-5.2-chat-latest` | azure, openai |
| GPT-5.3 Chat | `openai/gpt-5.3-chat-latest` | azure, openai |
| GPT-5.4 | `openai/gpt-5.4` | azure, openai |
| GPT OSS 120B | `openai/gpt-oss-120b` | baseten, groq (Cerebras coming soon) |

Usage

To use OpenAI models, use the `LLM` class from the `inference` module. You can use this LLM in the Voice AI quickstart:

Python:

```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    llm=inference.LLM(
        model="openai/gpt-5-mini",
        provider="openai",
        extra_kwargs={
            "reasoning_effort": "low",
        },
    ),
    # ... tts, stt, vad, turn_handling, etc.
)
```

Node.js:

```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  llm: new inference.LLM({
    model: "openai/gpt-5-mini",
    provider: "openai",
    modelOptions: {
      reasoning_effort: "low",
    },
  }),
  // ... tts, stt, vad, turnHandling, etc.
});
```

Parameters

The following are parameters for configuring OpenAI models with LiveKit Inference. For model behavior parameters like temperature and reasoning_effort, see model parameters.

model
Required
string

The model to use for the LLM. Must be a model from OpenAI. See model IDs in the models list.

provider
Required
string

Must be openai to use OpenAI models and other parameters.

extra_kwargs
Optional
dict

Additional parameters to pass to the OpenAI Chat Completions API. See model parameters for supported fields.

In Node.js this parameter is called modelOptions.

Model parameters

Pass the following parameters inside extra_kwargs (Python) or modelOptions (Node.js). For more details about each parameter in the list, see Inference parameters.

| Parameter | Type | Default | Notes |
| --- | --- | --- | --- |
| `temperature` | float | 1 | Controls the randomness of the model's output. Valid range: 0-2. Not supported by reasoning models. |
| `top_p` | float | 1 | Alternative to temperature. Valid range: 0-1. Not supported by reasoning models. |
| `max_tokens` | int | | Maximum tokens to generate. Use `max_completion_tokens` for newer models. |
| `max_completion_tokens` | int | | Maximum tokens to generate, including reasoning tokens. Preferred over `max_tokens` for newer models. |
| `reasoning_effort` | `"low" \| "medium" \| "high"` | | Controls reasoning depth. Only supported by reasoning models (`o1`, `o3`, `o4`, `gpt-5` prefixes). |
| `frequency_penalty` | float | 0 | Reduces the model's likelihood to repeat the same line verbatim. Valid range: -2.0 to 2.0. Not supported by reasoning models. |
| `presence_penalty` | float | 0 | Increases the model's likelihood to talk about new topics. Valid range: -2.0 to 2.0. Not supported by reasoning models. |
| `seed` | int | | Enables deterministic sampling. The system makes a best effort to return the same result for identical requests. |
| `stop` | `str \| list[str]` | | Sequences that stop generation. Up to 4 sequences. |
| `n` | int | | Number of completions to generate. Not supported by reasoning models. |
| `logprobs` | bool | | Returns log probabilities of each output token. Not supported by reasoning models. |
| `top_logprobs` | int | | Number of most likely tokens to return at each position. Valid range: 0-20. Requires `logprobs: true`. Not supported by reasoning models. |
| `logit_bias` | `dict[str, int]` | | Adjusts likelihood of specified tokens appearing in the output. Not supported by reasoning models. |
| `parallel_tool_calls` | bool | | Whether the model can make multiple tool calls in a single response. |
| `tool_choice` | `ToolChoice \| Literal['auto', 'required', 'none']` | `"auto"` | Controls how the model uses tools. |
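The "not supported by reasoning models" notes in the table can be summarized in code. The sketch below is illustrative only: the helper names are hypothetical, not part of the SDK, and the reasoning-model check simply mirrors the prefixes listed in the table.

```python
# Illustrative sketch (not part of the LiveKit SDK): which extra_kwargs
# fields apply to a given model, based on the notes in the table above.
REASONING_PREFIXES = ("o1", "o3", "o4", "gpt-5")

# Parameters the table marks "Not supported by reasoning models".
NON_REASONING_ONLY = {
    "temperature", "top_p", "frequency_penalty", "presence_penalty",
    "n", "logprobs", "top_logprobs", "logit_bias",
}

def is_reasoning_model(model_id: str) -> bool:
    """True for reasoning-model IDs such as openai/gpt-5-mini."""
    name = model_id.split("/")[-1]  # strip the "openai/" namespace
    return name.startswith(REASONING_PREFIXES)

def filter_extra_kwargs(model_id: str, kwargs: dict) -> dict:
    """Drop fields the target model family does not accept."""
    if not is_reasoning_model(model_id):
        # reasoning_effort only applies to reasoning models
        return {k: v for k, v in kwargs.items() if k != "reasoning_effort"}
    return {k: v for k, v in kwargs.items() if k not in NON_REASONING_ONLY}

print(filter_extra_kwargs(
    "openai/gpt-5-mini",
    {"temperature": 0.7, "reasoning_effort": "low"},
))
# {'reasoning_effort': 'low'}
```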

String descriptors

As a shortcut, you can also pass a model ID descriptor string directly to the llm argument in your AgentSession:

Python:

```python
from livekit.agents import AgentSession

session = AgentSession(
    llm="openai/gpt-4.1-mini",
    # ... tts, stt, vad, turn_handling, etc.
)
```

Node.js:

```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  llm: "openai/gpt-4.1-mini",
  // ... tts, stt, vad, turnHandling, etc.
});
```

Plugin

LiveKit's plugin support for OpenAI lets you connect directly to OpenAI's API with your own API key.

Available in: Python | Node.js
OpenAI Responses API (Recommended)

The OpenAI plugin supports the Responses API, which provides support for OpenAI's provider tools (WebSearch, FileSearch, CodeInterpreter) and is the recommended approach for direct OpenAI usage. Use openai.responses.LLM() to access the Responses API. The Chat Completions API is available via openai.LLM() and is used for OpenAI-compatible endpoints (like openai.LLM.with_cerebras()). See API modes for more information.

Installation

Install the plugin:

```shell
# Python
uv add "livekit-agents[openai]~=1.4"

# Node.js
pnpm add @livekit/agents-plugin-openai@1.x
```

Authentication

The OpenAI plugin requires an OpenAI API key.

Set OPENAI_API_KEY in your .env file.
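For example, a minimal `.env` entry (placeholder value shown):

```shell
# .env
OPENAI_API_KEY=<your-openai-api-key>
```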

Usage

Use OpenAI within an AgentSession or as a standalone LLM service. For example, you can use this LLM in the Voice AI quickstart.

Python:

```python
from livekit.plugins import openai

# Use Responses API (recommended)
session = AgentSession(
    llm=openai.responses.LLM(
        model="gpt-4.1",
    ),
    # ... tts, stt, vad, turn_handling, etc.
)
```

Node.js:

```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

const session = new voice.AgentSession({
  llm: new openai.responses.LLM({
    model: "gpt-4.1",
  }),
  // ... tts, stt, vad, turnHandling, etc.
});
```

API modes

The OpenAI plugin supports two API modes: Responses API and Chat Completions API.

Responses API (Recommended)

The Responses API is the recommended mode. It provides:

  • Support for OpenAI's provider tools (WebSearch, FileSearch, CodeInterpreter)
  • Better performance and features
  • Access to the latest OpenAI capabilities
  • Lower costs

Use openai.responses.LLM() to access the Responses API:

Python:

```python
from livekit.plugins import openai

# Use Responses API (recommended)
session = AgentSession(
    llm=openai.responses.LLM(model="gpt-4.1"),
    # ... tts, stt, vad, turn_handling, etc.
)
```

Node.js:

```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

// Use Responses API (recommended)
const session = new voice.AgentSession({
  llm: new openai.responses.LLM({ model: "gpt-4.1" }),
  // ... tts, stt, vad, turnHandling, etc.
});
```

Chat Completions API

The Chat Completions API is available via openai.LLM(). This API mode is used for:

  • OpenAI-compatible endpoints: Providers like Cerebras, Fireworks, Groq, etc. use openai.LLM.with_*() methods which rely on the Chat Completions API format (see OpenAI-compatible endpoints)
  • Legacy code compatibility: Existing code that uses openai.LLM() directly
For direct OpenAI usage

For direct OpenAI platform usage, use openai.responses.LLM() instead of openai.LLM(). The Responses API provides better features and performance.

To use Chat Completions mode directly with OpenAI (not recommended for new projects):

Python:

```python
from livekit.plugins import openai

# Chat Completions API (use openai.responses.LLM() for new projects)
session = AgentSession(
    llm=openai.LLM(model="gpt-4.1"),
    # ... tts, stt, vad, turn_handling, etc.
)
```

Node.js:

```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

// Chat Completions API (use openai.responses.LLM() for new projects)
const session = new voice.AgentSession({
  llm: new openai.LLM({ model: "gpt-4.1" }),
  // ... tts, stt, vad, turnHandling, etc.
});
```

OpenAI-compatible endpoints

To connect to an OpenAI-compatible endpoint, use openai.LLM() through the provider's with_*() method, which configures Chat Completions mode for you. These providers include:

  • Cerebras: openai.LLM.with_cerebras()
  • Fireworks: openai.LLM.with_fireworks()
  • Groq: openai.LLM.with_groq()
  • Perplexity: openai.LLM.with_perplexity()
  • Telnyx: openai.LLM.with_telnyx()
  • Together AI: openai.LLM.with_together()
  • xAI: openai.LLM.with_x_ai()
  • DeepSeek: openai.LLM.with_deepseek()

These providers are built on the Chat Completions API format, so they use openai.LLM() (not openai.responses.LLM()). The with_*() methods automatically configure the correct API mode. See the individual provider documentation for specific usage examples.
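The reason one plugin class covers all of these providers is that they share the Chat Completions wire format: conceptually, only the base URL and credentials change. The stand-alone sketch below illustrates this (the helper is hypothetical, and the base URLs are shown for illustration; the with_*() methods manage the real endpoints for you):

```python
import json

def chat_completions_request(base_url: str, model: str, messages: list) -> dict:
    """Build the URL and JSON body for a Chat Completions call.

    Illustrative only: every OpenAI-compatible provider accepts this
    same request shape at its own base URL.
    """
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "body": json.dumps({"model": model, "messages": messages}),
    }

messages = [{"role": "user", "content": "Hello"}]
openai_req = chat_completions_request("https://api.openai.com/v1", "gpt-4.1", messages)
groq_req = chat_completions_request("https://api.groq.com/openai/v1", "openai/gpt-oss-120b", messages)

# Same path and body shape; only the host (and model ID) differ.
print(openai_req["url"])  # https://api.openai.com/v1/chat/completions
```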

Parameters

This section describes some of the available parameters. See the plugin reference links in the Additional resources section for a complete list of all available parameters.

model
string
Default: gpt-4.1

The model to use for the LLM. For more information, see the OpenAI documentation.

temperature
float
Default: 0.8

Sampling temperature that controls the randomness of the model's output. Higher values make the output more random, while lower values make it more focused and deterministic. Range of valid values can vary by model.

Valid values are between 0 and 2.

tool_choice
ToolChoice | Literal['auto', 'required', 'none']
Default: auto

Controls how the model uses tools. String options are as follows:

  • 'auto': Let the model decide.
  • 'required': Force tool usage.
  • 'none': Disable tool usage.
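The three string options can be summed up in a few lines. This is a hypothetical helper that only illustrates the semantics above; it is not part of the plugin:

```python
def may_call_tools(tool_choice: str, model_prefers_tool: bool) -> bool:
    """Illustrates the three string options for tool_choice."""
    if tool_choice == "none":
        return False               # tool usage disabled
    if tool_choice == "required":
        return True                # a tool call is forced
    return model_prefers_tool      # "auto": the model decides

print(may_call_tools("required", model_prefers_tool=False))  # True
```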

Provider tools

Available in: Python only

OpenAI supports the following provider tools that enable the model to use built-in capabilities executed on the model server. These tools can be used alongside function tools defined in your agent's codebase. Provider tools require the Responses API.

| Tool | Description | Parameters |
| --- | --- | --- |
| `WebSearch` | Search the web for up-to-date information. | `search_context_size` (`low`, `medium`, `high`), `user_location`, `filters` |
| `FileSearch` | Search uploaded document collections via vector stores. | `vector_store_ids` (required), `filters`, `max_num_results`, `ranking_options` |
| `CodeInterpreter` | Execute Python code in a sandboxed environment. | `container` |
```python
from livekit.plugins import openai

agent = MyAgent(
    llm=openai.responses.LLM(model="gpt-4.1"),
    tools=[openai.tools.WebSearch()],  # replace with any supported provider tool
)
```

Additional resources

The following resources provide more information about using OpenAI with LiveKit Agents.