OpenAI Realtime API integration guide

How to use the OpenAI Realtime API with LiveKit Agents.

OpenAI Playground

Experiment with OpenAI's Realtime API in the playground with personalities like the Snarky Teenager or Opera Singer.

Overview

OpenAI's Realtime API enables low-latency, multimodal interactions with realtime audio and text processing. Use LiveKit's OpenAI plugin to create an agent that uses the Realtime API.

Note

Using Azure OpenAI? See our Azure OpenAI Realtime API guide.

Quick reference

This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.

Installation

Install the OpenAI plugin from PyPI:

pip install "livekit-agents[openai]~=1.0"

Authentication

The OpenAI plugin requires an OpenAI API key.

Set OPENAI_API_KEY in your .env file.
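For example, your .env file would contain a line like the following (the value is a placeholder for your own key):

OPENAI_API_KEY=<your-openai-api-key>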

Usage

Use the OpenAI Realtime API within an AgentSession. For example, you can use it in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.realtime.RealtimeModel(),
)
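In a complete agent, the session is created and started inside your worker entrypoint. The sketch below follows the general pattern from the Voice AI quickstart; the Assistant class, its instructions, and the entrypoint wiring are illustrative and may differ from your project:

from livekit import agents
from livekit.agents import Agent, AgentSession
from livekit.plugins import openai

class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful voice AI assistant.")

async def entrypoint(ctx: agents.JobContext):
    # Connect this worker job to the LiveKit room.
    await ctx.connect()

    # Create a session backed by the OpenAI Realtime API.
    session = AgentSession(
        llm=openai.realtime.RealtimeModel(),
    )

    # Start the agent in the room for this job.
    await session.start(room=ctx.room, agent=Assistant())

if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))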

Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference.

model (str) - Optional - Default: 'gpt-4o-realtime-preview'

ID of the Realtime model to use. For a list of available models, see Models.

voice (str) - Optional - Default: 'alloy'

Voice to use for speech generation. For a list of available voices, see Voice options.

temperature (float) - Optional - Default: 0.8

Sampling temperature for response generation. Valid values are between 0.6 and 1.2. To learn more, see temperature.

turn_detection (TurnDetection | None) - Optional

Configuration for turn detection. See Turn detection below for more information.
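For example, a session that overrides the default voice and temperature might be configured as follows (the values shown are illustrative):

from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.realtime.RealtimeModel(
        voice="coral",
        temperature=0.7,
    ),
)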

Turn detection

OpenAI's Realtime API includes voice activity detection (VAD) to automatically detect when a user has started or stopped speaking. This feature is enabled by default.

There are two modes for VAD:

  • Server VAD (default): Uses periods of silence to automatically chunk the audio.
  • Semantic VAD: Uses a semantic classifier to detect when the user has finished speaking based on their words.

Server VAD

Server VAD is the default mode and can be configured with the following properties:

from livekit.agents import AgentSession
from livekit.plugins.openai import realtime
from openai.types.beta.realtime.session import TurnDetection

session = AgentSession(
    llm=realtime.RealtimeModel(
        turn_detection=TurnDetection(
            type="server_vad",
            threshold=0.5,
            prefix_padding_ms=300,
            silence_duration_ms=500,
            create_response=True,
            interrupt_response=True,
        )
    ),
)

  • threshold: Higher values require louder audio to trigger speech detection, which works better in noisy environments.
  • prefix_padding_ms: Amount of audio (in milliseconds) to include before detected speech.
  • silence_duration_ms: Duration of silence (in milliseconds) required to detect the end of speech; shorter values yield faster turn detection.

Semantic VAD

Semantic VAD uses a classifier to determine when the user is done speaking based on their words. This mode is less likely to interrupt users mid-sentence or chunk transcripts prematurely.

from livekit.agents import AgentSession
from livekit.plugins.openai import realtime
from openai.types.beta.realtime.session import TurnDetection

session = AgentSession(
    llm=realtime.RealtimeModel(
        turn_detection=TurnDetection(
            type="semantic_vad",
            eagerness="auto",
            create_response=True,
            interrupt_response=True,
        )
    ),
)

The eagerness property controls how quickly the model responds:

  • auto (default) - Equivalent to medium.
  • low - Lets users take their time speaking.
  • high - Chunks audio as soon as possible.
  • medium - Balanced approach.

For more information about turn detection in general, see the Turn detection guide.

Additional resources

The following resources provide more information about using OpenAI with LiveKit Agents.