OpenAI Realtime API integration guide

How to use the OpenAI Realtime API with LiveKit Agents.

OpenAI Playground

Experiment with OpenAI's Realtime API in the playground with personalities like the Snarky Teenager or Opera Singer.

Overview

OpenAI's Realtime API enables low-latency, multimodal interactions with realtime audio and text processing. Use LiveKit's OpenAI plugin to create an agent that uses the Realtime API.

Note

Using Azure OpenAI? See our Azure OpenAI Realtime API guide.

Quick reference

This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.

Installation

Install the OpenAI plugin from PyPI:

pip install "livekit-agents[openai]~=1.0rc"

Authentication

The OpenAI plugin requires an OpenAI API key.

Set OPENAI_API_KEY in your .env file.
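For example, the .env entry looks like the following (the value shown is a placeholder, not a real key):

```shell
OPENAI_API_KEY=<your-openai-api-key>
```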

Usage

Use the OpenAI Realtime API within an AgentSession. For example, you can use it in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.realtime.RealtimeModel(),
)

Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference.

model (str) - Optional - Default: 'gpt-4o-realtime-preview'

ID of the Realtime model to use. For a list of available models, see Models.

voice (str) - Optional - Default: 'alloy'

Voice to use for speech generation. For a list of available voices, see Voice options.

temperature (float) - Optional - Default: 0.8

Valid values are between 0.6 and 1.2. To learn more, see temperature.

turn_detection (TurnDetection | None) - Optional

Configuration for turn detection. Valid options are Server VAD or Semantic VAD. To learn more about configuration options, see turn_detection.

To learn more about turn detection, see Turn detection.
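As a sketch, the parameters above can be combined in a single RealtimeModel configuration. The TurnDetection import path below comes from the OpenAI Python SDK and may vary between SDK versions; the specific field values are illustrative, not required:

```python
from openai.types.beta.realtime.session import TurnDetection

from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.realtime.RealtimeModel(
        voice="alloy",          # one of the available Realtime voices
        temperature=0.8,        # valid range: 0.6 to 1.2
        turn_detection=TurnDetection(
            type="semantic_vad",      # or "server_vad"
            eagerness="auto",
            create_response=True,
            interrupt_response=True,
        ),
    ),
)
```

Passing turn_detection=None disables server-side turn detection, in which case turn handling falls to the agent session itself.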

Additional resources

The following resources provide more information about using OpenAI with LiveKit Agents.