Overview
ElevenLabs text-to-speech is available in LiveKit Agents through LiveKit Inference and the ElevenLabs plugin. With LiveKit Inference, your agent runs on LiveKit's infrastructure to minimize latency. No separate provider API key is required, and usage and rate limits are managed through LiveKit Cloud. Use the plugin instead if you want to manage your own billing and rate limits. Pricing for LiveKit Inference is available on the pricing page.
LiveKit Inference
Use LiveKit Inference to access ElevenLabs TTS without a separate ElevenLabs API key.
| Model name | Model ID | Languages |
|---|---|---|
| Eleven Flash v2 | `elevenlabs/eleven_flash_v2` | en |
| Eleven Flash v2.5 | `elevenlabs/eleven_flash_v2_5` | en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru, hu, no, vi |
| Eleven Multilingual v2 | `elevenlabs/eleven_multilingual_v2` | en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru |
| Eleven Turbo v2 | `elevenlabs/eleven_turbo_v2` | en |
| Eleven Turbo v2.5 | `elevenlabs/eleven_turbo_v2_5` | en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru, hu, no, vi |
Usage
To use ElevenLabs, create a `TTS` instance from the `inference` module:
```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    tts=inference.TTS(
        model="elevenlabs/eleven_turbo_v2_5",
        voice="Xb7hH8MSUJpSbSDYk0k2",
        language="en",
    ),
    # ... llm, stt, vad, turn_handling, etc.
)
```

```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  tts: new inference.TTS({
    model: "elevenlabs/eleven_turbo_v2_5",
    voice: "Xb7hH8MSUJpSbSDYk0k2",
    language: "en",
  }),
  // ... llm, stt, vad, turnHandling, etc.
});
```
Parameters
- `model` (string): The model ID from the models list.
- `voice` (string): See voices for guidance on selecting a voice.
- `language` (LanguageCode): Language code for the input text. If not set, the model default applies.
- `extra_kwargs` (dict): Additional parameters to pass to the ElevenLabs TTS API. See model parameters for supported fields. In Node.js this parameter is called `modelOptions`.
Model parameters
Pass the following parameters inside extra_kwargs (Python) or modelOptions (Node.js):
| Parameter | Type | Default | Notes |
|---|---|---|---|
| `inactivity_timeout` | int | 60 | Seconds of inactivity before the session closes. Valid range: 5–180. |
| `apply_text_normalization` | `"auto" \| "off" \| "on"` | `"auto"` | Whether to normalize text before synthesis (expand numbers, abbreviations, etc.). |
| `auto_mode` | bool | | Enable auto mode for optimized streaming. Reduces latency by automatically determining chunk sizes. |
| `enable_logging` | bool | | Whether to enable request logging on the ElevenLabs side. |
| `enable_ssml_parsing` | bool | | Whether to parse SSML tags in the input text. Allows custom pronunciation using SSML. |
| `sync_alignment` | bool | | Whether to return word-level alignment data with each audio chunk. |
| `language_code` | str | | Language code for the output audio (for example, `"en"`, `"fr"`). Overrides the top-level `language` parameter for the ElevenLabs API call. |
| `stability` | float | | Voice stability. Higher values produce more consistent output; lower values allow more expressive variation. Valid range: 0–1. |
| `similarity_boost` | float | | How closely the output should match the original voice. Higher values increase similarity but might introduce artifacts. Valid range: 0–1. |
| `style` | float | | Style strength of the voice. Valid range: 0–1. |
| `speed` | float | | Speaking speed. Valid range: 0.25–4. |
| `use_speaker_boost` | bool | | Whether to boost speaker clarity and target speaker similarity. |
| `chunk_length_schedule` | list[float] | | List of chunk sizes in characters that controls when audio is flushed during streaming. Valid values per entry: 50–500. |
| `preferred_alignment` | str | | Preferred alignment mode for word-level timestamps. |
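As a sketch, these parameters are passed as a plain dictionary. The values below are illustrative (not defaults), and the commented-out call shows where the dictionary would be supplied:

```python
# Illustrative extra_kwargs for the inference TTS (Python); in Node.js the
# same keys are passed as modelOptions. Values here are examples, not defaults.
extra_kwargs = {
    "inactivity_timeout": 120,           # seconds; valid range 5-180
    "apply_text_normalization": "auto",  # "auto" | "off" | "on"
    "stability": 0.5,                    # 0-1; higher = more consistent delivery
    "similarity_boost": 0.75,            # 0-1; higher = closer to the original voice
    "speed": 1.1,                        # 0.25-4
    "chunk_length_schedule": [120, 160, 250, 290],  # each entry 50-500 characters
}

# The dictionary would then be passed alongside model and voice, e.g.:
# tts = inference.TTS(
#     model="elevenlabs/eleven_turbo_v2_5",
#     voice="Xb7hH8MSUJpSbSDYk0k2",
#     extra_kwargs=extra_kwargs,
# )

# Sanity checks mirroring the documented ranges:
assert 5 <= extra_kwargs["inactivity_timeout"] <= 180
assert 0.0 <= extra_kwargs["stability"] <= 1.0
assert 0.25 <= extra_kwargs["speed"] <= 4.0
assert all(50 <= n <= 500 for n in extra_kwargs["chunk_length_schedule"])
```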
Voices
LiveKit Inference supports all of the default voices available in the ElevenLabs API. You can explore them in the ElevenLabs voice library (free account required); to use a voice, copy its ID into your LiveKit agent session.
Custom and community ElevenLabs voices, including voice cloning, are not yet supported in LiveKit Inference. To use these voices, create your own ElevenLabs account and use the ElevenLabs plugin for LiveKit Agents instead.
String descriptors
As a shortcut, you can also pass a descriptor with the model ID and voice directly to the tts argument in your AgentSession:
```python
from livekit.agents import AgentSession

session = AgentSession(
    tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
    # ... llm, stt, vad, turn_handling, etc.
)
```

```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  tts: "elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
  // ... llm, stt, vad, turnHandling, etc.
});
```
Plugin
LiveKit's plugin support for ElevenLabs lets you connect directly to ElevenLabs' TTS API with your own API key.
Installation
Install the plugin from PyPI (Python) or npm (Node.js):

```shell
uv add "livekit-agents[elevenlabs]~=1.4"
```

```shell
pnpm add @livekit/agents-plugin-elevenlabs@1.x
```
Authentication
The ElevenLabs plugin requires an ElevenLabs API key.
Set `ELEVEN_API_KEY` in your `.env` file.
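For example, your `.env` file would contain a line like the following (the value shown is a placeholder, not a real key):

```shell
# .env
ELEVEN_API_KEY=your-elevenlabs-api-key
```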
Usage
Use ElevenLabs TTS within an AgentSession or as a standalone speech generator. For example, you can use this TTS in the Voice AI quickstart.
```python
from livekit.plugins import elevenlabs

session = AgentSession(
    tts=elevenlabs.TTS(
        voice_id="ODq5zmih8GrVes37Dizd",
        model="eleven_multilingual_v2",
    ),
    # ... llm, stt, etc.
)
```

```typescript
import { voice } from '@livekit/agents';
import * as elevenlabs from '@livekit/agents-plugin-elevenlabs';

const session = new voice.AgentSession({
  tts: new elevenlabs.TTS({
    voice: { id: "ODq5zmih8GrVes37Dizd" },
    model: "eleven_multilingual_v2",
  }),
  // ... llm, stt, etc.
});
```
Parameters
This section describes some of the parameters you can set when you create an ElevenLabs TTS. See the plugin reference links in the Additional resources section for a complete list of all available parameters.
- `model` (string, default: `eleven_flash_v2_5`): ID of the model to use for generation. To learn more, see the ElevenLabs documentation.
- `voice_id` (string, default: `EXAVITQu4vr4xnSDxMaL`): ID of the voice to use for generation. To learn more, see the ElevenLabs documentation.
- `voice_settings` (VoiceSettings): Voice configuration, including the fields `stability` (float), `similarity_boost` (float), `style` (float), `use_speaker_boost` (bool), and `speed` (float). To learn more, see the ElevenLabs documentation.
- `language` (LanguageCode, default: `en`): Language code for the output audio. To learn more, see the ElevenLabs documentation.
- `streaming_latency` (int, default: `3`): Latency in seconds for streaming.
- `enable_ssml_parsing` (bool, default: `false`): Enable Speech Synthesis Markup Language (SSML) parsing for input text. Set to `true` to customize pronunciation using SSML.
- `chunk_length_schedule` (list[int], default: `[80, 120, 200, 260]`): Schedule for chunk lengths. Valid values range from 50 to 500.
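The nested `voice_settings` fields can be sketched with a simple container. The dataclass below is a stand-in written for illustration; the real plugin provides its own `VoiceSettings` type, whose exact import path may vary by version:

```python
from dataclasses import dataclass

# Stand-in for the plugin's VoiceSettings, using the field names listed above.
@dataclass
class VoiceSettings:
    stability: float = 0.5          # 0-1; higher = more consistent output
    similarity_boost: float = 0.75  # 0-1; closeness to the original voice
    style: float = 0.0              # 0-1; style strength
    use_speaker_boost: bool = True
    speed: float = 1.0              # 0.25-4

settings = VoiceSettings(stability=0.7, similarity_boost=0.8)
# The settings would then be passed to the plugin, e.g.:
# tts = elevenlabs.TTS(voice_id="EXAVITQu4vr4xnSDxMaL", voice_settings=settings)
assert 0.0 <= settings.stability <= 1.0
```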
Customizing pronunciation
ElevenLabs supports custom pronunciation for specific words or phrases via SSML phoneme tags. This is useful for ensuring correct pronunciation of words that are missing from the voice's lexicon. Set `enable_ssml_parsing` to enable this behavior. To learn more, see Pronunciation.
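As a sketch, a phoneme tag embedded in the input text looks like the following (the Arpabet spellings are illustrative, and the tag only takes effect when SSML parsing is enabled):

```python
# Input text with SSML phoneme tags overriding the default pronunciation.
# Requires enable_ssml_parsing=True on the TTS instance.
text = (
    'I say <phoneme alphabet="cmu-arpabet" ph="T AH M AA T OW">tomato</phoneme>, '
    'you say <phoneme alphabet="cmu-arpabet" ph="T AH M EY T OW">tomato</phoneme>.'
)
# session.say(text)  # each occurrence is spoken with its specified phonemes
assert text.count("<phoneme") == 2
```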
Transcription timing
ElevenLabs TTS supports aligned transcription forwarding, which improves transcription synchronization in your frontend. Set `use_tts_aligned_transcript=True` in your `AgentSession` configuration to enable this feature. To learn more, see the docs.
Additional resources
The following resources provide more information about using ElevenLabs with LiveKit Agents.