
ElevenLabs TTS

How to use ElevenLabs TTS with LiveKit Agents.


Overview

ElevenLabs text-to-speech is available in LiveKit Agents through LiveKit Inference and the ElevenLabs plugin. With LiveKit Inference, your agent runs on LiveKit's infrastructure to minimize latency. No separate provider API key is required, and usage and rate limits are managed through LiveKit Cloud. Use the plugin instead if you want to manage your own billing and rate limits. Pricing for LiveKit Inference is available on the pricing page.

LiveKit Inference

Use LiveKit Inference to access ElevenLabs TTS without a separate ElevenLabs API key.

| Model name | Model ID | Languages |
| --- | --- | --- |
| Eleven Flash v2 | `elevenlabs/eleven_flash_v2` | en |
| Eleven Flash v2.5 | `elevenlabs/eleven_flash_v2_5` | en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru, hu, no, vi |
| Eleven Multilingual v2 | `elevenlabs/eleven_multilingual_v2` | en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru |
| Eleven Turbo v2 | `elevenlabs/eleven_turbo_v2` | en |
| Eleven Turbo v2.5 | `elevenlabs/eleven_turbo_v2_5` | en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru, hu, no, vi |

Usage

To use ElevenLabs TTS, pass a `TTS` instance from the `inference` module to your `AgentSession`:

```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    tts=inference.TTS(
        model="elevenlabs/eleven_turbo_v2_5",
        voice="Xb7hH8MSUJpSbSDYk0k2",
        language="en",
    ),
    # ... llm, stt, vad, turn_handling, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  tts: new inference.TTS({
    model: 'elevenlabs/eleven_turbo_v2_5',
    voice: 'Xb7hH8MSUJpSbSDYk0k2',
    language: 'en',
  }),
  // ... llm, stt, vad, turnHandling, etc.
});
```

Parameters

model
Required
string

The model ID from the models list.

voice
Required
string

See voices for guidance on selecting a voice.

language
LanguageCode

Language code for the input text. If not set, the model default applies.

extra_kwargs
dict

Additional parameters to pass to the ElevenLabs TTS API. See model parameters for supported fields.

In Node.js this parameter is called modelOptions.

Model parameters

Pass the following parameters inside extra_kwargs (Python) or modelOptions (Node.js):

| Parameter | Type | Default | Notes |
| --- | --- | --- | --- |
| `inactivity_timeout` | int | 60 | Seconds of inactivity before the session closes. Valid range: 5–180. |
| `apply_text_normalization` | `"auto"` \| `"off"` \| `"on"` | `"auto"` | Whether to normalize text before synthesis (expand numbers, abbreviations, etc.). |
| `auto_mode` | bool | | Enable auto mode for optimized streaming. Reduces latency by automatically determining chunk sizes. |
| `enable_logging` | bool | | Whether to enable request logging on the ElevenLabs side. |
| `enable_ssml_parsing` | bool | | Whether to parse SSML tags in the input text. Allows custom pronunciation using SSML. |
| `sync_alignment` | bool | | Whether to return word-level alignment data with each audio chunk. |
| `language_code` | str | | Language code for the output audio (for example, `"en"`, `"fr"`). Overrides the top-level `language` parameter for the ElevenLabs API call. |
| `stability` | float | | Voice stability. Higher values produce more consistent output; lower values allow more expressive variation. Valid range: 0–1. |
| `similarity_boost` | float | | How closely the output should match the original voice. Higher values increase similarity but might introduce artifacts. Valid range: 0–1. |
| `style` | float | | Style strength of the voice. Valid range: 0–1. |
| `speed` | float | | Speaking speed. Valid range: 0.25–4. |
| `use_speaker_boost` | bool | | Whether to boost speaker clarity and target speaker similarity. |
| `chunk_length_schedule` | list[float] | | List of chunk sizes in characters that controls when audio is flushed during streaming. Valid values per entry: 50–500. |
| `preferred_alignment` | str | | Preferred alignment mode for word-level timestamps. |
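Because out-of-range values are rejected by the API, it can be useful to check the numeric limits above before starting a session. The helper below is an illustrative sketch only (not part of the LiveKit or ElevenLabs APIs), validating a dict of `extra_kwargs` against the documented ranges:

```python
# Illustrative helper (not a LiveKit API): validate extra_kwargs values
# against the ranges documented in the model parameters table above.
RANGES = {
    "inactivity_timeout": (5, 180),
    "stability": (0.0, 1.0),
    "similarity_boost": (0.0, 1.0),
    "style": (0.0, 1.0),
    "speed": (0.25, 4.0),
}

def validate_extra_kwargs(kwargs: dict) -> list[str]:
    """Return a list of error messages for out-of-range values."""
    errors = []
    for name, (lo, hi) in RANGES.items():
        if name in kwargs and not (lo <= kwargs[name] <= hi):
            errors.append(f"{name}={kwargs[name]} outside [{lo}, {hi}]")
    # Each chunk size in the schedule must fall within 50-500 characters.
    for size in kwargs.get("chunk_length_schedule", []):
        if not (50 <= size <= 500):
            errors.append(f"chunk size {size} outside [50, 500]")
    return errors
```

An empty list means the values are safe to pass as `extra_kwargs` (Python) or `modelOptions` (Node.js).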

Voices

LiveKit Inference supports all of the default voices available in the ElevenLabs API. You can explore the available voices in the ElevenLabs voice library (free account required), and use the voice by copying its ID into your LiveKit agent session.

Custom & community voices unavailable

Custom and community ElevenLabs voices, including voice cloning, are not yet supported in LiveKit Inference. To use these voices, create your own ElevenLabs account and use the ElevenLabs plugin for LiveKit Agents instead.

The following is a small sample of the ElevenLabs voices available in LiveKit Inference.

  • Alice (🇬🇧): Clear and engaging, friendly British woman
  • Chris (🇺🇸): Natural and real American male
  • Eric (🇲🇽): A smooth tenor Mexican male
  • Jessica (🇺🇸): Young and popular, playful American female

String descriptors

As a shortcut, you can also pass a descriptor with the model ID and voice directly to the tts argument in your AgentSession:

```python
from livekit.agents import AgentSession

session = AgentSession(
    tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
    # ... llm, stt, vad, turn_handling, etc.
)
```
```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  tts: 'elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2',
  // ... llm, stt, vad, turnHandling, etc.
});
```

Plugin

LiveKit's plugin support for ElevenLabs lets you connect directly to ElevenLabs' TTS API with your own API key.

Available in
Python
|
Node.js

Installation

Install the plugin from PyPI (Python) or npm (Node.js):

```shell
# Python
uv add "livekit-agents[elevenlabs]~=1.4"

# Node.js
pnpm add @livekit/agents-plugin-elevenlabs@1.x
```

Authentication

The ElevenLabs plugin requires an ElevenLabs API key.

Set ELEVEN_API_KEY in your .env file.
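For example, your `.env` file might contain (the value shown is a placeholder):

```shell
# .env (placeholder value; substitute your own ElevenLabs API key)
ELEVEN_API_KEY=your-elevenlabs-api-key
```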

Usage

Use ElevenLabs TTS within an AgentSession or as a standalone speech generator. For example, you can use this TTS in the Voice AI quickstart.

```python
from livekit.agents import AgentSession
from livekit.plugins import elevenlabs

session = AgentSession(
    tts=elevenlabs.TTS(
        voice_id="ODq5zmih8GrVes37Dizd",
        model="eleven_multilingual_v2",
    ),
    # ... llm, stt, etc.
)
```
```typescript
import { voice } from '@livekit/agents';
import * as elevenlabs from '@livekit/agents-plugin-elevenlabs';

const session = new voice.AgentSession({
  tts: new elevenlabs.TTS({
    voice: { id: 'ODq5zmih8GrVes37Dizd' },
    model: 'eleven_multilingual_v2',
  }),
  // ... llm, stt, etc.
});
```

Parameters

This section describes some of the parameters you can set when you create an ElevenLabs TTS. See the plugin reference links in the Additional resources section for a complete list of all available parameters.

model
string
Default: eleven_flash_v2_5

ID of the model to use for generation. To learn more, see the ElevenLabs documentation.

voice_id
string
Default: EXAVITQu4vr4xnSDxMaL

ID of the voice to use for generation. To learn more, see the ElevenLabs documentation.

voice_settings
VoiceSettings

Voice configuration. To learn more, see the ElevenLabs documentation.

  • stability (float)
  • similarity_boost (float)
  • style (float)
  • use_speaker_boost (bool)
  • speed (float)
language
LanguageCode
Default: en

Language code for the output audio. To learn more, see the ElevenLabs documentation.

streaming_latency
int
Default: 3

Latency optimization level for streaming; higher values reduce latency at a possible cost to audio quality.

enable_ssml_parsing
bool
Default: false

Enable Speech Synthesis Markup Language (SSML) parsing for input text. Set to true to customize pronunciation using SSML.

chunk_length_schedule
list[int]
Default: [80, 120, 200, 260]

Schedule for chunk lengths. Valid values range from 50 to 500.
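As a sketch of how the schedule drives flushing: the synthesizer buffers incoming text and emits audio once the buffer reaches the next size in the schedule. The simulation below is illustrative only, and assumes the last schedule entry repeats once the schedule is exhausted:

```python
def flush_sizes(total_chars: int, schedule=(80, 120, 200, 260)) -> list[int]:
    """Simulate how many characters each flushed chunk contains.

    Illustrative assumption: after the schedule is exhausted, the
    last entry repeats for all remaining text.
    """
    chunks, i, remaining = [], 0, total_chars
    while remaining > 0:
        # Target size for this chunk; clamp to the last schedule entry.
        target = schedule[min(i, len(schedule) - 1)]
        chunks.append(min(target, remaining))
        remaining -= chunks[-1]
        i += 1
    return chunks
```

Smaller early entries mean the first audio arrives sooner; larger later entries reduce request overhead once playback is underway.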

Customizing pronunciation

ElevenLabs supports custom pronunciation for specific words or phrases with SSML phoneme tags. This is useful to ensure correct pronunciation of certain words, even when missing from the voice's lexicon. To learn more, see Pronunciation.
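For example, with `enable_ssml_parsing` set to true, a phoneme tag fixes the pronunciation of a single word. The helper below is an illustrative sketch (the word and Arpabet transcription are examples, not from the source):

```python
def phoneme_tag(word: str, ph: str, alphabet: str = "cmu-arpabet") -> str:
    """Wrap a word in an SSML phoneme tag to force its pronunciation."""
    return f'<phoneme alphabet="{alphabet}" ph="{ph}">{word}</phoneme>'

# Force the past-tense pronunciation of "read" ("red", not "reed").
text = "Yesterday I " + phoneme_tag("read", "R EH D") + " the report."
```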

Transcription timing

ElevenLabs TTS supports aligned transcription forwarding, which improves transcription synchronization in your frontend. Set use_tts_aligned_transcript=True in your AgentSession configuration to enable this feature. To learn more, see the docs.

Additional resources

The following resources provide more information about using ElevenLabs with LiveKit Agents.