
ElevenLabs TTS

How to use ElevenLabs TTS with LiveKit Agents.


Overview

ElevenLabs text-to-speech is available in LiveKit Agents through LiveKit Inference and the ElevenLabs plugin. Pricing for LiveKit Inference is available on the pricing page.

Model ID: Languages

elevenlabs/eleven_flash_v2: en
elevenlabs/eleven_flash_v2_5: en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru, hu, no, vi
elevenlabs/eleven_turbo_v2: en
elevenlabs/eleven_turbo_v2_5: en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru, hu, no, vi
elevenlabs/eleven_multilingual_v2: en, ja, zh, de, hi, fr, ko, pt, it, es, id, nl, tr, fil, pl, sv, bg, ro, ar, cs, el, fi, hr, ms, sk, da, ta, uk, ru

LiveKit Inference

Use LiveKit Inference to access ElevenLabs TTS without a separate ElevenLabs API key.

Usage

To use ElevenLabs, pass a descriptor with the model and voice to the tts argument in your AgentSession:

from livekit.agents import AgentSession

session = AgentSession(
    tts="elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
    # ... llm, stt, vad, turn_detection, etc.
)
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  tts: "elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
  // ... llm, stt, vad, turn_detection, etc.
});
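The string descriptor above is shorthand for a model ID and a voice ID joined by a colon. A minimal sketch of how such a descriptor decomposes (pure Python, no LiveKit dependency):

```python
# A LiveKit Inference TTS descriptor has the form "<provider>/<model>:<voice_id>".
descriptor = "elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2"

model, _, voice_id = descriptor.partition(":")
provider, _, model_name = model.partition("/")

print(provider)    # elevenlabs
print(model_name)  # eleven_turbo_v2_5
print(voice_id)    # Xb7hH8MSUJpSbSDYk0k2
```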

Parameters

To customize additional parameters, use the TTS class from the inference module:

from livekit.agents import AgentSession, inference

session = AgentSession(
    tts=inference.TTS(
        model="elevenlabs/eleven_turbo_v2_5",
        voice="Xb7hH8MSUJpSbSDYk0k2",
        language="en",
    ),
    # ... llm, stt, vad, turn_detection, etc.
)
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  tts: new inference.TTS({
    model: "elevenlabs/eleven_turbo_v2_5",
    voice: "Xb7hH8MSUJpSbSDYk0k2",
    language: "en",
  }),
  // ... llm, stt, vad, turn_detection, etc.
});
model (string, required)

The model ID from the models list.

voice (string, required)

See voices for guidance on selecting a voice.

language (string, optional)

Language code for the input text. If not set, the model default applies.

extra_kwargs (dict, optional)

Additional parameters to pass to the ElevenLabs TTS API, such as inactivity_timeout and apply_text_normalization. See the provider's documentation for more information.

In Node.js this parameter is called modelOptions.
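As an illustration, the extra parameters can be collected in a plain dict before being handed to the TTS constructor. The keys and values below are examples, not a complete list; check the ElevenLabs documentation for accepted options:

```python
# Example extra parameters forwarded to the ElevenLabs TTS API.
# The values here are illustrative; consult the provider docs for valid ranges.
extra_kwargs = {
    "inactivity_timeout": 60,            # close an idle connection after 60 s
    "apply_text_normalization": "auto",  # let the model normalize numbers, dates, etc.
}

# The dict is then passed when constructing the TTS, e.g.:
# tts = inference.TTS(
#     model="elevenlabs/eleven_turbo_v2_5",
#     voice="Xb7hH8MSUJpSbSDYk0k2",
#     extra_kwargs=extra_kwargs,
# )
```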

Voices

LiveKit Inference supports all of the default voices available in the ElevenLabs API. You can explore them in the ElevenLabs voice library (free account required), then use a voice by copying its ID into your LiveKit agent session.

Custom & community voices unavailable

Custom and community ElevenLabs voices, including voice cloning, are not yet supported in LiveKit Inference. To use these voices, create your own ElevenLabs account and use the ElevenLabs plugin for LiveKit Agents instead.

The following is a small sample of the ElevenLabs voices available in LiveKit Inference.

Alice

Clear and engaging, friendly British woman

🇬🇧
Chris

Natural and real American male

🇺🇸
Eric

A smooth tenor Mexican male

🇲🇽
Jessica

Young and popular, playful American female

🇺🇸

Customizing pronunciation

ElevenLabs supports custom pronunciation for specific words or phrases with SSML phoneme tags. This is useful to ensure correct pronunciation of certain words, even when missing from the voice's lexicon. To learn more, see Pronunciation.
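As a rough sketch, a phoneme tag wraps the word whose pronunciation you want to override. The alphabet and phoneme string below are illustrative assumptions based on common SSML usage; verify the supported alphabets against the ElevenLabs pronunciation docs:

```python
# Wrap a single word in an SSML phoneme tag to override its pronunciation.
# The "cmu-arpabet" alphabet and the phoneme string are illustrative values.
word = "Madison"
phonemes = "M AE1 D IH0 S AH0 N"
ssml = f'<phoneme alphabet="cmu-arpabet" ph="{phonemes}">{word}</phoneme>'
print(ssml)
```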

Transcription timing

ElevenLabs TTS supports aligned transcription forwarding, which improves transcription synchronization in your frontend. Set use_tts_aligned_transcript=True in your AgentSession configuration to enable this feature. To learn more, see the docs.
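For illustration only, the flag sits alongside the other session options. A sketch of the keyword arguments, with the library call left as a comment:

```python
# Session options including the aligned-transcript flag.
session_kwargs = {
    "tts": "elevenlabs/eleven_turbo_v2_5:Xb7hH8MSUJpSbSDYk0k2",
    "use_tts_aligned_transcript": True,  # forward timing-aligned transcripts
}

# session = AgentSession(**session_kwargs)
```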

Plugin

Use the ElevenLabs plugin to connect directly to ElevenLabs' TTS API with your own API key.

Available in
Python
|
Node.js

Installation

Install the plugin from PyPI:

uv add "livekit-agents[elevenlabs]~=1.4"
pnpm add @livekit/agents-plugin-elevenlabs@1.x

Authentication

The ElevenLabs plugin requires an ElevenLabs API key.

Set ELEVEN_API_KEY in your .env file.
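For example, you can append the key to your project's .env file from the shell ("your-api-key-here" is a placeholder, not a real key):

```shell
# Append the key to .env; the plugin reads ELEVEN_API_KEY from the environment.
echo 'ELEVEN_API_KEY=your-api-key-here' >> .env
```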

Usage

Use ElevenLabs TTS within an AgentSession or as a standalone speech generator. For example, you can use this TTS in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import elevenlabs

session = AgentSession(
    tts=elevenlabs.TTS(
        voice_id="ODq5zmih8GrVes37Dizd",
        model="eleven_multilingual_v2",
    ),
    # ... llm, stt, etc.
)
import { voice } from '@livekit/agents';
import * as elevenlabs from '@livekit/agents-plugin-elevenlabs';

const session = new voice.AgentSession({
  tts: new elevenlabs.TTS({
    voice: { id: "ODq5zmih8GrVes37Dizd" },
    model: "eleven_multilingual_v2",
  }),
  // ... llm, stt, etc.
});

Parameters

This section describes some of the parameters you can set when you create an ElevenLabs TTS. See the plugin reference links in the Additional resources section for a complete list of all available parameters.

model (string, optional, default: eleven_flash_v2_5)

ID of the model to use for generation. To learn more, see the ElevenLabs documentation.

voice_id (string, optional, default: EXAVITQu4vr4xnSDxMaL)

ID of the voice to use for generation. To learn more, see the ElevenLabs documentation.

voice_settings (VoiceSettings, optional)

Voice configuration. To learn more, see the ElevenLabs documentation.

  • stability (float, optional)
  • similarity_boost (float, optional)
  • style (float, optional)
  • use_speaker_boost (bool, optional)
  • speed (float, optional)
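As a sketch, the voice settings can be laid out as a plain dict before being mapped onto the plugin's VoiceSettings type. The value ranges noted in the comments are assumptions based on the ElevenLabs API docs; verify them for your model:

```python
# Illustrative voice settings; stability, similarity_boost, and style are
# commonly floats in [0.0, 1.0] per the ElevenLabs docs (verify for your model).
voice_settings = {
    "stability": 0.5,          # higher = more consistent, lower = more expressive
    "similarity_boost": 0.75,  # adherence to the original voice
    "style": 0.0,              # style exaggeration
    "use_speaker_boost": True,
    "speed": 1.0,              # playback speed multiplier
}

# These would map onto the plugin's settings type, e.g.:
# tts = elevenlabs.TTS(voice_settings=elevenlabs.VoiceSettings(**voice_settings))
```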
language (string, optional, default: en)

Language of output audio in ISO-639-1 format. To learn more, see the ElevenLabs documentation.

streaming_latency (int, optional, default: 3)

Latency in seconds for streaming.

enable_ssml_parsing (bool, optional, default: false)

Enable Speech Synthesis Markup Language (SSML) parsing for input text. Set to true to customize pronunciation using SSML.

chunk_length_schedule (list[int], optional, default: [80, 120, 200, 260])

Schedule for chunk lengths. Valid values range from 50 to 500.

Additional resources

The following resources provide more information about using ElevenLabs with LiveKit Agents.