Overview
LiveKit Inference offers voice models powered by Deepgram. Pricing information is available on the pricing page.
| Model ID | Languages |
|---|---|
| deepgram/aura | en |
| deepgram/aura-2 | en, es |
Usage
To use Deepgram, pass a descriptor with the model and voice to the tts argument in your AgentSession:
```python
from livekit.agents import AgentSession

session = AgentSession(
    tts="deepgram/aura-2:athena",
    # ... stt, llm, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  tts: "deepgram/aura-2:athena",
  // ... stt, llm, vad, turn_detection, etc.
});
```
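The `tts` string above is a shorthand descriptor of the form `model:voice`, where the model portion is itself `provider/model`. The helper below is a minimal sketch of how such a descriptor string is assembled; it is an explanatory illustration, not part of the LiveKit API:

```python
def make_tts_descriptor(model: str, voice: str) -> str:
    # Combine a model ID (e.g. "deepgram/aura-2") and a voice name
    # (e.g. "athena") into the "model:voice" descriptor shape shown above.
    return f"{model}:{voice}"

print(make_tts_descriptor("deepgram/aura-2", "athena"))
# → deepgram/aura-2:athena
```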
Parameters
To customize additional parameters, use the TTS class from the inference module:
```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    tts=inference.TTS(
        model="deepgram/aura-2",
        voice="athena",
        language="en",
    ),
    # ... stt, llm, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  tts: new inference.TTS({
    model: "deepgram/aura-2",
    voice: "athena",
    language: "en",
  }),
  // ... stt, llm, vad, turn_detection, etc.
});
```
- `model`: The model ID from the models list.
- `language`: Language code for the input text. If not set, the model default applies.
- `extra_kwargs`: Additional parameters to pass to the Deepgram TTS API. See the provider's documentation for more information. In Node.js this parameter is called `modelOptions`.
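As a sketch of how a provider-specific option might be forwarded, the configuration fragment below assumes the Python parameter is named `extra_kwargs` and uses `sample_rate` purely as an illustrative Deepgram option; consult Deepgram's TTS API documentation for the options it actually accepts:

```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    tts=inference.TTS(
        model="deepgram/aura-2",
        voice="athena",
        # Illustrative only: forwards a Deepgram-specific option verbatim.
        extra_kwargs={"sample_rate": 24000},
    ),
    # ... stt, llm, vad, turn_detection, etc.
)
```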
Voices
LiveKit Inference supports Deepgram Aura voices. You can explore the available voices in the Deepgram voice library, and use the voice by copying its name into your LiveKit agent session.
The following is a small sample of the Deepgram voices available in LiveKit Inference.
Additional resources
The following links provide more information about Deepgram in LiveKit Inference.