Aura-1 is scheduled for full retirement on February 13, 2026 at 5 PM PST. We recommend that you migrate to Aura-2.
## Overview
LiveKit Inference offers voice models powered by Deepgram. Pricing information is available on the pricing page.
| Model ID | Languages |
|---|---|
| deepgram/aura | en |
| deepgram/aura-2 | en, es |
## Usage
To use Deepgram, pass a descriptor with the model and voice in the `tts` argument of your `AgentSession`:
```python
from livekit.agents import AgentSession

session = AgentSession(
    tts="deepgram/aura-2:athena",
    # ... stt, llm, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
    tts: "deepgram/aura-2:athena",
    // ... stt, llm, vad, turn_detection, etc.
});
```
## Parameters
To customize additional parameters, use the TTS class from the inference module:
```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    tts=inference.TTS(
        model="deepgram/aura-2",
        voice="athena",
        language="en",
    ),
    # ... stt, llm, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
    tts: new inference.TTS({
        model: "deepgram/aura-2",
        voice: "athena",
        language: "en",
    }),
    // ... stt, llm, vad, turn_detection, etc.
});
```
- `model` (string, required): The model ID from the models list.
- `voice` (string, required): See voices for guidance on selecting a voice.
- `language` (string, optional): Language code for the input text. If not set, the model default applies.
- Extra options (dict, optional): Additional parameters to pass to the Deepgram TTS API. See the provider's documentation for more information. In Node.js this parameter is called `modelOptions`.
## Voices
LiveKit Inference supports Deepgram Aura voices. You can explore the available voices in the Deepgram voice library, and use any voice by copying its name into the model descriptor in your LiveKit agent session.
The following is a small sample of the Deepgram voices available in LiveKit Inference.
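The descriptor strings used above follow a `model:voice` pattern, so swapping voices is just a string change. The following is an illustrative sketch of that format using plain string operations; the helper functions are hypothetical and not part of the LiveKit SDK:

```python
def build_descriptor(model: str, voice: str) -> str:
    """Combine a model ID and a voice name into a tts descriptor string.

    Hypothetical helper for illustration; in practice you pass the
    descriptor string directly to AgentSession(tts=...).
    """
    return f"{model}:{voice}"


def parse_descriptor(descriptor: str) -> tuple[str, str]:
    """Split a descriptor back into its (model, voice) parts."""
    model, _, voice = descriptor.partition(":")
    return model, voice


# Example: the Aura-2 "athena" voice shown earlier on this page.
descriptor = build_descriptor("deepgram/aura-2", "athena")
print(descriptor)  # deepgram/aura-2:athena
```

Changing voices then only requires substituting a different voice name from the Deepgram voice library into the descriptor.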
## Additional resources
The following links provide more information about Deepgram in LiveKit Inference.