Overview
xAI TTS is available in LiveKit Agents through LiveKit Inference and the xAI plugin. With LiveKit Inference, your agent runs on LiveKit's infrastructure to minimize latency. No separate provider API key is required, and usage and rate limits are managed through LiveKit Cloud. Use the plugin instead if you want to manage your own billing and rate limits. Pricing for LiveKit Inference is available on the pricing page.
LiveKit Inference
Use LiveKit Inference to access xAI TTS without a separate xAI API key.
| Model name | Model ID | Languages |
|---|---|---|
| Text to Speech | `xai/tts-1` | auto, en, ar-EG, ar-SA, ar-AE, bn, zh, fr, de, hi, id, it, ja, ko, pt-BR, pt-PT, ru, es-MX, es-ES, tr, vi |
Usage
To use xAI, use the TTS class from the inference module:
```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    tts=inference.TTS(
        model="xai/tts-1",
        voice="ara",
        language="en",
    ),
    # ... stt, llm, vad, turn_handling, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  tts: new inference.TTS({
    model: "xai/tts-1",
    voice: "ara",
  }),
  // ... stt, llm, vad, turnHandling, etc.
});
```
Parameters
- `model` (string): The model ID from the models list.
- `voice` (string): The voice ID used for speech generation. For guidance on selecting a voice, see voices.
- `language` (string): Language code for the input text. If not set, the model default applies.
Voices
A sample of xAI voices is available in LiveKit Inference. For a complete list of voices, see the xAI TTS documentation.
String descriptors
As a shortcut, you can also pass a descriptor with the model ID and voice directly to the tts argument in your AgentSession:
```python
from livekit.agents import AgentSession

session = AgentSession(
    tts="xai/tts-1:ara",
    # ... llm, stt, vad, turn_handling, etc.
)
```
```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  tts: "xai/tts-1:ara",
  // ... llm, stt, vad, turnHandling, etc.
});
```
Plugin
Use the xAI plugin to connect directly to xAI's TTS API with your own API key. For Node.js, use LiveKit Inference instead.
Installation
Install the plugin from PyPI:
```shell
uv add "livekit-agents[xai]~=1.4"
```
Authentication
The xAI plugin requires an xAI API key.
Set XAI_API_KEY in your .env file.
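Keeping the key in `.env` keeps it out of your source code; quickstart examples typically load it with a dotenv-style loader before the session starts. As a stdlib-only sketch of what such a loader does (the `load_env` helper below is hypothetical, not part of LiveKit; real loaders such as python-dotenv also handle quoting and interpolation):

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> None:
    """Load KEY=VALUE pairs from a dotenv-style file into os.environ.

    Sketch only: skips comments and blank lines, and does not handle
    quoted values, 'export' prefixes, or variable interpolation.
    Existing environment variables are not overwritten.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```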
Usage
Use xAI TTS in an AgentSession or as a standalone speech generator. For example, you can use this TTS in the Voice AI quickstart.
```python
from livekit.agents import AgentSession
from livekit.plugins import xai

session = AgentSession(
    tts=xai.TTS(
        voice="ara",
    ),
    # ... llm, stt, etc.
)
```
Parameters
This section describes some of the available parameters. See the plugin reference links in the Additional resources section for a complete list of all available parameters.
- `voice` (GrokVoices | string, default: `ara`): The voice ID used for speech generation.
Speech tags
xAI TTS supports speech tags that add expressive delivery to synthesized speech. There are two categories of tags:
- Inline tags insert a vocal expression at a specific point in the text. For example, a `[pause]` tag can create a short pause, while a `[laugh]` tag can add laughter.
- Wrapping tags modify how a section of text is delivered. For example, `<whisper>a secret</whisper>` delivers the text in a whispered voice, while `<slow>text</slow>` is spoken at a slower pace.
Tags can be combined for layered effects:
```
So I walked in and [pause] there it was. [laugh]
I need to tell you something. <whisper>It is a secret.</whisper>
<slow><soft>Goodnight, sleep well.</soft></slow>
```
For a complete list of tags and examples, see the xAI speech tags documentation.
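Because wrapping tags nest like markup, tagged text can be composed programmatically before it is handed to the TTS. A minimal sketch (the `inline` and `wrap` helpers below are hypothetical, not part of the plugin):

```python
def inline(tag: str) -> str:
    """Render an inline speech tag such as [pause] or [laugh]."""
    return f"[{tag}]"


def wrap(tag: str, text: str) -> str:
    """Wrap text in a speech tag such as <whisper>...</whisper>."""
    return f"<{tag}>{text}</{tag}>"


# Layered effect: a slow, soft goodnight, as in the example above.
goodnight = wrap("slow", wrap("soft", "Goodnight, sleep well."))
```

Composing tags this way keeps the tag syntax in one place, so a typo such as an unclosed `</whisper>` cannot creep into individual prompts.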
Additional resources
The following resources provide more information about using xAI with LiveKit Agents.