
xAI LLM

How to use xAI's Grok models with LiveKit Agents.


Overview

xAI's Grok models are available in LiveKit Agents through LiveKit Inference and the xAI plugin. With LiveKit Inference, your agent runs on LiveKit's infrastructure to minimize latency. No separate provider API key is required, and usage and rate limits are managed through LiveKit Cloud. Use the plugin instead if you want to manage your own billing and rate limits. Pricing for LiveKit Inference is available on the pricing page.

LiveKit Inference

Use LiveKit Inference to access Grok models without a separate xAI API key.

| Model name | Model ID | Providers |
| --- | --- | --- |
| Grok 4.1 Fast | `xai/grok-4-1-fast-non-reasoning` | xai |
| Grok 4.1 Fast Reasoning | `xai/grok-4-1-fast-reasoning` | xai |
| Grok 4.20 | `xai/grok-4.20-0309-non-reasoning` | xai |
| Grok 4.20 Reasoning | `xai/grok-4.20-0309-reasoning` | xai |
| Grok 4.20 Multi-Agent | `xai/grok-4.20-multi-agent-0309` | xai |

Usage

To use Grok, create an `LLM` instance from the `inference` module. You can use this LLM in the Voice AI quickstart:

```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    llm=inference.LLM(
        model="xai/grok-4-1-fast-non-reasoning",
        extra_kwargs={
            "max_completion_tokens": 1000,
        },
    ),
    # ... tts, stt, vad, turn_handling, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  llm: new inference.LLM({
    model: 'xai/grok-4-1-fast-non-reasoning',
    modelOptions: {
      max_completion_tokens: 1000,
    },
  }),
  // ... tts, stt, vad, turnHandling, etc.
});
```

Parameters

The following are parameters for configuring Grok models with LiveKit Inference. For model behavior parameters like temperature and max_completion_tokens, see model parameters.

model
Required
string

The model ID from the models list.

provider
string

Set a specific provider to use for the LLM. Refer to the models list for available providers. If not set, LiveKit Inference uses the best available provider, and bills accordingly.

extra_kwargs
dict

Additional parameters to pass to the xAI Chat Completions API, such as max_completion_tokens or temperature. See model parameters for supported fields.

In Node.js this parameter is called modelOptions.

Model parameters

Pass the following parameters inside extra_kwargs (Python) or modelOptions (Node.js). For more details about each parameter in the list, see Inference parameters.

| Parameter | Type | Default | Notes |
| --- | --- | --- | --- |
| `temperature` | float | 1 | Controls the randomness of the model's output. Valid range: 0-2. |
| `top_p` | float | 1 | Alternative to temperature. The model considers the results of the tokens with `top_p` probability mass. Valid range: 0-1. |
| `max_completion_tokens` | int | | The maximum number of tokens that can be generated in the chat completion. |
| `frequency_penalty` | float | 0 | Positive values decrease the model's likelihood of repeating the same line verbatim. Valid range: -2.0 to 2.0. Not supported by reasoning models. |
| `presence_penalty` | float | 0 | Positive values increase the model's likelihood of talking about new topics. Valid range: -2.0 to 2.0. Not supported by grok-3 or reasoning models. |
| `stop` | str \| list[str] | | Up to 4 string sequences (for example, `["\n"]`) that cause the API to stop generating further tokens. |
| `logprobs` | bool | | If true, returns the log probabilities of each output token. |
| `top_logprobs` | int | | Number of most likely tokens to return at each token position, with associated log probabilities. Requires `logprobs: true`. |
| `seed` | int | | If specified, xAI makes a best effort to sample deterministically for repeated requests with the same seed and parameters. |
| `parallel_tool_calls` | bool | true | Whether the model is allowed to call multiple tools simultaneously. |
| `tool_choice` | ToolChoice \| Literal['auto', 'required', 'none'] | "auto" | Controls how the model uses tools. |
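Since `extra_kwargs` is an ordinary dict, you can check values against the ranges above before the session starts. The following is a minimal sketch in plain Python (no SDK required); the helper name and the range table inside it are illustrative, not part of the LiveKit API:

```python
# Sketch: validate extra_kwargs against the ranges listed above before
# passing the dict to the LLM. The RANGES table mirrors the parameter
# table in this section; check_model_params is a hypothetical helper.
RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "frequency_penalty": (-2.0, 2.0),
    "presence_penalty": (-2.0, 2.0),
}

def check_model_params(params: dict) -> dict:
    for name, (lo, hi) in RANGES.items():
        if name in params and not lo <= params[name] <= hi:
            raise ValueError(f"{name}={params[name]} is outside [{lo}, {hi}]")
    # top_logprobs only makes sense when logprobs is enabled (see table).
    if "top_logprobs" in params and not params.get("logprobs"):
        raise ValueError("top_logprobs requires logprobs: true")
    return params

extra_kwargs = check_model_params({
    "temperature": 0.7,
    "max_completion_tokens": 500,
    "seed": 42,
})
```

The validated dict can then be passed as `extra_kwargs` (Python) or `modelOptions` (Node.js) in the usage example above.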

String descriptors

As a shortcut, you can also pass a model ID string directly to the llm argument in your AgentSession:

```python
from livekit.agents import AgentSession

session = AgentSession(
    llm="xai/grok-4-1-fast-non-reasoning",
    # ... tts, stt, vad, turn_handling, etc.
)
```

```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  llm: 'xai/grok-4-1-fast-non-reasoning',
  // ... tts, stt, vad, turnHandling, etc.
});
```

Plugin

LiveKit's plugin support for xAI lets you connect directly to xAI's API with your own API key. The Python plugin uses the Responses API, which supports xAI provider tools (WebSearch, FileSearch, XSearch).

The Node.js plugin uses xAI's Chat Completions endpoint via the OpenAI plugin. It does not support the Responses API or provider tools.

Available in
Python
|
Node.js

Installation

Install the xAI plugin to add xAI support:

```shell
uv add "livekit-agents[xai]~=1.4"
```

```shell
pnpm add @livekit/agents-plugin-openai@1.x
```

Authentication

Set the following environment variable in your .env file:

```shell
XAI_API_KEY=<your-xai-api-key>
```

Usage

Use xAI within an `AgentSession` or as a standalone LLM service. For example, you can use this LLM in the Voice AI quickstart:

```python
from livekit.agents import AgentSession
from livekit.plugins import xai

# Use Responses API (recommended)
session = AgentSession(
    llm=xai.responses.LLM(
        model="grok-4-1-fast-non-reasoning",
    ),
    # ... tts, stt, vad, turn_handling, etc.
)
```

```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

const session = new voice.AgentSession({
  llm: openai.LLM.withXAI({
    model: 'grok-3',
  }),
  // ... tts, stt, vad, turnHandling, etc.
});
```

Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference links in the Additional resources section.

model
str
Default: grok-4-1-fast-non-reasoning

Grok model to use. To learn more, see the xAI Grok models page.

temperature
float
Default: 1.0

Sampling temperature that controls the randomness of the model's output. Higher values make the output more random, while lower values make it more focused and deterministic.

Valid values are between 0 and 2, although the exact range can vary by model. To learn more, see the optional parameters for Responses.

parallel_tool_calls
bool

Controls whether the model can make multiple tool calls in parallel. When enabled, the model can make multiple tool calls simultaneously, which can improve performance for complex tasks.

tool_choice
ToolChoice | Literal['auto', 'required', 'none']
Default: auto

Controls how the model uses tools. String options are as follows:

  • 'auto': Let the model decide.
  • 'required': Force tool usage.
  • 'none': Disable tool usage.
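The string options above can be checked before constructing the LLM. The following is a plain-Python sketch (no SDK required); the helper is hypothetical and not part of the plugin:

```python
# Sketch: guard the string forms of tool_choice listed above before
# passing them to the plugin. plugin_kwargs is an illustrative helper,
# not a LiveKit API.
VALID_TOOL_CHOICES = {"auto", "required", "none"}

def plugin_kwargs(tool_choice: str = "auto", parallel_tool_calls: bool = True) -> dict:
    if tool_choice not in VALID_TOOL_CHOICES:
        raise ValueError(f"tool_choice must be one of {sorted(VALID_TOOL_CHOICES)}")
    return {
        "tool_choice": tool_choice,
        "parallel_tool_calls": parallel_tool_calls,
    }
```

The resulting dict mirrors the keyword arguments you would pass to the plugin's `LLM` constructor.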

Provider tools

Available in
Python

xAI supports the following provider tools that enable the model to use built-in capabilities executed on the model server. These tools can be used alongside function tools defined in your agent's codebase. Provider tools work with both the Responses API and the Grok Voice Agent API.

| Tool | Description | Parameters |
| --- | --- | --- |
| `XSearch` | Search X (Twitter) posts. | `allowed_x_handles` |
| `WebSearch` | Search the web and browse pages. | None |
| `FileSearch` | Search uploaded document collections. | `vector_store_ids` (required), `max_num_results` |
```python
from livekit.plugins import xai

agent = MyAgent(
    llm=xai.responses.LLM(),
    tools=[xai.XSearch(), xai.WebSearch()],  # replace with any supported provider tool
)
```

Additional resources

The following links provide more information about the xAI Grok LLM integration.