DeepSeek LLM

How to use DeepSeek models with LiveKit Agents.

Overview

DeepSeek models are available in LiveKit Agents through LiveKit Inference and the DeepSeek plugin. Pricing for LiveKit Inference is available on the pricing page.

Model name      Model ID                      Providers
DeepSeek V3     deepseek-ai/deepseek-v3       baseten
DeepSeek V3.2   deepseek-ai/deepseek-v3.2     baseten

LiveKit Inference

Use LiveKit Inference to access DeepSeek models without a separate DeepSeek API key.

Usage

To use DeepSeek, use the LLM class from the inference module:

from livekit.agents import AgentSession, inference

session = AgentSession(
    llm=inference.LLM(
        model="deepseek-ai/deepseek-v3",
        provider="baseten",
        extra_kwargs={
            "max_completion_tokens": 1000,
        },
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  llm: new inference.LLM({
    model: 'deepseek-ai/deepseek-v3',
    provider: 'baseten',
    modelOptions: {
      max_completion_tokens: 1000,
    },
  }),
  // ... tts, stt, vad, turn_detection, etc.
});

Parameters

model (string, required)

The model ID from the models list.

provider (string, optional)

Set a specific provider to use for the LLM. Refer to the models list for available providers. If not set, LiveKit Inference uses the best available provider, and bills accordingly.

extra_kwargs (dict, optional)

Additional parameters to pass to the provider's Chat Completions API, such as max_completion_tokens. See the provider's documentation for more information.

In Node.js this parameter is called modelOptions.
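As a sketch of how these parameters fit together, the following passes both an explicit provider and extra Chat Completions parameters. The temperature key is an assumption here; consult the provider's documentation for the exact set of supported keys.

```python
from livekit.agents import AgentSession, inference

# Sketch: any Chat Completions parameter the provider supports can be
# forwarded through extra_kwargs (modelOptions in Node.js).
llm = inference.LLM(
    model="deepseek-ai/deepseek-v3",
    provider="baseten",  # omit to let LiveKit Inference pick a provider
    extra_kwargs={
        "max_completion_tokens": 1000,  # cap the response length
        "temperature": 0.7,             # assumed provider-supported key
    },
)

session = AgentSession(
    llm=llm,
    # ... tts, stt, vad, turn_detection, etc.
)
```

Unrecognized keys in extra_kwargs are passed through as-is, so behavior for unsupported parameters depends on the provider.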

String descriptors

As a shortcut, you can also pass a model descriptor string directly to the llm argument in your AgentSession:

from livekit.agents import AgentSession

session = AgentSession(
    llm="deepseek-ai/deepseek-v3",
    # ... tts, stt, vad, turn_detection, etc.
)
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  llm: 'deepseek-ai/deepseek-v3',
  // ... tts, stt, vad, turn_detection, etc.
});

Plugin

Use the DeepSeek plugin to connect directly to DeepSeek's API with your own API key.

Available in
Python
|
Node.js

Usage

Use the OpenAI plugin's with_deepseek method (withDeepSeek in Node.js) to set the default agent session LLM to DeepSeek:

uv add "livekit-agents[openai]~=1.4"
pnpm add @livekit/agents-plugin-openai@1.x

Set the following environment variable in your .env file:

DEEPSEEK_API_KEY=<your-deepseek-api-key>
from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM.with_deepseek(
        model="deepseek-chat",  # this is DeepSeek-V3
    ),
)
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

const session = new voice.AgentSession({
  llm: openai.LLM.withDeepSeek({
    model: 'deepseek-chat', // this is DeepSeek-V3
  }),
});

Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference links in the Additional resources section.

model (str | DeepSeekChatModels, optional; default: deepseek-chat)

DeepSeek model to use. See models and pricing for a complete list.

temperature (float, optional; default: 1.0)

Controls the randomness of the model's output. Higher values, for example 0.8, make the output more random, while lower values, for example 0.2, make it more focused and deterministic.

Valid values are between 0 and 2.

parallel_tool_calls (bool, optional)

Controls whether the model can make multiple tool calls in parallel. When enabled, the model can make multiple tool calls simultaneously, which can improve performance for complex tasks.

tool_choice (ToolChoice | Literal['auto', 'required', 'none'], optional; default: 'auto')

Controls how the model uses tools. Set to 'auto' to let the model decide, 'required' to force tool usage, or 'none' to disable tool usage.
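The parameters above can be combined in a single with_deepseek call. This is a hedged sketch, not a definitive configuration; the values shown are illustrative choices.

```python
from livekit.agents import AgentSession
from livekit.plugins import openai

# Sketch: tuning the plugin parameters described above.
session = AgentSession(
    llm=openai.LLM.with_deepseek(
        model="deepseek-chat",      # DeepSeek-V3
        temperature=0.2,            # more deterministic output (valid range: 0-2)
        parallel_tool_calls=True,   # allow simultaneous tool calls
        tool_choice="auto",         # let the model decide when to use tools
    ),
)
```

Lower temperature values suit tasks such as structured extraction, while higher values suit open-ended conversation.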

Additional resources

The following links provide more information about the DeepSeek LLM integration.