## Overview
DeepSeek models are available in LiveKit Agents through LiveKit Inference and the DeepSeek plugin. Pricing for LiveKit Inference is available on the pricing page.
| Model name | Model ID | Providers |
|---|---|---|
| DeepSeek V3 | deepseek-ai/deepseek-v3 | baseten |
| DeepSeek V3.2 | deepseek-ai/deepseek-v3.2 | baseten |
## LiveKit Inference
Use LiveKit Inference to access DeepSeek models without a separate DeepSeek API key.
### Usage

To use DeepSeek, create an `LLM` instance from the `inference` module:

**Python:**

```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    llm=inference.LLM(
        model="deepseek-ai/deepseek-v3",
        provider="baseten",
        extra_kwargs={"max_completion_tokens": 1000},
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
```

**Node.js:**

```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  llm: new inference.LLM({
    model: 'deepseek-ai/deepseek-v3',
    provider: 'baseten',
    modelOptions: { max_completion_tokens: 1000 },
  }),
  // ... tts, stt, vad, turn_detection, etc.
});
```
### Parameters

- `model` (string, Required): The model ID from the models list.
- `provider` (string, Optional): Set a specific provider to use for the LLM. Refer to the models list for available providers. If not set, LiveKit Inference uses the best available provider and bills accordingly.
- `extra_kwargs` (dict, Optional): Additional parameters to pass to the provider's Chat Completions API, such as `max_completion_tokens`. See the provider's documentation for more information. In Node.js this parameter is called `modelOptions`.
### String descriptors

As a shortcut, you can also pass a model descriptor string directly to the `llm` argument of your `AgentSession`:

**Python:**

```python
from livekit.agents import AgentSession

session = AgentSession(
    llm="deepseek-ai/deepseek-v3",
    # ... tts, stt, vad, turn_detection, etc.
)
```

**Node.js:**

```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  llm: 'deepseek-ai/deepseek-v3',
  // ... tts, stt, vad, turn_detection, etc.
});
```
## Plugin
Use the DeepSeek plugin to connect directly to DeepSeek's API with your own API key.
### Usage

Use the OpenAI plugin's `with_deepseek` method to set the default agent session LLM to DeepSeek.

Install the plugin:

**Python:**

```shell
uv add "livekit-agents[openai]~=1.4"
```

**Node.js:**

```shell
pnpm add @livekit/agents-plugin-openai@1.x
```

Set the following environment variable in your `.env` file:

```shell
DEEPSEEK_API_KEY=<your-deepseek-api-key>
```

**Python:**

```python
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM.with_deepseek(
        model="deepseek-chat",  # this is DeepSeek-V3
    ),
)
```

**Node.js:**

```typescript
import * as openai from '@livekit/agents-plugin-openai';

const session = new voice.AgentSession({
  llm: openai.LLM.withDeepSeek({
    model: 'deepseek-chat', // this is DeepSeek-V3
  }),
});
```
### Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference links in the Additional resources section.

- `model` (str | DeepSeekChatModels, Optional, default: `deepseek-chat`): DeepSeek model to use. See models and pricing for a complete list.
- `temperature` (float, Optional, default: `1.0`): Controls the randomness of the model's output. Higher values, for example 0.8, make the output more random, while lower values, for example 0.2, make it more focused and deterministic. Valid values are between 0 and 2.
- `parallel_tool_calls` (bool, Optional): Controls whether the model can make multiple tool calls in parallel. When enabled, the model can make multiple tool calls simultaneously, which can improve performance for complex tasks.
- `tool_choice` (ToolChoice | Literal['auto', 'required', 'none'], Optional, default: `auto`): Controls how the model uses tools. Set to `'auto'` to let the model decide, `'required'` to force tool usage, or `'none'` to disable tool usage.
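These options correspond to fields of DeepSeek's OpenAI-compatible Chat Completions API. As a rough illustration of how such parameters end up in a request body (a sketch only — `build_chat_payload` is a hypothetical helper, and the actual request the plugin constructs may differ in detail):

```python
# Illustrative sketch: how parameters like temperature, parallel_tool_calls,
# and tool_choice map onto an OpenAI-compatible Chat Completions payload.
# This is NOT the plugin's internal code.

def build_chat_payload(
    messages: list,
    model: str = "deepseek-chat",
    temperature: float = 1.0,
    parallel_tool_calls=None,
    tool_choice: str = "auto",
) -> dict:
    """Assemble a Chat Completions request body, omitting unset options."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    payload = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "tool_choice": tool_choice,
    }
    # Only include parallel_tool_calls when explicitly set.
    if parallel_tool_calls is not None:
        payload["parallel_tool_calls"] = parallel_tool_calls
    return payload

payload = build_chat_payload(
    [{"role": "user", "content": "Hello"}],
    temperature=0.2,
    parallel_tool_calls=False,
)
```

The defaults shown mirror the parameter table above: `temperature` defaults to `1.0`, `tool_choice` to `"auto"`, and `parallel_tool_calls` is left out of the payload unless set.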
## Additional resources
The following links provide more information about the DeepSeek LLM integration.