Overview
DeepSeek models are available in LiveKit Agents through LiveKit Inference and the DeepSeek plugin. With LiveKit Inference, your agent runs on LiveKit's infrastructure to minimize latency. No separate provider API key is required, and usage and rate limits are managed through LiveKit Cloud. Use the plugin instead if you want to manage your own billing and rate limits. Pricing for LiveKit Inference is available on the pricing page.
LiveKit Inference
Use LiveKit Inference to access DeepSeek models without a separate DeepSeek API key.
| Model name | Model ID | Providers |
|---|---|---|
| DeepSeek-V3 | deepseek-ai/deepseek-v3 | baseten |
| DeepSeek-V3.2 (retired) | deepseek-ai/deepseek-v3.2 | deepseek |
Usage
To use DeepSeek, create an LLM instance from the inference module. You can use this LLM in the Voice AI quickstart:
```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    llm=inference.LLM(
        model="deepseek-ai/deepseek-v3",
        provider="baseten",
        extra_kwargs={"max_tokens": 1000},
    ),
    # ... tts, stt, vad, turn_handling, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  llm: new inference.LLM({
    model: "deepseek-ai/deepseek-v3",
    provider: "baseten",
    modelOptions: { max_tokens: 1000 },
  }),
  // ... tts, stt, vad, turnHandling, etc.
});
```
Parameters
The following are parameters for configuring DeepSeek models with LiveKit Inference. For model behavior parameters like temperature and max_tokens, see model parameters.
- `model` (string): The model ID from the models list.
- `provider` (string): Set a specific provider to use for the LLM. Refer to the models list for available providers. If not set, LiveKit Inference uses the best available provider and bills accordingly.
- `extra_kwargs` (dict): Additional parameters to pass to the provider's Chat Completions API, such as `max_tokens` or `temperature`. See model parameters for supported fields. In Node.js this parameter is called `modelOptions`.
Model parameters
Pass the following parameters inside extra_kwargs (Python) or modelOptions (Node.js). For more details about each parameter in the list, see Inference parameters.
| Parameter | Type | Default | Notes |
|---|---|---|---|
temperature | float | 1 | Controls the randomness of the model's output. Valid range: 0-2. |
top_p | float | 1 | Alternative to temperature. Model considers the results of the tokens with top_p probability mass. Valid range: 0-1. |
| max_tokens | int | | The maximum number of tokens that can be generated in the chat completion. |
frequency_penalty | float | 0 | Positive values decrease the model's likelihood to repeat the same line verbatim. Valid range: -2.0-2.0. |
presence_penalty | float | 0 | Positive values increase the model's likelihood to talk about new topics. Valid range: -2.0-2.0. |
| stop | str \| list[str] | | List of up to 16 string sequences (for example, ["\n"]) that cause the API to stop generating further tokens. |
| logprobs | bool | | If true, returns the log probabilities of each output token. |
| top_logprobs | int | | Number of most likely tokens to return at each token position with associated log probability. Requires logprobs to be true. |
| tool_choice | ToolChoice \| Literal['auto', 'required', 'none'] | "auto" | Controls how the model uses tools. |
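Several of the numeric parameters above have fixed valid ranges. As a sketch (not part of the LiveKit API), you could validate a dict destined for extra_kwargs against those ranges at startup, so an out-of-range value fails fast instead of surfacing as a provider error mid-session:

```python
# Hypothetical pre-flight check (not part of the LiveKit API): validate
# values destined for extra_kwargs against the ranges in the table above.
PARAM_RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "frequency_penalty": (-2.0, 2.0),
    "presence_penalty": (-2.0, 2.0),
}

def validate_extra_kwargs(params: dict) -> dict:
    """Return params unchanged, or raise ValueError for out-of-range values."""
    for name, (low, high) in PARAM_RANGES.items():
        if name in params and not low <= params[name] <= high:
            raise ValueError(f"{name}={params[name]} is outside [{low}, {high}]")
    return params

# Passes: both values are in range (max_tokens has no fixed range).
validate_extra_kwargs({"temperature": 0.7, "max_tokens": 1000})
```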
String descriptors
As a shortcut, you can also pass a model ID string directly to the llm argument in your AgentSession:
```python
from livekit.agents import AgentSession

session = AgentSession(
    llm="deepseek-ai/deepseek-v3",
    # ... tts, stt, vad, turn_handling, etc.
)
```
```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  llm: "deepseek-ai/deepseek-v3",
  // ... tts, stt, vad, turnHandling, etc.
});
```
Plugin
LiveKit's plugin support for DeepSeek lets you connect directly to DeepSeek's API with your own API key.
Usage
Use the OpenAI plugin's with_deepseek method to set the default agent session LLM to DeepSeek:
```shell
uv add "livekit-agents[openai]~=1.4"
```

```shell
pnpm add @livekit/agents-plugin-openai@1.x
```
Set the following environment variable in your .env file:
```shell
DEEPSEEK_API_KEY=<your-deepseek-api-key>
```
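Since the plugin reads the key from the environment, a small startup check can surface a missing or empty key before the agent starts rather than at session time. The `require_env` helper below is a hypothetical convenience, not part of the plugin:

```python
import os

# Hypothetical startup check (not part of the plugin): verify the API key
# is present and non-empty before constructing the session.
def require_env(name: str) -> str:
    value = os.environ.get(name, "")
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value
```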
You can use this LLM in the Voice AI quickstart:
```python
from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM.with_deepseek(
        model="deepseek-chat",  # this is DeepSeek-V3
    ),
)
```
```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

const session = new voice.AgentSession({
  llm: openai.LLM.withDeepSeek({
    model: "deepseek-chat",  // this is DeepSeek-V3
  }),
});
```
Parameters
This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference links in the Additional resources section.
- `model` (str | DeepSeekChatModels) - Default: `deepseek-chat`. DeepSeek model to use. See models and pricing for a complete list.
- `temperature` (float) - Default: `1.0`. Sampling temperature that controls the randomness of the model's output. Higher values make the output more random, while lower values make it more focused and deterministic. Valid values are between 0 and 2, though the range can vary by model.
- `parallel_tool_calls` (bool) - Controls whether the model can make multiple tool calls in parallel, which can improve performance for complex tasks.
- `tool_choice` (ToolChoice | Literal['auto', 'required', 'none']) - Default: `auto`. Controls how the model uses tools. String options are as follows:
  - `'auto'`: Let the model decide.
  - `'required'`: Force tool usage.
  - `'none'`: Disable tool usage.
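The parameters above can be combined in a single session configuration. The following is a sketch only; it assumes `with_deepseek` accepts these parameters directly, so check the plugin reference for the exact signature:

```python
from livekit.agents import AgentSession
from livekit.plugins import openai

# Sketch: a session that lowers sampling randomness and disables tool
# usage entirely via tool_choice="none".
session = AgentSession(
    llm=openai.LLM.with_deepseek(
        model="deepseek-chat",
        temperature=0.7,
        parallel_tool_calls=False,
        tool_choice="none",  # disable tool usage
    ),
)
```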
Additional resources
The following links provide more information about the DeepSeek LLM integration.