Overview
This plugin allows you to use Cerebras as an LLM provider for your voice agents. Both the Python and Node.js plugins include built-in payload optimization via gzip compression and msgpack encoding, which reduces time to first token (TTFT) for requests with large prompts.
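To illustrate why compressing large prompts reduces payload size, the gzip step can be sketched with the standard library. The payload shape below is illustrative only, not the plugin's actual wire format:

```python
import gzip
import json

# Illustrative request body with a large prompt (not the plugin's
# actual wire format).
payload = {
    "model": "llama3.1-8b",
    "messages": [{"role": "user", "content": "lorem ipsum " * 2000}],
}

raw = json.dumps(payload).encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw)} bytes, gzipped: {len(compressed)} bytes")
```

The repetitive text compresses well here; real prompts see smaller but still meaningful reductions, which is why the savings matter most for large prompts.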
Some Cerebras models are also available in LiveKit Inference, with billing and integration handled automatically. See the docs for more information.
Usage
Install the plugin:
Python:

```shell
uv add "livekit-agents[cerebras]~=1.5"
```

Node.js:

```shell
pnpm add @livekit/agents-plugin-cerebras@1.x
```
Set the following environment variable in your .env file:
```shell
CEREBRAS_API_KEY=<your-cerebras-api-key>
```
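Since a missing key typically surfaces as an opaque authentication error at request time, one option is to check for it at startup. This is a hypothetical helper, not part of the plugin:

```python
import os

# Illustrative startup check (not part of the plugin): fail fast with a
# clear message if the Cerebras API key is missing from the environment.
def require_cerebras_key() -> str:
    key = os.environ.get("CEREBRAS_API_KEY")
    if not key:
        raise RuntimeError("CEREBRAS_API_KEY is not set; add it to your .env file")
    return key
```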
Create a Cerebras LLM:
Python:

```python
from livekit.plugins import cerebras

session = AgentSession(
    llm=cerebras.LLM(
        model="llama3.1-8b",
    ),
    # ... tts, stt, vad, turn_handling, etc.
)
```
Node.js:

```typescript
import { LLM } from '@livekit/agents-plugin-cerebras';

const session = new voice.AgentSession({
  llm: new LLM({
    model: "llama3.1-8b",
  }),
  // ... tts, stt, vad, turnHandling, etc.
});
```
Parameters
This section describes some of the available parameters. See the plugin reference links in the Additional resources section for a complete list of all available parameters.
`model` (`str | CerebrasChatModels`), default: `llama3.1-8b`. Model to use for inference. To learn more, see supported models.
`gzip_compression` (`bool`), default: `true`. When enabled, request payloads are gzip-compressed before sending, which can reduce TTFT for requests with large prompts. To learn more, see payload optimization.
`msgpack_encoding` (`bool`), default: `true`. When enabled, request payloads are encoded in the msgpack binary format instead of JSON, further reducing payload size.
`temperature` (`float`), default: `1.0`. Sampling temperature that controls the randomness of the model's output. Higher values make the output more random, while lower values make it more focused and deterministic. Valid values are between 0 and 1.5, though the exact range can vary by model. To learn more, see the Cerebras documentation.
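Because this range is narrower than some other providers accept, one way to guard user-supplied values is a small clamp helper. This is a hypothetical utility, not part of the plugin:

```python
# Illustrative helper (not part of the plugin): keep a user-supplied
# temperature inside Cerebras's documented 0 to 1.5 range before
# passing it to the LLM.
def clamp_temperature(value: float, low: float = 0.0, high: float = 1.5) -> float:
    return max(low, min(high, value))
```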
`parallel_tool_calls` (`bool`). Controls whether the model can make multiple tool calls in parallel. Enabling this can improve performance for complex tasks that require several tools at once.
`tool_choice` (`ToolChoice | Literal['auto', 'required', 'none']`), default: `auto`. Controls how the model uses tools. String options are as follows:

- `'auto'`: Let the model decide.
- `'required'`: Force tool usage.
- `'none'`: Disable tool usage.
Additional resources
The following links provide more information about the Cerebras LLM integration.