Overview
This plugin allows you to use Perplexity as an LLM provider for your voice agents. Support for Perplexity is provided by the OpenAI plugin, using Perplexity's OpenAI-compatible Chat Completions API.
Usage
Install the OpenAI plugin to add Perplexity support:
uv add "livekit-agents[openai]~=1.4"
pnpm add @livekit/agents-plugin-openai@1.x
Set the following environment variable in your .env file:
PERPLEXITY_API_KEY=<your-perplexity-api-key>
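The plugin reads the key from the process environment at runtime. If your project doesn't already load the .env file for you, the following is a minimal sketch of one way to do it, assuming you use the python-dotenv package:

```python
import os

from dotenv import load_dotenv

# Load PERPLEXITY_API_KEY (and any other secrets) from the .env file into
# the process environment, where the plugin looks for it.
load_dotenv()

assert os.environ.get("PERPLEXITY_API_KEY"), "PERPLEXITY_API_KEY is not set"
```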
Create a Perplexity LLM using the with_perplexity method (withPerplexity in Node.js):
Python:

```python
from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM.with_perplexity(
        model="llama-3.1-sonar-small-128k-chat",
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
```
Node.js:

```typescript
import { voice } from '@livekit/agents';
import * as openai from '@livekit/agents-plugin-openai';

const session = new voice.AgentSession({
  llm: openai.LLM.withPerplexity({
    model: "llama-3.1-sonar-small-128k-chat",
  }),
  // ... tts, stt, vad, turnDetection, etc.
});
```
Parameters
This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference links in the Additional resources section. An example that combines several of these parameters follows the list below.
- model (str | PerplexityChatModels, default: llama-3.1-sonar-small-128k-chat): Model to use for inference. To learn more, see supported models.
- temperature (float, default: 1.0): Sampling temperature that controls the randomness of the model's output. Higher values make the output more random, while lower values make it more focused and deterministic. Valid values are between 0 and 2, though the exact range can vary by model.
- parallel_tool_calls (bool): Controls whether the model can make multiple tool calls in parallel. Enabling this lets the model call several tools simultaneously, which can improve performance for complex tasks.
- tool_choice (ToolChoice | Literal['auto', 'required', 'none'], default: 'auto'): Controls how the model uses tools. String options are as follows:
  - 'auto': Let the model decide.
  - 'required': Force tool usage.
  - 'none': Disable tool usage.
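As noted above, the following is a minimal sketch that combines several of these parameters in Python. The specific values are illustrative only, and it assumes with_perplexity accepts the temperature, parallel_tool_calls, and tool_choice arguments described in this section; confirm the exact signature in the plugin reference.

```python
from livekit.agents import AgentSession
from livekit.plugins import openai

# Sketch: a lower temperature for more deterministic replies, parallel tool
# calls enabled, and tool usage left to the model's discretion.
llm = openai.LLM.with_perplexity(
    model="llama-3.1-sonar-small-128k-chat",
    temperature=0.4,
    parallel_tool_calls=True,
    tool_choice="auto",
)

session = AgentSession(
    llm=llm,
    # ... tts, stt, vad, turn_detection, etc.
)
```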
Additional resources
The following links provide more information about the Perplexity LLM integration.