Overview
Through its inference API, Perplexity provides access to its Sonar models, which are based on Llama 3.1 and fine-tuned for search. These models are multilingual and text-only, making them suitable for a variety of agent applications.
Usage
Install the OpenAI plugin to add Perplexity support:
pip install "livekit-agents[openai]~=1.0"
Set the following environment variable in your .env file:
PERPLEXITY_API_KEY=<your-perplexity-api-key>
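If your entrypoint doesn't load environment files automatically, you can load the key yourself. The following is a minimal sketch assuming the python-dotenv package, which is not required by the plugin:
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads PERPLEXITY_API_KEY from the .env file into the process environment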
Create a Perplexity LLM using the with_perplexity method:
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM.with_perplexity(
        model="llama-3.1-sonar-small-128k-chat",
        temperature=0.7,
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
Parameters
This section describes some of the available parameters. For a complete reference, see the method reference.
model
Model to use for inference. To learn more, see supported models.
temperature
Controls the randomness of the model's output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. Valid values are between 0 and 2.
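For example, a lower temperature is a reasonable starting point for focused, deterministic responses. The values below are illustrative only:
llm = openai.LLM.with_perplexity(
    model="llama-3.1-sonar-small-128k-chat",
    temperature=0.2,  # focused and deterministic; try ~0.8 for more varied output
)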
parallel_tool_calls
Controls whether the model can make multiple tool calls in parallel. When enabled, the model can make multiple tool calls simultaneously, which can improve performance for complex tasks.
tool_choice
Controls how the model uses tools. Set to 'auto' to let the model decide, 'required' to force tool usage, or 'none' to disable tool usage.
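The following sketch shows how the tool-related options fit together. Parameter names follow the descriptions above; confirm exact values against the method reference:
from livekit.plugins import openai

llm = openai.LLM.with_perplexity(
    model="llama-3.1-sonar-small-128k-chat",
    parallel_tool_calls=True,  # allow multiple tool calls in a single model turn
    tool_choice="auto",        # or "required" / "none"
)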
Links
The following links provide more information about the Perplexity LLM integration.
Python package
The livekit-plugins-openai package on PyPI.
Plugin reference
Reference for the with_perplexity method of the OpenAI LLM plugin.
GitHub repo
View the source or contribute to the LiveKit OpenAI LLM plugin.
Perplexity docs
Perplexity API documentation.
Voice AI quickstart
Get started with LiveKit Agents and Perplexity.