Overview
This guide walks you through the setup of your very first voice assistant using LiveKit Agents. In less than 10 minutes, you'll have a voice assistant that you can speak to in your terminal, browser, telephone, or native app.
The LiveKit Agent Builder is a quick way to get started with voice agents in your browser, without writing any code. It's perfect for prototyping and exploring ideas, but doesn't have as many features as the full LiveKit Agents SDK. See the Agent Builder guide for more details.
Coding agent support
LiveKit is built for coding agents like Claude Code, Cursor, and Codex. These tools can build both agents and frontends with the LiveKit SDKs and manage resources with the LiveKit CLI. Give your coding agent LiveKit expertise using the LiveKit CLI or Docs MCP server. For more information, see the coding agents guide.
Starter projects
The simplest way to get your first agent running is with one of the following starter projects. You can create a project from a template with the CLI (see Quick start with CLI) or click "Use this template" on GitHub and follow the project's README.
These projects follow best practices and include a complete working agent, tests, and an AGENTS.md file optimized to turn coding agents like Claude Code and Cursor into LiveKit experts.
Python starter project
Ready-to-go Python starter project. Clone a repo with all the code you need to get started.
Node.js starter project
Ready-to-go Node.js starter project. Clone a repo with all the code you need to get started.
Requirements
The following sections describe the minimum requirements to get started with LiveKit Agents.
- LiveKit Agents requires Python >= 3.10.
- This guide uses the uv package manager.
- LiveKit Agents for Node.js requires Node.js >= 20.
- This guide uses the pnpm package manager and requires pnpm >= 10.15.0.
LiveKit Cloud
This guide assumes you have signed up for a free LiveKit Cloud account. LiveKit Cloud includes agent deployment, model inference, and realtime media transport. Create a free project and use the API keys in the following steps to get started.
While this guide assumes LiveKit Cloud, the instructions can be adapted for self-hosting the open-source LiveKit server instead. For self-hosting in production, set up a custom deployment environment, and make the following changes: remove the enhanced noise cancellation plugin from the agent code, and use plugins for your own AI providers.
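When self-hosting, the agent can't rely on LiveKit Inference model strings or the Cloud-only noise cancellation plugin, so you construct the session from provider plugins directly. A minimal sketch of that swap, assuming the Deepgram, OpenAI, Cartesia, and Silero plugin packages are installed and provider API keys are set in the environment (constructor defaults here are assumptions; check each plugin's documentation):

```python
# Sketch: a self-hosted AgentSession built from provider plugins instead
# of LiveKit Inference model strings. Assumes livekit-agents plus the
# deepgram, openai, cartesia, and silero plugin packages are installed,
# with each provider's API key available in the environment.
from livekit.agents import AgentSession
from livekit.plugins import cartesia, deepgram, openai, silero

session = AgentSession(
    stt=deepgram.STT(),      # speech-to-text via your own Deepgram key
    llm=openai.LLM(),        # LLM via your own OpenAI key
    tts=cartesia.TTS(),      # text-to-speech via your own Cartesia key
    vad=silero.VAD.load(),   # voice activity detection runs locally
)
```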
LiveKit CLI
Use the LiveKit CLI to manage LiveKit API keys and deploy your agent to LiveKit Cloud.
Install the LiveKit CLI:

macOS (Homebrew):

```shell
brew install livekit-cli
```

Linux:

```shell
curl -sSL https://get.livekit.io/cli | bash
```

Windows:

```shell
winget install LiveKit.LiveKitCLI
```

Tip: You can also download the latest precompiled binaries here.

Build from source — the livekit-cli repo uses Git LFS for embedded video resources, so ensure git-lfs is installed on your machine before proceeding:

```shell
git clone https://github.com/livekit/livekit-cli
cd livekit-cli
make install
```

Link your LiveKit Cloud project to the CLI:

```shell
lk cloud auth
```

This opens a browser window to authenticate and link your project to the CLI.
Quickstart steps
The following sections walk you through the steps to get your first agent running.
Setup with CLI
The simplest way to get your first agent running is with the LiveKit CLI.
Make sure your project meets all requirements, then run:
Python:

```shell
lk agent init my-agent --template agent-starter-python
```

Node.js:

```shell
lk agent init my-agent --template agent-starter-node
```
The CLI clones the template into the my-agent directory, creates an .env.local file with your LiveKit credentials, and prints the next steps to run your agent.
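The generated `.env.local` holds your project credentials as environment variables. A representative sketch, using the conventional LiveKit variable names with placeholder values (your actual URL and keys come from your LiveKit Cloud project):

```shell
# .env.local — created by `lk agent init` with your project's credentials.
# Values below are placeholders, not real credentials.
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=APIxxxxxxxxxxx
LIVEKIT_API_SECRET=your-api-secret
```

Keep this file out of version control; it contains secrets.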
Save the link provided by the CLI after the line "To chat with your running agent, visit" for later use.
Follow the instructions it prints, which guide you through the following steps:
1. Select a project to use — If you don't have a default project set, the CLI prompts you to select a project to use.

2. Change into the project directory — The project directory is named after your agent.

   ```shell
   cd my-agent
   ```

3. Install dependencies — Install the agent's runtime and plugin dependencies.

   Python:

   ```shell
   uv sync
   ```

   Node.js:

   ```shell
   pnpm install
   ```

4. Download model files — Required for the Silero VAD, turn detector, and noise cancellation plugins.

   Python:

   ```shell
   uv run src/agent.py download-files
   ```

   Node.js:

   ```shell
   pnpm download-files
   ```

5. Run your agent — Run your agent in development mode.

   Python:

   ```shell
   uv run src/agent.py dev
   ```

   Node.js:

   ```shell
   pnpm dev
   ```
Speak to your agent
Open a browser and visit the link you saved earlier from the CLI output to speak to your agent.
Other options
You can customize your agent by choosing different AI models and by exploring testing and deployment options.
AI models
Voice agents require one or more AI models to provide understanding, intelligence, and speech. LiveKit Agents supports both high-performance STT-LLM-TTS voice pipelines constructed from multiple specialized models, as well as realtime models with direct speech-to-speech capabilities.
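Conceptually, the chained pipeline is three stages applied in sequence each turn: transcribe the user's audio to text, generate a text reply, then synthesize speech. A toy stdlib-only sketch of that control flow, where the stub functions stand in for real STT, LLM, and TTS providers (in a real agent, LiveKit Agents wires these stages together for you):

```python
# Toy sketch of the STT -> LLM -> TTS control flow in a chained voice
# pipeline. The three stubs stand in for real model providers.
def stt(audio: bytes) -> str:
    """Stub speech-to-text: pretend the audio said hello."""
    return "hello agent"

def llm(user_text: str) -> str:
    """Stub LLM: produce a text reply to the transcribed input."""
    return f"You said: {user_text}"

def tts(reply_text: str) -> bytes:
    """Stub text-to-speech: encode the reply as audio bytes."""
    return reply_text.encode("utf-8")

def voice_turn(audio: bytes) -> bytes:
    # One conversational turn: audio in, audio out, text in the middle.
    return tts(llm(stt(audio)))

print(voice_turn(b"...").decode("utf-8"))  # -> You said: hello agent
```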
STT-LLM-TTS pipeline: Your agent strings together three specialized providers into a high-performance voice pipeline powered by LiveKit Inference. No additional setup is required.
| Component | Model | Alternatives |
|---|---|---|
| STT | Deepgram Nova-3 | STT models |
| LLM | OpenAI GPT-4.1 mini | LLM models |
| TTS | Cartesia Sonic-3 | TTS models |
Realtime model: Your agent uses a single realtime model to provide an expressive and lifelike voice experience.
| Model | Required Key | Alternatives |
|---|---|---|
| OpenAI Realtime API | OPENAI_API_KEY | Realtime models |
You can change the AI models used by editing your agent file. Full agent files for STT-LLM-TTS and Realtime models can be found in the Agent code section.
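For the chained pipeline, each model is selected by the string passed to `AgentSession`, so changing providers is a one-line edit in your agent file. A sketch of the idea (the commented-out alternative is a placeholder pattern, not a confirmed model identifier; see the AI models docs for supported values):

```python
# Sketch: model selection in the chained pipeline is just the strings
# passed to AgentSession in src/agent.py. Replace any of them to change
# providers. "provider/model-name" is a placeholder, not a real
# identifier — consult the AI models documentation for valid names.
from livekit.agents import AgentSession
from livekit.plugins import silero

session = AgentSession(
    stt="deepgram/nova-3:multi",   # replace to change the STT model
    llm="openai/gpt-4.1-mini",     # replace to change the LLM,
                                   # e.g. llm="provider/model-name"
    tts="cartesia/sonic-3:9626c31c-bec5-4cca-baa8-f8ba9e84c8bc",
    vad=silero.VAD.load(),
)
```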
Test and deploy
Use different modes and deployment options to test and deploy your agent.
Server startup modes
Start your agent server in development or production modes.
- `console` mode: For Python only, runs your agent locally in your terminal.
- `dev` mode: Runs your agent in development mode for testing and debugging.
- `start` mode: Runs your agent in production mode.
To learn more about these modes, see the Server startup modes reference.
For Python agents, run the following command to start your agent in production mode:

```shell
uv run src/agent.py start
```
The Node.js starter includes build and start scripts. To run in production mode:

```shell
pnpm build
pnpm start
```
Connect to playground
Start your agent in dev mode to connect it to LiveKit and make it available from anywhere on the internet:
Python:

```shell
uv run src/agent.py dev
```

Node.js:

```shell
pnpm dev
```
Use the Agents playground to speak with your agent and explore its full range of multimodal capabilities. Note that you'll need to set the Agent name, which should be `my-agent` for this quickstart.
Deploy to LiveKit Cloud
Run the following from the project directory to register and deploy your agent:

```shell
lk agent create
```
After the deployment completes, you can access your agent in the playground, or continue to use the console mode as you build and test your agent locally.
Agent code
Once you have the quickstart running, you can dig into the agent code. For the difference between realtime and chained (STT-LLM-TTS) pipelines, see AI models. The tabs below show the full files for each pipeline type so you can swap, copy, or adapt them.
STT-LLM-TTS pipeline (`src/agent.py`):

```python
from dotenv import load_dotenv

from livekit import agents, rtc
from livekit.agents import AgentServer, AgentSession, Agent, room_io, TurnHandlingOptions
from livekit.plugins import noise_cancellation, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel

load_dotenv(".env.local")


class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="""You are a helpful voice AI assistant.
            You eagerly assist users with their questions by providing information from your extensive knowledge.
            Your responses are concise, to the point, and without any complex formatting or punctuation including emojis, asterisks, or other symbols.
            You are curious, friendly, and have a sense of humor.""",
        )


server = AgentServer()


@server.rtc_session(agent_name="my-agent")
async def my_agent(ctx: agents.JobContext):
    session = AgentSession(
        stt="deepgram/nova-3:multi",
        llm="openai/gpt-4.1-mini",
        tts="cartesia/sonic-3:9626c31c-bec5-4cca-baa8-f8ba9e84c8bc",
        vad=silero.VAD.load(),
        turn_handling=TurnHandlingOptions(
            turn_detection=MultilingualModel(),
        ),
    )

    await session.start(
        room=ctx.room,
        agent=Assistant(),
        room_options=room_io.RoomOptions(
            audio_input=room_io.AudioInputOptions(
                noise_cancellation=lambda params: (
                    noise_cancellation.BVCTelephony()
                    if params.participant.kind == rtc.ParticipantKind.PARTICIPANT_KIND_SIP
                    else noise_cancellation.BVC()
                ),
            ),
        ),
    )

    await session.generate_reply(
        instructions="Greet the user and offer your assistance."
    )


if __name__ == "__main__":
    agents.cli.run_app(server)
```
Realtime model (`src/agent.py`):

```python
from dotenv import load_dotenv

from livekit import agents, rtc
from livekit.agents import AgentServer, AgentSession, Agent, room_io
from livekit.plugins import (
    openai,
    noise_cancellation,
)

load_dotenv(".env.local")


class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful voice AI assistant.")


server = AgentServer()


@server.rtc_session(agent_name="my-agent")
async def my_agent(ctx: agents.JobContext):
    session = AgentSession(
        llm=openai.realtime.RealtimeModel(voice="coral"),
    )

    await session.start(
        room=ctx.room,
        agent=Assistant(),
        room_options=room_io.RoomOptions(
            audio_input=room_io.AudioInputOptions(
                noise_cancellation=lambda params: (
                    noise_cancellation.BVCTelephony()
                    if params.participant.kind == rtc.ParticipantKind.PARTICIPANT_KIND_SIP
                    else noise_cancellation.BVC()
                ),
            ),
        ),
    )

    await session.generate_reply(
        instructions="Greet the user and offer your assistance. You should start by speaking in English."
    )


if __name__ == "__main__":
    agents.cli.run_app(server)
```
Next steps
Follow these guides to bring your voice AI app to life in the real world.
Web and mobile frontends
Put your agent in your pocket with a custom web or mobile app.
Telephony integration
Your agent can place and receive calls with LiveKit's SIP integration.
Testing your agent
Add behavioral tests to fine-tune your agent's behavior.
Building voice agents
Comprehensive documentation to build advanced voice AI apps with LiveKit.
Agent server
Learn how to manage your agents with agent servers and jobs.
Deploying to LiveKit Cloud
Learn more about deploying and scaling your agent in production.
AI Models
Explore the full list of AI models available with LiveKit Agents.
Recipes
A comprehensive collection of examples, guides, and recipes for LiveKit Agents.