Overview
LiveKit Agents has full support for LLM tool use. This feature allows you to create a custom library of tools to extend your agent's context, create interactive experiences, and overcome LLM limitations. Tools can run synchronously or in the background, letting the agent keep talking while long-running work completes.
Within a tool, you can:
- Generate agent speech with `session.say()` or `session.generate_reply()`.
- Call methods on the frontend using RPC.
- Hand off control to another agent as part of a workflow.
- Store and retrieve session data from the context.
- Call external APIs or look up data for RAG.
- Anything else that a Python function can do.
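Under the hood, a function tool is described to the LLM by its name, docstring, and typed parameters. LiveKit's `@function_tool` decorator handles this for you; the sketch below is a framework-agnostic illustration of the idea, where `describe_tool` and `lookup_weather` are hypothetical names, not LiveKit APIs.

```python
import inspect
from typing import get_type_hints

def describe_tool(fn):
    # Build a JSON-schema-style tool description from the function's
    # signature and docstring, similar to what a tool-calling framework
    # sends to the LLM.
    hints = get_type_hints(fn)
    hints.pop("return", None)
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    params = {name: {"type": type_names.get(tp, "string")} for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

async def lookup_weather(location: str) -> str:
    """Look up the current weather for a location."""
    # Placeholder result; a real tool would call an external weather API.
    return f"Sunny in {location}"

schema = describe_tool(lookup_weather)
```

The LLM never runs the function body itself; it only sees the schema, decides when to call the tool, and supplies arguments matching the declared parameters.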
Tool types
Two types of tools are supported:
- Function tools: Tools defined as functions in your agent's codebase and called by the LLM.
- Provider tools: Tools built into a specific model provider (e.g., OpenAI or Gemini) and executed internally by the provider's model server.
Provider tools
Many LLM providers, including OpenAI, Gemini, and xAI, include built-in server-side tools that are executed entirely within a single API call. Examples include web search, code execution, and file search. These tools, called "provider tools" in LiveKit Agents, can be added to any agent that uses a supported LLM. You can mix and match provider tools with function tools by passing them to the tools parameter on your Agent.
```python
from livekit.plugins import openai  # replace with any supported provider

agent = MyAgent(
    llm=openai.responses.LLM(model="gpt-4.1"),
    tools=[openai.tools.WebSearch()],  # replace with any supported tool
)
```
Refer to the documentation for each model provider for usage details.
| Provider | Supported tools |
|---|---|
| Anthropic | ComputerUse |
| Gemini | GoogleSearch, GoogleMaps, URLContext, FileSearch, ToolCodeExecution |
| Mistral AI | WebSearch, DocumentLibrary, CodeInterpreter |
| OpenAI | WebSearch, FileSearch, CodeInterpreter |
| xAI | WebSearch, XSearch, FileSearch |
Examples
The following additional examples show how to use tools in different ways:
- Use of enum: Example showing how to annotate arguments with an enum.
- Dynamic tool creation: Complete example with dynamic tool lists.
- MCP Agent: Example agent that uses tools exposed by an MCP server.
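The enum example above constrains a tool argument to a fixed set of values. A minimal, framework-agnostic sketch of the pattern (the names `TempUnit`, `get_temperature`, and `allowed_values` are hypothetical, not LiveKit APIs):

```python
import enum
from typing import get_type_hints

class TempUnit(enum.Enum):
    CELSIUS = "celsius"
    FAHRENHEIT = "fahrenheit"

async def get_temperature(location: str, unit: TempUnit) -> str:
    """Report the temperature for a location in the requested unit."""
    # Placeholder result; a real tool would query a weather service.
    return f"22 degrees {unit.value} in {location}"

def allowed_values(fn, param: str) -> list[str]:
    # Read the Enum annotation so a tool schema can advertise the
    # permitted values as a JSON-schema "enum" constraint.
    annotation = get_type_hints(fn)[param]
    return [member.value for member in annotation]
```

Annotating the parameter with an `Enum` lets the framework tell the LLM exactly which values are valid, so the model picks from that list instead of free-form text.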
In this section
Read more about each topic.
| Topic | Description |
|---|---|
| Function tools | Define function tools with decorators, RunContext, speech in tools, interruptions, dynamic tools, toolsets, and error handling. |
| Async tools | Run long-running tools in the background so the agent can keep talking (Python only). |
| Model Context Protocol (MCP) | Expose tools from MCP servers to your agent (Python only). |
| Forwarding to the frontend | Fulfill tool calls via RPC from the client. |
Additional resources
The following articles provide more information about the topics discussed in this guide: