Overview
LiveKit Agents has full support for LLM tool use. This feature allows you to create a custom library of tools to extend your agent's context, create interactive experiences, and overcome LLM limitations. Within a tool, you can:
- Generate agent speech with `session.say()` or `session.generate_reply()` (see the sketch after this list).
- Call methods on the frontend using RPC.
- Hand off control to another agent as part of a workflow.
- Store and retrieve session data from the `context`.
- Call external APIs or look up data for RAG.
- Anything else that a Python function can do.
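For example, a tool can speak to the user before it returns a result. The sketch below is illustrative only: the `confirm_order` tool and its return payload are hypothetical, while `context.session.say()` is the speech API mentioned above.

```python
from livekit.agents import RunContext, function_tool


@function_tool()
async def confirm_order(context: RunContext, order_id: str) -> dict:
    """Confirm an order by its ID."""
    # Speak to the user directly from inside the tool via the current session.
    context.session.say("One moment while I pull up that order.")
    # Replace with your own lookup (external API, database, RAG, and so on).
    return {"order_id": order_id, "status": "confirmed"}
```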
Tool definition
Add tools to your agent class with the `@function_tool` decorator. The LLM has access to them automatically.
```python
from typing import Any

from livekit.agents import Agent, RunContext, function_tool


class MyAgent(Agent):
    @function_tool()
    async def lookup_weather(
        self,
        context: RunContext,
        location: str,
    ) -> dict[str, Any]:
        """Look up weather information for a given location."""

        return {"weather": "sunny", "temperature_f": 70}
```
A good tool definition is key to reliable tool use from your LLM. Be specific about what the tool does, when it should or should not be used, what the arguments are for, and what type of return value to expect.
Name and description
By default, the tool name is the name of the function, and the description is its docstring. Override this behavior with the `name` and `description` arguments to the `@function_tool` decorator.
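For example, a tool with an explicit name and description (the values shown are illustrative):

```python
@function_tool(
    name="get_weather",
    description="Look up the current weather for a city. Use only when the user explicitly asks about the weather.",
)
async def lookup_weather(
    self,
    context: RunContext,
    location: str,
) -> dict[str, Any]:
    return {"weather": "sunny", "temperature_f": 70}
```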
Arguments and return value
The tool arguments are copied automatically by name from the function arguments. Type hints for arguments and return value are included, if present.
Place additional information about the tool arguments and return value, if needed, in the tool description.
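For instance, the sketch below (the `units` argument and its allowed values are hypothetical) documents an argument's constraints and the return shape in the tool description:

```python
@function_tool(
    description=(
        "Look up weather for a location. "
        "The `units` argument must be 'metric' or 'imperial'. "
        "Returns a dict with `weather` (str) and `temperature` (float)."
    )
)
async def lookup_weather(
    self,
    context: RunContext,
    location: str,
    units: str,
) -> dict[str, Any]:
    temperature = 70 if units == "imperial" else 21
    return {"weather": "sunny", "temperature": temperature}
```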
RunContext
Tools include support for a special `context` argument. This contains access to the current `session`, `function_call`, `speech_handle`, and `userdata`. Consult the documentation on speech and state within workflows for more information about how to use these features.
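A minimal sketch of how these fields might be used inside a tool; the `save_note` tool and the dict-shaped `userdata` are assumptions for illustration:

```python
@function_tool()
async def save_note(context: RunContext, note: str) -> str:
    """Save a note for the current user."""
    # Metadata about the originating tool call issued by the LLM.
    call_name = context.function_call.name
    # context.speech_handle exposes the speech handle associated with this tool call.
    # Shared per-session state; assumes userdata was initialized as a dict.
    context.userdata.setdefault("notes", []).append(note)
    # The current session is also available, e.g. to generate speech.
    context.session.say(f"Saved your note via {call_name}.")
    return "saved"
```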
Error handling
Raise the `ToolError` exception to return an error to the LLM in place of a response. You may include a custom message to describe the error and/or recovery options.
```python
@function_tool()
async def lookup_weather(
    self,
    context: RunContext,
    location: str,
) -> dict[str, Any]:
    if location == "mars":
        raise ToolError("This location is coming soon. Please join our mailing list to stay updated.")
    else:
        return {"weather": "sunny", "temperature_f": 70}
```
Dynamic and shared tools
You can exercise more control over the available tools by setting the `tools` argument directly.
To share a tool between multiple agents, define it outside of their class and then provide it to each. The `RunContext` is especially useful for this purpose, giving access to the current session, agent, and state.
```python
from livekit.agents import Agent, RunContext, function_tool


@function_tool()
async def lookup_user(
    context: RunContext,
    user_id: str,
) -> dict:
    """Look up a user's information by ID."""

    return {"name": "John Doe", "email": "john.doe@example.com"}


class AgentA(Agent):
    def __init__(self):
        super().__init__(
            tools=[lookup_user],
            # ...
        )


class AgentB(Agent):
    def __init__(self):
        super().__init__(
            tools=[lookup_user],
            # ...
        )
```
Use `agent.update_tools()` to update available tools after creating an agent. Note that this replaces all tools, including those registered automatically within the agent class.
```python
# add a tool
await agent.update_tools(agent.tools + [tool_a])

# remove a tool
await agent.update_tools([tool for tool in agent.tools if tool != tool_a])

# replace all tools
await agent.update_tools([tool_a, tool_b])
```
Forwarding to the frontend
Forward tool calls to a frontend app using RPC. This is useful when the data needed to fulfill the function call is only available at the frontend. You may also use RPC to trigger actions or UI updates in a structured way.
For instance, here's a function that accesses the user's live location from their web browser:
Agent implementation
```python
import json

from livekit.agents import RunContext, ToolError, function_tool, get_job_context


@function_tool()
async def get_user_location(
    context: RunContext,
    high_accuracy: bool,
):
    """Retrieve the user's current geolocation as lat/lng.

    Args:
        high_accuracy: Whether to use high accuracy mode, which is slower but more precise

    Returns:
        A dictionary containing latitude and longitude coordinates
    """
    try:
        room = get_job_context().room
        participant_identity = next(iter(room.remote_participants))
        response = await room.local_participant.perform_rpc(
            destination_identity=participant_identity,
            method="getUserLocation",
            payload=json.dumps({"highAccuracy": high_accuracy}),
            response_timeout=10.0 if high_accuracy else 5.0,
        )
        return response
    except Exception:
        raise ToolError("Unable to retrieve user location")
```
Frontend implementation
The following example uses the JavaScript SDK. The same pattern works for other SDKs. For more examples, see the RPC documentation.
```ts
import { RpcError, RpcInvocationData } from 'livekit-client';

localParticipant.registerRpcMethod(
  'getUserLocation',
  async (data: RpcInvocationData) => {
    try {
      let params = JSON.parse(data.payload);

      const position: GeolocationPosition = await new Promise((resolve, reject) => {
        navigator.geolocation.getCurrentPosition(resolve, reject, {
          enableHighAccuracy: params.highAccuracy ?? false,
          timeout: data.responseTimeout,
        });
      });

      return JSON.stringify({
        latitude: position.coords.latitude,
        longitude: position.coords.longitude,
      });
    } catch (error) {
      throw new RpcError(1, 'Could not retrieve user location');
    }
  },
);
```
Examples
- Dynamic tool creation: a complete example with dynamic tool lists.
- Parallel tool calling: an example showing how to call tools in parallel.