
Async tools

Handle long-running tools so agents can keep talking.

Available in: Python only

Overview

Regular tools block the conversation until they return. The agent waits for all pending tools to complete before generating its next turn. Because the LLM doesn't have the tool call in its chat context while a regular tool runs, it can't see that one is already in flight (and may call it again), can't cancel a running tool, and can't send progress updates from inside one.

Async tools run in the background so the conversation continues. The agent is notified asynchronously when the tool returns, and you can push progress updates while the tool works. Use this for tools that take more than a few seconds, such as booking a flight, running a web search, or processing a document.

Create async tools inside an AsyncToolset, using an AsyncRunContext in place of the standard context. Call the context's update function to provide progress updates while the tool works, and return the final result when the tool completes.

The agent waits for the first update from each async tool before resuming normal conversation. Additional updates are added to its context when it is idle, and a new turn is generated. This keeps the agent conversational and engaging while the tool runs.
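The update flow can be illustrated with a plain-asyncio sketch (this is a conceptual model, not the toolset's actual implementation): the tool pushes progress messages to a queue, the agent loop blocks only on the first one, and drains later updates when idle.

```python
import asyncio


async def slow_tool(updates: asyncio.Queue):
    """Stand-in for an async tool that reports progress as it works."""
    await updates.put("Searching flights...")  # first update: agent resumes after this
    await asyncio.sleep(0.05)                  # background work continues
    await updates.put("Found 3 options.")      # later updates are consumed when idle
    await updates.put("Booked! Confirmation FL-847293.")


async def agent_loop():
    updates: asyncio.Queue = asyncio.Queue()
    tool = asyncio.create_task(slow_tool(updates))

    first = await updates.get()  # block only until the first update arrives
    seen = [first]

    await tool                   # tool finishes in the background
    while not updates.empty():   # drain remaining updates while idle
        seen.append(updates.get_nowait())
    return seen


print(asyncio.run(agent_loop()))
# → ['Searching flights...', 'Found 3 options.', 'Booked! Confirmation FL-847293.']
```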

Async tool agent

Full example of a travel assistant that books flights with progress updates and searches destinations in the background.

Basic usage

Subclass AsyncToolset and define tools with @function_tool. Use AsyncRunContext instead of RunContext to opt into background execution:

```python
from livekit.agents import Agent, function_tool
from livekit.agents.llm.async_toolset import AsyncRunContext, AsyncToolset


class BookingToolset(AsyncToolset):
    def __init__(self):
        super().__init__(id="booking")

    @function_tool()
    async def book_flight(
        self, ctx: AsyncRunContext, origin: str, destination: str, date: str
    ) -> str:
        """Book a flight for the user.

        Args:
            origin: Departure city or airport code.
            destination: Arrival city or airport code.
            date: Travel date (YYYY-MM-DD).
        """
        await ctx.update(f"Searching flights from {origin} to {destination} on {date}.")
        # → agent says: "Sure, let me look up flights from New York to Tokyo on April 15th."

        flights = await search_flights(origin, destination, date)
        await ctx.update(f"Found {len(flights)} options. Booking the best one now.")
        # → agent says: "I found 3 options. Booking the best one for you now."

        booking = await confirm_booking(flights[0])
        return f"Booked! Confirmation number: {booking.id}"
        # → agent says: "All set. Your booking confirmation number is FL-847293."


class TravelAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are a travel assistant.",
            tools=[BookingToolset()],
        )
```

Duplicate call handling

When the LLM calls a tool that's already running, AsyncToolset handles the duplicate based on the on_duplicate_call parameter. Duplicates are detected by tool name only, not by arguments.

| Mode | Description |
| --- | --- |
| `"confirm"` | Default. Sends the name and arguments of the running call back to the LLM and asks it to re-call with confirmation if a duplicate is intended. |
| `"allow"` | Runs the duplicate without restriction. |
| `"replace"` | Cancels the existing call and starts a new one. |
| `"reject"` | Rejects the duplicate and tells the LLM to cancel via `cancel_task` instead. |
```python
class MyToolset(AsyncToolset):
    def __init__(self):
        super().__init__(id="my_tools", on_duplicate_call="reject")
```
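The "replace" semantics can be sketched in plain asyncio (this is an assumed model of the behavior, not the library's internals): a duplicate call, matched by tool name only, cancels its in-flight predecessor before a fresh task starts.

```python
import asyncio


async def tool(result: list, label: str):
    """Stand-in for a long-running tool body."""
    try:
        await asyncio.sleep(0.1)
        result.append(f"{label} finished")
    except asyncio.CancelledError:
        result.append(f"{label} cancelled")
        raise


async def demo():
    result: list[str] = []
    running: dict[str, asyncio.Task] = {}

    async def call(name: str, label: str):
        old = running.get(name)
        if old is not None and not old.done():
            old.cancel()  # "replace": cancel the call already in flight
            try:
                await old
            except asyncio.CancelledError:
                pass
        running[name] = asyncio.create_task(tool(result, label))

    await call("book_flight", "first call")
    await asyncio.sleep(0.01)                 # let the first call make progress
    await call("book_flight", "second call")  # duplicate detected by name only
    await running["book_flight"]
    return result


print(asyncio.run(demo()))
# → ['first call cancelled', 'second call finished']
```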

Built-in management tools

AsyncToolset automatically adds two tools that let the LLM manage running background tasks:

  • get_running_tasks(): Returns a list of currently running async tool calls.
  • cancel_task(call_id): Cancels a running async tool call by its call ID.

These tools are available to the LLM alongside your custom tools. You don't need to define them yourself.

Note

By default, the LLM can cancel any running async tool via cancel_task, which raises asyncio.CancelledError inside the tool. Structure your logic to handle partial completion safely, or call ctx.disallow_interruptions() at the start of the tool to make it non-cancellable. When interruptions are disallowed, the LLM gets a ToolError instead of cancelling the task.
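Handling partial completion safely comes down to catching asyncio.CancelledError, rolling back, and re-raising. A minimal sketch in plain asyncio (the booking state here is hypothetical, and task.cancel() stands in for what cancel_task triggers):

```python
import asyncio


async def book_flight_body(state: dict):
    """Tool body that rolls back partial work on cancellation."""
    try:
        state["reserved"] = True   # partial work: seat held
        await asyncio.sleep(10)    # long step, e.g. payment confirmation
        return "booked"
    except asyncio.CancelledError:
        state["reserved"] = False  # roll back the partial reservation
        raise                      # re-raise so the task ends as cancelled


async def run_and_cancel():
    state = {"reserved": False}
    task = asyncio.create_task(book_flight_body(state))
    await asyncio.sleep(0.05)  # let the tool make partial progress
    task.cancel()              # stands in for the LLM calling cancel_task
    try:
        await task
    except asyncio.CancelledError:
        pass
    return state


print(asyncio.run(run_and_cancel()))
# → {'reserved': False}: partial work was rolled back
```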

Agent handoffs

During an agent handoff, async toolsets owned at the session level are persisted while those owned at the agent level are cancelled.

You can pass an AsyncToolset to either AgentSession or Agent. The example in Basic usage passes it to the Agent, which means any running tasks are cancelled if the agent hands off. To keep async tools running across handoffs, pass the toolset to AgentSession instead:

```python
session = AgentSession(
    # ... stt, llm, tts, etc.
    tools=[BookingToolset()],
)
```

Additional resources

For more information on concepts covered in this topic, see the following related topics: