
Chat context

How to use ChatContext to manage conversation history in your agents.

Overview

ChatContext is the conversation history sent to the LLM on each turn. It holds an ordered list of items—messages and events like agent handoffs—that together define what the model knows about the current conversation.

Each agent and task maintains its own chat_ctx. By default, a new agent or task starts with an empty context. You can initialize it at construction time, modify it during turns, or pass it across handoffs.

Accessing the context

Within an agent or task, the current context is available as self.chat_ctx:

# Python
class MyAgent(Agent):
    async def on_enter(self) -> None:
        print(self.chat_ctx.items)

// Node.js
class MyAgent extends voice.Agent {
  async onEnter(): Promise<void> {
    console.log(this.chatCtx.items);
  }
}

The complete conversation history across all agents in a session is available on session.history:

history = self.session.history         # Python
const history = this.session.history;  // Node.js

Structure

ChatContext exposes an items list. Each item has a type field that determines what it represents:

Type                    Description
message                 A conversation turn with a role (system, user, or assistant) and content (text, images, or instructions).
function_call           A tool invocation requested by the LLM.
function_call_output    The result returned from a tool call.
agent_handoff           Added automatically when control transfers between agents.
agent_config_update     Records a change to the agent's instructions or tools. Only available in Python.

To get the text of a message item, use text_content (Python) or textContent (Node.js). This property is available only on ChatMessage items.

Core operations

These are the most commonly used ChatContext operations. For additional methods like insert() and get_by_id(), see the reference for Python and Node.js.

Creating a context

Create a ChatContext and add messages directly:

# Python
from livekit.agents import ChatContext

chat_ctx = ChatContext()
chat_ctx.add_message(role="system", content="You are a helpful assistant.")
chat_ctx.add_message(role="user", content="Hello!")

// Node.js
import { llm } from '@livekit/agents';

const chatCtx = new llm.ChatContext();
chatCtx.addMessage({ role: 'system', content: 'You are a helpful assistant.' });
chatCtx.addMessage({ role: 'user', content: 'Hello!' });

Copying a context

Use copy() to create a snapshot that can be passed to another agent or modified independently. By default, copy() includes all items — messages, function calls, handoff markers, and system (instruction) messages.

You can filter the copy with the following options:

Option                   Description
exclude_instructions     Omit system/developer messages.
exclude_function_call    Omit function calls and their outputs.
exclude_handoff          Omit agent handoff markers.
exclude_empty_message    Omit messages with no content.
exclude_config_update    Omit agent config update items.
# Python
# Copy everything
full_copy = self.chat_ctx.copy()

# Copy only user/assistant turns, without tool calls
turns_only = self.chat_ctx.copy(exclude_instructions=True, exclude_function_call=True)

// Node.js
// Copy everything
const fullCopy = this.chatCtx.copy();

// Copy only user/assistant turns, without tool calls
const turnsOnly = this.chatCtx.copy({ excludeInstructions: true, excludeFunctionCall: true });
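The filtering options above can be sketched in plain Python. This is an illustrative model of the behavior, using dicts as stand-ins for real items, not the actual copy() implementation:

```python
def filtered_copy(items, *, exclude_instructions=False, exclude_function_call=False):
    """Return shallow copies of items, dropping categories per the flags."""
    out = []
    for item in items:
        # System messages are the "instructions" in this sketch.
        if exclude_instructions and item["type"] == "message" and item.get("role") == "system":
            continue
        # Drop tool calls together with their outputs, so neither is orphaned.
        if exclude_function_call and item["type"] in ("function_call", "function_call_output"):
            continue
        out.append(dict(item))  # copy so edits don't touch the original
    return out

history = [
    {"type": "message", "role": "system", "content": "Be brief."},
    {"type": "message", "role": "user", "content": "Hi"},
    {"type": "function_call", "name": "lookup"},
    {"type": "function_call_output", "output": "ok"},
    {"type": "message", "role": "assistant", "content": "Hello"},
]

# Only the user and assistant turns survive both filters.
turns_only = filtered_copy(history, exclude_instructions=True, exclude_function_call=True)
```

Note that function calls and their outputs are always dropped as a pair; keeping a tool result without its call would confuse most LLM providers.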

Truncating a context

truncate() reduces a context to the most recent n items. It always preserves system instructions even if they fall outside the item window, and strips any leading function call items to avoid orphaned tool results. This is useful when you want to pass only the tail of a long conversation to the next agent:

recent = self.chat_ctx.copy().truncate(max_items=6)  # Python
const recent = this.chatCtx.copy().truncate(6);      // Node.js
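The pruning rules described above can be modeled in a few lines of plain Python. This is a behavioral sketch with dict stand-ins for items, not the library's implementation:

```python
def truncate(items, max_items):
    """Keep the last max_items items, preserving system instructions and
    stripping leading function-call items from the window."""
    tail = list(items[-max_items:])
    # Strip leading tool-call items so no result is left without its call.
    while tail and tail[0]["type"] in ("function_call", "function_call_output"):
        tail.pop(0)
    # Re-add system instructions that fell outside the window.
    instructions = [
        i for i in items
        if i["type"] == "message" and i.get("role") == "system" and i not in tail
    ]
    return instructions + tail

history = [
    {"type": "message", "role": "system", "content": "Be brief."},
    {"type": "message", "role": "user", "content": "q1"},
    {"type": "message", "role": "assistant", "content": "a1"},
    {"type": "function_call", "id": "f1"},
    {"type": "function_call_output", "id": "f1"},
    {"type": "message", "role": "assistant", "content": "a2"},
]

# The 3-item window lands on the tool call; both tool items are stripped,
# and the system message is carried over from outside the window.
recent = truncate(history, 3)
```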

Merging contexts

merge() combines items from another context into the current one, deduplicating by item ID and maintaining chronological order. This is useful after parallel tasks when you need to reunify their conversation histories:

# Python
primary_ctx.merge(other_ctx)

# Merge without carrying over tool calls
primary_ctx.merge(other_ctx, exclude_function_call=True)

// Node.js
primaryCtx.merge(otherCtx);

// Merge without carrying over tool calls
primaryCtx.merge(otherCtx, { excludeFunctionCall: true });
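The dedupe-and-reorder behavior can be sketched in plain Python. The dicts, "id", and "created_at" fields here are illustrative stand-ins for the real item attributes:

```python
def merge(primary, other):
    """Combine two item lists, deduplicating by ID and sorting chronologically."""
    seen = {item["id"] for item in primary}
    combined = primary + [item for item in other if item["id"] not in seen]
    return sorted(combined, key=lambda item: item["created_at"])

# Two histories that diverged during parallel tasks, sharing one item.
a = [
    {"id": "m1", "created_at": 1, "text": "hi"},
    {"id": "m3", "created_at": 3, "text": "sure"},
]
b = [
    {"id": "m1", "created_at": 1, "text": "hi"},      # duplicate, dropped
    {"id": "m2", "created_at": 2, "text": "one sec"},
]

# The shared item appears once, and m2 slots back into chronological order.
merged = merge(a, b)
```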

Common patterns

These examples show how to use ChatContext in typical agent workflows. Each pattern includes both Python and Node.js examples.

Initialize with user data

Load user-specific context before the session starts and pass it to the agent constructor. This is the recommended approach for personalizing the agent without a round-trip to the LLM:

# Python
initial_ctx = ChatContext()
initial_ctx.add_message(role="assistant", content=f"The user's name is {user_name}.")

await session.start(
    room=ctx.room,
    agent=MyAgent(chat_ctx=initial_ctx),
)

// Node.js
const initialCtx = new llm.ChatContext();
initialCtx.addMessage({ role: 'assistant', content: `The user's name is ${userName}.` });

await session.start({
  room: ctx.room,
  agent: new MyAgent({ chatCtx: initialCtx }),
});

For a complete example, see External data and RAG.

Modifying context during a turn

Override the on_user_turn_completed node to inject additional context before the LLM generates its reply. Messages added here apply to the current turn only. Call update_chat_ctx to persist them:

# Python
from livekit.agents import ChatContext, ChatMessage

async def on_user_turn_completed(
    self, turn_ctx: ChatContext, new_message: ChatMessage,
) -> None:
    # your function that retrieves context from a database, API, or other source
    extra = await fetch_relevant_data(new_message.text_content)
    turn_ctx.add_message(role="assistant", content=extra)
    await self.update_chat_ctx(turn_ctx)  # persist beyond this turn

// Node.js
import { llm } from '@livekit/agents';

async onUserTurnCompleted(
  chatCtx: llm.ChatContext,
  newMessage: llm.ChatMessage,
): Promise<void> {
  // your function that retrieves context from a database, API, or other source
  const extra = await fetchRelevantData(newMessage.textContent);
  chatCtx.addMessage({ role: 'assistant', content: extra });
  await this.updateChatCtx(chatCtx); // persist beyond this turn
}

For more details on pipeline nodes, see Pipeline nodes & hooks.

Passing context during handoffs

Pass the current context to the next agent to preserve conversation history across handoffs. Use exclude_instructions=True to avoid forwarding the previous agent's system prompt:

# Python
return NextAgent(chat_ctx=self.chat_ctx.copy(exclude_instructions=True))

// Node.js
return llm.handoff({
  agent: new NextAgent({ chatCtx: this.chatCtx.copy({ excludeInstructions: true }) }),
});

For long conversations, summarize the context before passing it to reduce token cost. See Summarizing context for a complete example.

Adding images and video frames

Message content can include images alongside text. Pass a list of text and ImageContent items to add_message:

# Python
from livekit.agents import ChatContext
from livekit.agents.llm import ImageContent

initial_ctx = ChatContext()
initial_ctx.add_message(
    role="user",
    content=[
        "Here is a picture of me",
        ImageContent(image="https://example.com/image.jpg"),
    ],
)

// Node.js
import { llm } from '@livekit/agents';

const initialCtx = new llm.ChatContext();
initialCtx.addMessage({
  role: 'user',
  content: [
    'Here is a picture of me',
    llm.createImageContent({ image: 'https://example.com/image.jpg' }),
  ],
});

You can also inject live video frames into the context during a conversation turn. For details, see Images and Video.

Custom context for generate_reply()

Pass a modified ChatContext to generate_reply() to fully control the context for a single reply. This replaces the agent's session-level context for that reply only, which is useful when you need to exclude certain messages, inject one-off context, or override instructions:

# Python
# Copy and modify the current context for this reply only
ctx = session.current_agent.chat_ctx.copy()
# Modify as needed: trim history, inject context, replace instructions, etc.
await session.generate_reply(chat_ctx=ctx)

// Node.js
// Copy and modify the current context for this reply only
const ctx = session.currentAgent.chatCtx.copy();
// Modify as needed: trim history, inject context, replace instructions, etc.
await session.generateReply({ chatCtx: ctx });

For the full list of generate_reply() parameters, see Speech & audio.

Standalone LLM usage

ChatContext also works outside of agents and sessions. Pass it directly to an LLM's chat() method for background tasks, preprocessing, or any workflow that needs LLM output without the voice pipeline.

For more details, see Standalone LLM usage.

Additional resources