Overview
ChatContext is the conversation history sent to the LLM on each turn. It holds an ordered list of items—messages and events like agent handoffs—that together define what the model knows about the current conversation.
Each agent and task maintains its own chat_ctx. By default, a new agent or task starts with an empty context. You can initialize it at construction time, modify it during turns, or pass it across handoffs.
Accessing the context
Within an agent or task, the current context is available as self.chat_ctx:
```python
class MyAgent(Agent):
    async def on_enter(self) -> None:
        print(self.chat_ctx.items)
```

```typescript
class MyAgent extends voice.Agent {
  async onEnter(): Promise<void> {
    console.log(this.chatCtx.items);
  }
}
```
The complete conversation history across all agents in a session is available on session.history:
```python
history = self.session.history
```

```typescript
const history = this.session.history;
```
Structure
ChatContext exposes an items list. Each item has a type field that determines what it represents:
| Type | Description |
|---|---|
| `message` | A conversation turn with a role (system, user, or assistant) and content (text, images, or instructions). |
| `function_call` | A tool invocation requested by the LLM. |
| `function_call_output` | The result returned from a tool call. |
| `agent_handoff` | Added automatically when control transfers between agents. |
| `agent_config_update` | Records a change to the agent's instructions or tools. Only available in Python. |
To get the text of a message type item, use text_content (Python) or textContent (Node.js). This property is only available on ChatMessage items.
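As a schematic illustration of how the items list is typically consumed, the sketch below uses plain dicts in place of the real ChatContext item classes (in the actual API you would check each item's type and read text_content on messages); it filters out tool-call records and keeps only message text:

```python
# Schematic sketch: plain dicts stand in for real ChatContext items.
# In the actual API you would check item.type and read item.text_content.
items = [
    {"type": "message", "role": "user", "content": ["Hello!"]},
    {"type": "function_call", "name": "lookup_weather"},
    {"type": "function_call_output", "output": "sunny"},
    {"type": "message", "role": "assistant", "content": ["Hi there!"]},
]

# Collect the text of message items only, skipping tool-call records
transcript = [
    " ".join(part for part in item["content"] if isinstance(part, str))
    for item in items
    if item["type"] == "message"
]
print(transcript)  # → ['Hello!', 'Hi there!']
```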
Core operations
These are the most commonly used ChatContext operations. For additional methods like insert() and get_by_id(), see the reference for Python and Node.js.
Creating a context
Create a ChatContext and add messages directly:
```python
from livekit.agents import ChatContext

chat_ctx = ChatContext()
chat_ctx.add_message(role="system", content="You are a helpful assistant.")
chat_ctx.add_message(role="user", content="Hello!")
```

```typescript
import { llm } from '@livekit/agents';

const chatCtx = new llm.ChatContext();
chatCtx.addMessage({ role: 'system', content: 'You are a helpful assistant.' });
chatCtx.addMessage({ role: 'user', content: 'Hello!' });
```
Copying a context
Use copy() to create a snapshot that can be passed to another agent or modified independently. By default, copy() includes all items — messages, function calls, handoff markers, and system (instruction) messages.
You can filter the copy with the following options:
| Option | Description |
|---|---|
| `exclude_instructions` | Omit system/developer messages. |
| `exclude_function_call` | Omit function calls and their outputs. |
| `exclude_handoff` | Omit agent handoff markers. |
| `exclude_empty_message` | Omit messages with no content. |
| `exclude_config_update` | Omit agent config update items. |
```python
# Copy everything
full_copy = self.chat_ctx.copy()

# Copy only user/assistant turns, without tool calls
turns_only = self.chat_ctx.copy(exclude_instructions=True, exclude_function_call=True)
```

```typescript
// Copy everything
const fullCopy = this.chatCtx.copy();

// Copy only user/assistant turns, without tool calls
const turnsOnly = this.chatCtx.copy({ excludeInstructions: true, excludeFunctionCall: true });
```
Truncating a context
truncate() reduces a context to the most recent n items. It always preserves system instructions even if they fall outside the item window, and strips any leading function call items to avoid orphaned tool results. This is useful when you want to pass only the tail of a long conversation to the next agent:
```python
recent = self.chat_ctx.copy().truncate(max_items=6)
```

```typescript
const recent = this.chatCtx.copy().truncate(6);
```
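The preservation rules can be modeled in a few lines of plain Python. This is a behavioral sketch under stated assumptions (dicts stand in for real items), not LiveKit's implementation:

```python
def truncate_model(items: list[dict], max_items: int) -> list[dict]:
    """Behavioral sketch of ChatContext.truncate(): keep the last
    max_items items, re-prepend system messages that fell outside the
    window, and drop leading tool-call items left without their call."""
    tail = items[-max_items:]
    # Drop leading function-call items so no orphaned tool results remain
    while tail and tail[0]["type"] in ("function_call", "function_call_output"):
        tail.pop(0)
    # Re-prepend system instructions that were cut off by the window
    head_systems = [
        it for it in items[:-max_items]
        if it["type"] == "message" and it.get("role") == "system"
    ]
    return head_systems + tail

items = [
    {"type": "message", "role": "system", "text": "Be helpful."},
    {"type": "message", "role": "user", "text": "Weather?"},
    {"type": "function_call", "name": "lookup_weather"},
    {"type": "function_call_output", "output": "sunny"},
    {"type": "message", "role": "assistant", "text": "It's sunny."},
]
# Window of 2 would keep an orphaned tool result; the model drops it
# and re-prepends the system message, leaving [system, assistant].
result = truncate_model(items, max_items=2)
```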
Merging contexts
merge() combines items from another context into the current one, deduplicating by item ID and maintaining chronological order. This is useful after parallel tasks when you need to reunify their conversation histories:
```python
primary_ctx.merge(other_ctx)

# Merge without carrying over tool calls
primary_ctx.merge(other_ctx, exclude_function_call=True)
```

```typescript
primaryCtx.merge(otherCtx);

// Merge without carrying over tool calls
primaryCtx.merge(otherCtx, { excludeFunctionCall: true });
```
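The dedup-and-reorder behavior can be sketched with plain dicts. This is a schematic model, not the library's code, and it assumes each item carries an id and a created_at timestamp for ordering:

```python
def merge_model(primary: list[dict], other: list[dict]) -> list[dict]:
    """Behavioral sketch of ChatContext.merge(): bring in items from
    `other` that are not already present (dedup by id), then restore
    chronological order using an assumed created_at field."""
    seen = {item["id"] for item in primary}
    merged = primary + [item for item in other if item["id"] not in seen]
    merged.sort(key=lambda item: item["created_at"])
    return merged

# Two parallel tasks that share a starting item "m1"
a = [
    {"id": "m1", "created_at": 1.0, "text": "shared system prompt"},
    {"id": "m2", "created_at": 2.0, "text": "task A reply"},
]
b = [
    {"id": "m1", "created_at": 1.0, "text": "shared system prompt"},
    {"id": "m3", "created_at": 1.5, "text": "task B reply"},
]
combined = merge_model(a, b)
print([item["id"] for item in combined])  # → ['m1', 'm3', 'm2']
```

Note that the shared item appears once, and task B's reply is interleaved by timestamp rather than appended at the end.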
Common patterns
These examples show how to use ChatContext in typical agent workflows. Each pattern includes both Python and Node.js examples.
Initialize with user data
Load user-specific context before the session starts and pass it to the agent constructor. This is the recommended approach for personalizing the agent without a round-trip to the LLM:
```python
initial_ctx = ChatContext()
initial_ctx.add_message(role="assistant", content=f"The user's name is {user_name}.")

await session.start(
    room=ctx.room,
    agent=MyAgent(chat_ctx=initial_ctx),
)
```

```typescript
const initialCtx = new llm.ChatContext();
initialCtx.addMessage({ role: 'assistant', content: `The user's name is ${userName}.` });

await session.start({
  room: ctx.room,
  agent: new MyAgent({ chatCtx: initialCtx }),
});
```
For a complete example, see External data and RAG.
Modifying context during a turn
Override the on_user_turn_completed node to inject additional context before the LLM generates its reply. Messages added here apply to the current turn only. Call update_chat_ctx to persist them:
```python
from livekit.agents import ChatContext, ChatMessage

async def on_user_turn_completed(
    self, turn_ctx: ChatContext, new_message: ChatMessage,
) -> None:
    # your function that retrieves context from a database, API, or other source
    extra = await fetch_relevant_data(new_message.text_content)
    turn_ctx.add_message(role="assistant", content=extra)
    await self.update_chat_ctx(turn_ctx)  # persist beyond this turn
```

```typescript
import { llm } from '@livekit/agents';

async onUserTurnCompleted(
  chatCtx: llm.ChatContext,
  newMessage: llm.ChatMessage,
): Promise<void> {
  // your function that retrieves context from a database, API, or other source
  const extra = await fetchRelevantData(newMessage.textContent);
  chatCtx.addMessage({ role: 'assistant', content: extra });
  await this.updateChatCtx(chatCtx); // persist beyond this turn
}
```
For more details on pipeline nodes, see Pipeline nodes & hooks.
Passing context during handoffs
Pass the current context to the next agent to preserve conversation history across handoffs. Use exclude_instructions=True to avoid forwarding the previous agent's system prompt:
```python
return NextAgent(chat_ctx=self.chat_ctx.copy(exclude_instructions=True))
```

```typescript
return llm.handoff({
  agent: new NextAgent({ chatCtx: this.chatCtx.copy({ excludeInstructions: true }) }),
});
```
For long conversations, summarize the context before passing it to reduce token cost. See Summarizing context for a complete example.
Adding images and video frames
Message content can include images alongside text. Pass a list of text and ImageContent items to add_message:
```python
from livekit.agents import ChatContext
from livekit.agents.llm import ImageContent

initial_ctx = ChatContext()
initial_ctx.add_message(
    role="user",
    content=[
        "Here is a picture of me",
        ImageContent(image="https://example.com/image.jpg"),
    ],
)
```

```typescript
import { llm } from '@livekit/agents';

const initialCtx = new llm.ChatContext();
initialCtx.addMessage({
  role: 'user',
  content: [
    'Here is a picture of me',
    llm.createImageContent({ image: 'https://example.com/image.jpg' }),
  ],
});
```
You can also inject live video frames into the context during a conversation turn. For details, see Images and Video.
Custom context for generate_reply()
Pass a modified ChatContext to generate_reply() to fully control the context for a single reply. This replaces the agent's session-level context for that reply only, which is useful when you need to exclude certain messages, inject one-off context, or override instructions:
```python
# Copy and modify the current context for this reply only
ctx = session.current_agent.chat_ctx.copy()

# Modify as needed: trim history, inject context, replace instructions, etc.
await session.generate_reply(chat_ctx=ctx)
```

```typescript
// Copy and modify the current context for this reply only
const ctx = session.currentAgent.chatCtx.copy();

// Modify as needed: trim history, inject context, replace instructions, etc.
await session.generateReply({ chatCtx: ctx });
```
For the full list of generate_reply() parameters, see Speech & audio.
Standalone LLM usage
ChatContext also works outside of agents and sessions. Pass it directly to an LLM's chat() method for background tasks, preprocessing, or any workflow that needs LLM output without the voice pipeline.
For more details, see Standalone LLM usage.
Additional resources
Agents & handoffs
How to pass and summarize context across agent handoffs.
External data & RAG
Load external data into the chat context at session start or during turns.
Pipeline nodes & hooks
Modify the chat context at specific points in the voice pipeline.
Images & video
Add images and video frames to the chat context.
Speech & audio
Use a custom chat context with generate_reply().
LLM overview
Use ChatContext with standalone LLM calls outside of agents.