Letta LLM integration guide

How to use a Letta agent for your LLM with LiveKit Agents.

Overview

Letta enables you to build and deploy stateful AI agents that maintain memory and context across long-running conversations. You can build sophisticated voice AI applications using a Letta agent as the LLM in your STT-LLM-TTS pipeline.

Quick reference

This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.

Installation

Install the OpenAI plugin to add Letta support:

pip install "livekit-agents[openai]~=1.0"

Authentication

If your Letta server requires authentication, you need to provide an API key. Set the following environment variable in your .env file:

LETTA_API_KEY=<your-letta-api-key>
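For example, your .env file might look like this (the key value shown is a placeholder):

```shell
# .env
LETTA_API_KEY=<your-letta-api-key>
```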

Usage

Use the Letta LLM within an AgentSession or as a standalone LLM service. For example, you can use this LLM in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import openai

session = AgentSession(
    llm=openai.LLM.with_letta(
        agent_id="<agent-id>",
    ),
    # ... tts, stt, vad, turn_detection, etc.
)

Parameters

This section describes some of the parameters for the with_letta method. For a complete list of all available parameters, see the plugin documentation.

agent_id (string, required)

The ID of the Letta agent to use. Must begin with agent-.
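You can check the agent- prefix requirement before constructing the session. The helper below is illustrative only, not part of the plugin API:

```python
def looks_like_letta_agent_id(agent_id: str) -> bool:
    # Letta agent IDs begin with the "agent-" prefix.
    return agent_id.startswith("agent-")

# Example: validate before passing the ID to with_letta().
print(looks_like_letta_agent_id("agent-123e4567"))  # True
print(looks_like_letta_agent_id("123e4567"))        # False
```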

base_url (string, optional; default: https://api.letta.com/v1/voice-beta)

The URL of the Letta server, such as your self-hosted server or Letta Cloud.
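To illustrate the default behavior: when no base_url is passed, requests go to the Letta Cloud voice endpoint. The fallback helper and the self-hosted URL below are a sketch, not the plugin's actual implementation:

```python
from typing import Optional

DEFAULT_LETTA_BASE_URL = "https://api.letta.com/v1/voice-beta"

def resolve_base_url(base_url: Optional[str] = None) -> str:
    """Fall back to Letta Cloud when no server URL is given (illustrative helper)."""
    return base_url or DEFAULT_LETTA_BASE_URL

print(resolve_base_url())                         # the Letta Cloud default
print(resolve_base_url("http://localhost:8283"))  # hypothetical self-hosted server
```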

Additional resources

The following links provide more information about the Letta LLM plugin.