Gemini Live API integration guide

How to use the Gemini Live API with LiveKit Agents.

Available in: Python | Node.js

Try the playground: chat with a voice assistant built with LiveKit and the Gemini Live API.

Overview

Google's Gemini Live API enables low-latency, two-way interactions that use text, audio, and video input, with audio and text output. LiveKit's Google plugin includes a RealtimeModel class that allows you to use this API to create agents with natural, human-like voice conversations.

Quick reference

This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.

Installation

Install the Google plugin:

pip install "livekit-agents[google]~=1.2"

Authentication

The Google plugin requires authentication based on your chosen service:

  • For Vertex AI, you must set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the service account key file.
  • For Google Gemini API, set the GOOGLE_API_KEY environment variable.
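Because the plugin reads these variables directly, a misconfigured environment typically surfaces only at connection time. As a minimal sketch, you could verify your environment up front with a hypothetical helper like the one below (the function name and error message are illustrative, not part of the plugin):

```python
import os

def check_google_credentials() -> str:
    """Return which Google auth method is configured, or raise if neither is.

    Hypothetical helper for illustration; the plugin itself reads these
    environment variables directly.
    """
    if os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"):
        # Vertex AI: path to a service account key file.
        return "vertex-ai"
    if os.environ.get("GOOGLE_API_KEY"):
        # Google Gemini API: a plain API key.
        return "gemini-api"
    raise RuntimeError(
        "Set GOOGLE_APPLICATION_CREDENTIALS (Vertex AI) or "
        "GOOGLE_API_KEY (Gemini API) before starting the agent."
    )
```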

Usage

Use the Gemini Live API within an AgentSession. For example, you can use it in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import google

session = AgentSession(
    llm=google.beta.realtime.RealtimeModel(
        model="gemini-2.0-flash-exp",
        voice="Puck",
        temperature=0.8,
        instructions="You are a helpful assistant",
    ),
)
Limitations with Gemini 2.5

Gemini 2.5 Live is currently in preview and does not handle function calling correctly. The Gemini team is actively working on a fix. See GH issue.

Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference.

instructions (string, Optional)

System instructions to better control the model's output and specify tone and sentiment of responses. To learn more, see System instructions.

model (LiveAPIModels | string, Required, Default: gemini-2.0-flash-exp)

Live API model to use.

api_key (string, Required, Env: GOOGLE_API_KEY)

Google Gemini API key.

voice (Voice | string, Required, Default: Puck)

Name of the Gemini Live API voice. For a full list, see Voices.

modalities (list[Modality], Optional, Default: ["AUDIO"])

List of response modalities to use. Set to ["TEXT"] to use the model in text-only mode with a separate TTS plugin.

vertexai (boolean, Required, Default: false)

If set to true, use Vertex AI.

project (string, Optional, Env: GOOGLE_CLOUD_PROJECT)

Google Cloud project ID to use for the API (if vertexai=True). By default, it uses the project in the service account key file (set using the GOOGLE_APPLICATION_CREDENTIALS environment variable).

location (string, Optional, Env: GOOGLE_CLOUD_LOCATION)

Google Cloud location to use for the API (if vertexai=True). By default, it uses the location from the service account key file, or us-central1.

_gemini_tools (list[GeminiTool], Optional)

List of built-in Google tools, such as Google Search. For more information, see Gemini tools.
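As a sketch of how the Vertex AI-related parameters above fit together, the following configuration routes the Live API through Vertex AI rather than the Gemini API. The project ID is a hypothetical placeholder; GOOGLE_APPLICATION_CREDENTIALS must point at a valid service account key file:

```python
from livekit.agents import AgentSession
from livekit.plugins import google

# Sketch: use Vertex AI instead of the Gemini API.
# Requires GOOGLE_APPLICATION_CREDENTIALS to point at a service account key.
session = AgentSession(
    llm=google.beta.realtime.RealtimeModel(
        vertexai=True,
        project="my-gcp-project",  # hypothetical project ID
        location="us-central1",
    ),
)
```

If project and location are omitted, the plugin falls back to the values in the service account key file, as described above.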

Gemini tools

Experimental feature

This integration is experimental and may change in a future SDK release.

The _gemini_tools parameter allows you to use built-in Google tools with the Gemini model. For example, you can use this feature to implement Grounding with Google Search:

from google.genai import types
from livekit.agents import AgentSession
from livekit.plugins import google

session = AgentSession(
    llm=google.beta.realtime.RealtimeModel(
        model="gemini-2.0-flash-exp",
        _gemini_tools=[types.GoogleSearch()],
    ),
)

Turn detection

The Gemini Live API includes built-in VAD-based turn detection, enabled by default. To use LiveKit's turn detection model instead, disable the Live API's automatic activity detection. A separate streaming STT model is required to use LiveKit's turn detection model.

from google.genai import types
from livekit.agents import AgentSession
from livekit.plugins import deepgram, google
from livekit.plugins.turn_detector.multilingual import MultilingualModel

session = AgentSession(
    turn_detection=MultilingualModel(),
    stt=deepgram.STT(),
    llm=google.beta.realtime.RealtimeModel(
        realtime_input_config=types.RealtimeInputConfig(
            automatic_activity_detection=types.AutomaticActivityDetection(
                disabled=True,
            ),
        ),
        input_audio_transcription=None,
    ),
)

Usage with separate TTS

To use the Gemini Live API with a different TTS provider, configure it with a text-only response modality and include a TTS plugin in your AgentSession configuration. This lets you keep the benefits of realtime speech comprehension while retaining complete control over the speech output.

from google.genai.types import Modality
from livekit.agents import AgentSession
from livekit.plugins import cartesia, google

session = AgentSession(
    llm=google.beta.realtime.RealtimeModel(modalities=[Modality.TEXT]),
    tts=cartesia.TTS(),
)

Additional resources

The following resources provide more information about using Gemini with LiveKit Agents.