Google Gemini LLM plugin guide

A guide to using Google Gemini with LiveKit Agents.

Available in: Python | Node.js

Overview

This plugin allows you to use Google Gemini as an LLM provider for your voice agents.

Quick reference

This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.

Installation

Install the plugin from PyPI (Python) or npm (Node.js):

uv add "livekit-agents[google]~=1.3"
pnpm add @livekit/agents-plugin-google@1.x

Authentication

The Google plugin requires authentication based on your chosen service:

  • For Vertex AI, you must set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your service account key file. For more information about mounting files as secrets when deploying to LiveKit Cloud, see File-mounted secrets.
  • For Google Gemini API, set the GOOGLE_API_KEY environment variable.
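
For example, here is a minimal sketch of the Gemini API path: the plugin reads GOOGLE_API_KEY from the environment automatically, or you can pass the key explicitly with the api_key parameter.

import os

from livekit.plugins import google

# Option 1: rely on the GOOGLE_API_KEY environment variable (read automatically).
llm = google.LLM(model="gemini-3-flash-preview")

# Option 2: pass the key explicitly, for example from your own configuration source.
llm = google.LLM(
    model="gemini-3-flash-preview",
    api_key=os.environ["GOOGLE_API_KEY"],
)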

Usage

Use Gemini within an AgentSession or as a standalone LLM service. For example, you can use this LLM in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import google

session = AgentSession(
    llm=google.LLM(
        model="gemini-3-flash-preview",
    ),
    # ... tts, stt, vad, turn_detection, etc.
)

import { voice } from '@livekit/agents';
import * as google from '@livekit/agents-plugin-google';

const session = new voice.AgentSession({
  llm: new google.LLM({
    model: 'gemini-3-flash-preview',
  }),
  // ... tts, stt, vad, turn_detection, etc.
});

Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference.

model (ChatModels | str, optional). Default: gemini-3-flash-preview

ID of the model to use. For a full list, see Gemini models.

api_key (str, optional). Env: GOOGLE_API_KEY

API key for Google Gemini API.

vertexai (bool, optional). Default: false

Set to true to use Vertex AI; leave false to use the Google Gemini API.

project (str, optional). Env: GOOGLE_CLOUD_PROJECT

Google Cloud project to use (only if using Vertex AI). Required if using Vertex AI and the environment variable isn't set.

location (str, optional). Env: GOOGLE_CLOUD_LOCATION

Google Cloud location to use (only if using Vertex AI). Required if using Vertex AI and the environment variable isn't set.
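
For example, a minimal sketch of a Vertex AI configuration using the parameters above; the project ID and location are placeholders to substitute with your own values.

from livekit.plugins import google

# Vertex AI requires GOOGLE_APPLICATION_CREDENTIALS to point to a service
# account key file; the project and location below are placeholder values.
llm = google.LLM(
    model="gemini-3-flash-preview",
    vertexai=True,
    project="my-gcp-project",
    location="us-central1",
)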

Provider tools

Google Gemini supports provider tools that enable the model to use built-in capabilities executed on the model server. These tools can be used alongside function tools defined in your agent's codebase.

Available tools include:

  • GoogleSearch: Ground responses with Google Search results
  • GoogleMaps: Search for places and businesses using Google Maps
  • URLContext: Retrieve content from URLs to use as additional context
  • ToolCodeExecution: Execute code snippets

Current limitations

Currently, only the Gemini Live API supports using provider tools together with function tools.

When using text models, you can use either provider tools or function tools, but not both. See issue #53 for more details.

from livekit.plugins import google

# MyAgent is your Agent subclass; provider tools are passed alongside the LLM.
agent = MyAgent(
    llm=google.LLM(
        model="gemini-2.5-flash",
    ),
    tools=[google.tools.GoogleSearch()],
)

import * as google from '@livekit/agents-plugin-google';

// Currently, Agents JS supports provider tools via the `geminiTools` parameter.
const agent = new MyAgent({
  llm: new google.LLM({
    model: 'gemini-2.5-flash',
    geminiTools: [new google.types.GoogleSearch()],
  }),
  // ... tts, stt, vad, turn_detection, etc.
});

Additional resources

The following resources provide more information about using Google Gemini with LiveKit Agents.