Overview
Google Gemini models are available in LiveKit Agents through LiveKit Inference and the Gemini plugin. Pricing for LiveKit Inference is available on the pricing page.
| Model name | Model ID | Providers |
|---|---|---|
| Gemini 3 Pro | google/gemini-3-pro | google |
| Gemini 3 Flash | google/gemini-3-flash | google |
| Gemini 2.5 Pro | google/gemini-2.5-pro | google |
| Gemini 2.5 Flash | google/gemini-2.5-flash | google |
| Gemini 2.5 Flash Lite | google/gemini-2.5-flash-lite | google |
LiveKit Inference
Use LiveKit Inference to access Gemini models without a separate Google API key.
Usage
To use Gemini, create an `LLM` instance from the `inference` module:
```python
from livekit.agents import AgentSession, inference

session = AgentSession(
    llm=inference.LLM(
        model="google/gemini-2.5-flash-lite",
        extra_kwargs={"max_completion_tokens": 1000},
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession, inference } from '@livekit/agents';

const session = new AgentSession({
  llm: new inference.LLM({
    model: 'google/gemini-2.5-flash-lite',
    modelOptions: { max_completion_tokens: 1000 },
  }),
  // ... tts, stt, vad, turn_detection, etc.
});
```
Parameters
- `model` (string, required): The model ID from the models list.
- `provider` (string, optional): A specific provider to use for the LLM. Refer to the models list for available providers. If not set, LiveKit Inference uses the best available provider and bills accordingly.
- `extra_kwargs` (dict, optional): Additional parameters to pass to the Gemini Chat Completions API, such as `max_completion_tokens`. In Node.js this parameter is called `modelOptions`.
String descriptors
As a shortcut, you can also pass a model descriptor string directly to the llm argument in your AgentSession:
```python
from livekit.agents import AgentSession

session = AgentSession(
    llm="google/gemini-2.5-flash-lite",
    # ... tts, stt, vad, turn_detection, etc.
)
```
```typescript
import { AgentSession } from '@livekit/agents';

const session = new AgentSession({
  llm: 'google/gemini-2.5-flash-lite',
  // ... tts, stt, vad, turn_detection, etc.
});
```
Plugin
Use the Google plugin to connect directly to Google's Gemini API with your own API key.
Installation
Install the plugin from PyPI (Python) or npm (Node.js):

```shell
uv add "livekit-agents[google]~=1.4"
```

```shell
pnpm add @livekit/agents-plugin-google@1.x
```
Authentication
The Google plugin requires authentication based on your chosen service:

- For Vertex AI, you must set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of the service account key file. For more information about mounting files as secrets when deploying to LiveKit Cloud, see File-mounted secrets.
- For the Google Gemini API, set the `GOOGLE_API_KEY` environment variable.
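For example, in a development shell you might export the relevant variables before starting your agent. All values below are placeholders; substitute your own key, file path, project, and location:

```shell
# Google Gemini API: API key (placeholder value)
export GOOGLE_API_KEY="your-api-key"

# Vertex AI: path to a mounted service account key file, plus project and
# location (all placeholder values)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_LOCATION="us-central1"
```

In production, prefer your deployment platform's secret management over plaintext exports.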
Usage
Use Gemini within an AgentSession or as a standalone LLM service. For example, you can use this LLM in the Voice AI quickstart.
```python
from livekit.plugins import google

session = AgentSession(
    llm=google.LLM(
        model="gemini-3-flash-preview",
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
```
```typescript
import { voice } from '@livekit/agents';
import * as google from '@livekit/agents-plugin-google';

const session = new voice.AgentSession({
  llm: new google.LLM({
    model: 'gemini-3-flash-preview',
  }),
  // ... tts, stt, vad, turn_detection, etc.
});
```
Parameters
This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference.
- `model` (ChatModels | str, optional, default: `gemini-3-flash-preview`): ID of the model to use. For a full list, see Gemini models.
- `api_key` (str, optional, env: `GOOGLE_API_KEY`): API key for the Google Gemini API.
- `project` (str, optional, env: `GOOGLE_CLOUD_PROJECT`): Google Cloud project to use (Vertex AI only). Required if using Vertex AI and the environment variable isn't set.
- `location` (str, optional, env: `GOOGLE_CLOUD_LOCATION`): Google Cloud location to use (Vertex AI only). Required if using Vertex AI and the environment variable isn't set.
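As a configuration sketch, the Vertex AI parameters above might be passed directly rather than through environment variables. This assumes the plugin accepts `vertexai`, `project`, and `location` keyword arguments; verify the exact names against the plugin reference for your installed version:

```python
from livekit.agents import AgentSession
from livekit.plugins import google

# Sketch only: connect through Vertex AI instead of the Gemini API.
# `vertexai`, `project`, and `location` are assumed parameter names, and the
# project ID and location below are placeholders.
session = AgentSession(
    llm=google.LLM(
        model="gemini-2.5-flash",
        vertexai=True,
        project="my-gcp-project",  # or set GOOGLE_CLOUD_PROJECT
        location="us-central1",    # or set GOOGLE_CLOUD_LOCATION
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
```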
Provider tools
Google Gemini supports provider tools that enable the model to use built-in capabilities executed on the model server. These tools can be used alongside function tools defined in your agent's codebase.
Available tools include:
- `GoogleSearch`: Ground the model's responses with Google Search results
- `GoogleMaps`: Search for places and businesses using Google Maps
- `URLContext`: Retrieve the content of URLs to use as additional context
- `ToolCodeExecution`: Generate and execute code snippets
Currently, only the Gemini Live API supports using provider tools together with function tools. When using text models, you can use either provider tools or function tools, but not both. See issue #53 for more details.
```python
from livekit.plugins import google
from google.genai import types

agent = MyAgent(
    llm=google.LLM(
        model="gemini-2.5-flash",
    ),
    tools=[google.tools.GoogleSearch()],
)
```
```typescript
import * as google from '@livekit/agents-plugin-google';

// Currently, Agents JS supports provider tools via the `geminiTools` parameter.
const agent = new MyAgent({
  llm: new google.LLM({
    model: 'gemini-2.5-flash',
    geminiTools: [new google.types.GoogleSearch()],
  }),
  // ... tts, stt, vad, turn_detection, etc.
});
```
Additional resources
The following resources provide more information about using Google Gemini with LiveKit Agents.