Groq LLM integration guide

How to use the Groq LLM plugin for LiveKit Agents.

Overview

Groq provides fast LLM inference using open models such as Llama and DeepSeek. With LiveKit's Groq integration and the Agents framework, you can build low-latency voice AI applications.

Quick reference

This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.

Installation

Install the plugin from PyPI:

pip install "livekit-agents[groq]~=1.0rc"

Authentication

The Groq plugin requires a Groq API key.

Set GROQ_API_KEY in your .env file.
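For example, the entry in your .env file might look like this (placeholder value shown, not a real key):

GROQ_API_KEY=<your-groq-api-key>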

Usage

Use a Groq LLM in your AgentSession or as a standalone LLM service. For example, you can use this LLM in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import groq

session = AgentSession(
    llm=groq.LLM(
        model="llama3-8b-8192",
    ),
    # ... tts, stt, vad, turn_detection, etc.
)
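You can also use the plugin outside of an AgentSession. The following is a minimal standalone sketch, assuming the ChatContext and streaming APIs from livekit-agents 1.x; exact method and attribute names may vary between versions:

import asyncio

from livekit.agents.llm import ChatContext
from livekit.plugins import groq

async def main():
    # Build a chat context containing a single user message.
    chat_ctx = ChatContext()
    chat_ctx.add_message(role="user", content="Say hello in one sentence.")

    llm = groq.LLM(model="llama3-8b-8192")

    # chat() returns an async stream of response chunks.
    stream = llm.chat(chat_ctx=chat_ctx)
    async for chunk in stream:
        if chunk.delta and chunk.delta.content:
            print(chunk.delta.content, end="", flush=True)
    await stream.aclose()

asyncio.run(main())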

Parameters

This section describes some of the available parameters. For a complete reference of all available parameters, see the plugin reference.

model (string, optional, default: llama-3.3-70b-versatile)

Name of the LLM model to use. For all options, see the Groq model list.

temperature (float, optional, default: 1.0)

A measure of randomness in output. A lower value results in more predictable output, while a higher value results in more creative output.

parallel_tool_calls (bool, optional)

Set to True to allow the model to make multiple tool calls in parallel.

tool_choice (ToolChoice | Literal['auto', 'required', 'none'], optional, default: 'auto')

Controls how the model uses tools during response generation: 'auto' lets the model decide, 'required' forces a tool call, and 'none' disables tool use.
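Putting these together, a more fully configured instance might look like the following sketch (parameter values are illustrative):

from livekit.plugins import groq

llm = groq.LLM(
    model="llama-3.3-70b-versatile",  # the default model
    temperature=0.7,                  # favor more predictable output
    parallel_tool_calls=True,         # allow multiple tool calls in one turn
    tool_choice="auto",               # let the model decide when to call tools
)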

Additional resources

The following resources provide more information about using Groq with LiveKit Agents.
