Speechmatics STT plugin guide

How to use the Speechmatics STT plugin for LiveKit Agents.

Available in
Python

Overview

This plugin allows you to use Speechmatics as an STT provider for your voice agents.

Installation

Install the plugin from PyPI:

uv add "livekit-agents[speechmatics]~=1.4"

Authentication

The Speechmatics plugin requires an API key.

Set SPEECHMATICS_API_KEY in your .env file.

Usage

Use Speechmatics STT in an AgentSession or as a standalone transcription service. For example, you can use this STT in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import speechmatics

session = AgentSession(
    stt=speechmatics.STT(),
    # ... llm, tts, etc.
)

Speaker diarization

You can enable speaker diarization to identify individual speakers and their speech. When enabled, the transcription output can include this information through the speaker_id and text attributes.

See the following for example configurations and outputs:

  • <{speaker_id}>{text}</{speaker_id}>: <S1>Hello</S1>.
  • [Speaker {speaker_id}] {text}: [Speaker S1] Hello.

stt = speechmatics.STT(
    enable_diarization=True,
    speaker_active_format="<{speaker_id}>{text}</{speaker_id}>",
)
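For downstream processing, a transcript in the first format above can be split back into speaker turns with a small regex helper. This is a hypothetical utility for illustration, not part of the plugin:

```python
import re

def parse_speaker_turns(transcript: str) -> list[tuple[str, str]]:
    """Split a diarized transcript like '<S1>Hello</S1> <S2>Hi</S2>'
    into (speaker_id, text) pairs."""
    # The backreference \1 ensures the closing tag matches the opening tag.
    return re.findall(r"<(S\d+)>(.*?)</\1>", transcript)

print(parse_speaker_turns("<S1>Hello</S1> <S2>Hi there</S2>"))
# → [('S1', 'Hello'), ('S2', 'Hi there')]
```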

Use the MultiSpeakerAdapter to detect the primary speaker and format the transcripts by speaker. See the Speaker diarization and primary speaker detection section for more details.

Parameters

This section describes the key parameters for the Speechmatics STT plugin. See the plugin reference for a complete list of all available parameters.

operating_point (string, optional, default: enhanced)

Operating point to use for the transcription. This parameter balances accuracy, speed, and resource usage. To learn more, see Operating points.

language (string, optional, default: en)

ISO 639-1 language code. All languages are global, meaning that regardless of which language you select, the system can recognize different dialects and accents. To see the full list, see Supported Languages.

enable_partials (bool, optional, default: true)

Enable partial transcripts. Partial transcripts allow you to receive preliminary transcriptions and update as more context is available until the higher-accuracy final transcript is returned. Partials are returned faster but without any post-processing such as formatting. When enabled, the STT service emits INTERIM_TRANSCRIPT events.

enable_diarization (bool, optional, default: false)

Enable speaker diarization. When enabled, spoken words are attributed to unique speakers. You can use the diarization_sensitivity parameter to adjust the sensitivity of diarization. To learn more, see Diarization.

max_delay (number, optional, default: 1.0)

The maximum delay in seconds between the end of a spoken word and returning the final transcript results. Lower values return final transcripts sooner but can reduce accuracy.

end_of_utterance_silence_trigger (float, optional)

The duration of silence in seconds that triggers end of utterance. The delay is used to wait for any further transcribed words before emitting FINAL_TRANSCRIPT events.

turn_detection_mode (TurnDetectionMode, optional, default: TurnDetectionMode.ADAPTIVE)

Controls how the STT engine detects the end of speech turns. Valid values are:

  • TurnDetectionMode.ADAPTIVE: Uses simple voice activity detection (VAD) to determine end of speech.
  • TurnDetectionMode.SMART_TURN: Uses the plugin's built-in ML-based smart turn detection for more accurate endpointing.
  • TurnDetectionMode.FIXED: Uses a fixed silence duration as determined by the end_of_utterance_silence_trigger parameter.
  • TurnDetectionMode.EXTERNAL: Turn boundaries are controlled manually, for example via an external VAD or the finalize() method.

To use LiveKit's end of turn detector model, set this parameter to TurnDetectionMode.EXTERNAL.
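As a configuration sketch, assuming TurnDetectionMode is exported from the plugin module (the exact import path may differ), external turn detection could be enabled like this:

```python
from livekit.plugins import speechmatics
from livekit.plugins.speechmatics import TurnDetectionMode  # assumed export path

# Defer turn boundaries to an external detector, such as
# LiveKit's end of turn detector model or a manual finalize() call.
stt = speechmatics.STT(
    turn_detection_mode=TurnDetectionMode.EXTERNAL,
)
```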

speaker_active_format (string, optional)

Formatter for speaker identification in transcription output. The following attributes are available:

  • {speaker_id}: The ID of the speaker.
  • {text}: The text spoken by the speaker.

If speaker diarization is enabled but this parameter is not set, the transcription output is not formatted for speaker identification.

Your language model's system instructions might need to explain the formatting so the model can interpret speaker labels. To learn more, see Speaker diarization.
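The placeholder syntax suggests standard Python str.format semantics. Assuming so, you can preview how a template renders a transcript segment:

```python
# Preview the rendered output of a speaker_active_format template
# for a single segment (sample speaker ID and text).
template = "[Speaker {speaker_id}] {text}"
line = template.format(speaker_id="S1", text="Hello")
print(line)  # → [Speaker S1] Hello
```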

diarization_sensitivity (float, optional, default: 0.5)

Sensitivity of speaker detection. Valid values are between 0 and 1. Higher values increase sensitivity and can help when two or more speakers have similar voices. To learn more, see Speaker sensitivity.

The enable_diarization parameter must be set to True for this parameter to take effect.

prefer_current_speaker (bool, optional, default: false)

When speaker diarization is enabled, setting this to True reduces the likelihood of switching between similar-sounding speakers. To learn more, see Prefer current speaker.

Additional resources

The following resources provide more information about using Speechmatics with LiveKit Agents.

Voice AI quickstart

Get started with LiveKit Agents and Speechmatics STT.