bitHuman virtual avatar integration guide

How to use the bitHuman virtual avatar plugin for LiveKit Agents.

Available in: Python

Overview

bitHuman provides realtime virtual avatars that run locally on CPU, offering low latency and high quality. You can use the open source bitHuman integration for LiveKit Agents to add a virtual avatar to your voice AI app.

Quick reference

This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.

Installation

Install the plugin from PyPI:

pip install "livekit-agents[bithuman]~=1.2"

Authentication

The bitHuman plugin requires a bitHuman API secret.

Set BITHUMAN_API_SECRET in your .env file.
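As a defensive measure, you might validate that the secret is present before starting your agent. The helper below is a minimal sketch of this pattern; the function name is illustrative and not part of the plugin, which reads the environment variable itself.

```python
import os

def require_bithuman_secret() -> str:
    """Read BITHUMAN_API_SECRET from the environment, failing fast if unset."""
    secret = os.getenv("BITHUMAN_API_SECRET")
    if not secret:
        raise RuntimeError("BITHUMAN_API_SECRET is not set; add it to your .env file")
    return secret
```

Failing fast at startup produces a clearer error than a mid-session authentication failure.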

Model installation

Each bitHuman avatar is distributed as a .imx file, which you must download locally for your agent. You can create and download avatar models from the bitHuman ImagineX console.

You can pass the model path to the avatar session, or set the BITHUMAN_MODEL_PATH environment variable.

Usage

Use the plugin in an AgentSession. For example, you can use this avatar in the Voice AI quickstart.

from livekit.agents import AgentSession
from livekit.plugins import bithuman

session = AgentSession(
    # ... stt, llm, tts, etc.
)

avatar = bithuman.AvatarSession(
    # This example uses a demo model installed in the current directory
    model_path="./albert_einstein.imx",
)

# Start the avatar and wait for it to join
await avatar.start(session, room=ctx.room)

# Start your agent session with the user
await session.start(
    room=ctx.room,
)

Preview the avatar in the Agents Playground or a frontend starter app that you build.

Parameters

This section describes some of the available parameters. See the plugin reference for a complete list of all available parameters.

model_path (string, required; env: BITHUMAN_MODEL_PATH)

Path to the bitHuman model to use. To learn more, see the bitHuman docs.

Additional resources

The following resources provide more information about using bitHuman with LiveKit Agents.