Overview
bitHuman provides realtime virtual avatars that run locally on CPU alone, delivering low latency and high quality. You can use the open source bitHuman integration for LiveKit Agents to add virtual avatars to your voice AI app.
Quick reference
This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.
Installation
Install the plugin from PyPI:
```shell
pip install "livekit-agents[bithuman]~=1.0"
```
Authentication
The bitHuman plugin requires a bitHuman API Secret.
Set BITHUMAN_API_SECRET in your .env file.
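As a minimal sketch of how you might verify this configuration before starting an agent: the plugin reads BITHUMAN_API_SECRET from the environment, so a startup check like the one below can fail fast with a clear message. The check itself is illustrative, not part of the plugin's API, and the placeholder value is an assumption.

```python
import os

# Placeholder value for illustration only; in a real deployment the secret
# comes from your .env file or deployment environment.
os.environ.setdefault("BITHUMAN_API_SECRET", "your-api-secret")

# Fail fast if the bitHuman API Secret is missing, rather than letting the
# avatar session fail later at startup.
secret = os.getenv("BITHUMAN_API_SECRET")
if not secret:
    raise RuntimeError("BITHUMAN_API_SECRET is not set")
```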
Model installation
The bitHuman integration requires a locally downloaded model. Download a sample model or create your own.
Follow the guide below to pass it to the avatar session, or set BITHUMAN_MODEL_PATH in your .env file.
Usage
Use the plugin in an AgentSession. For example, you can use this avatar in the Voice AI quickstart.
```python
from livekit.plugins import bithuman

session = AgentSession(
    # ... stt, llm, tts, etc.
)

avatar = bithuman.AvatarSession(
    model_path="./albert_einstein.imx",  # This example uses a demo model installed in the current directory
)

# Start the avatar and wait for it to join
await avatar.start(session, room=ctx.room)

# Start your agent session with the user
await session.start(
    room=ctx.room,
)
```
Parameters
This section describes some of the available parameters. See the plugin reference for a complete list of all available parameters.
model_path: Path to the bitHuman model to use. To learn more, see the bitHuman docs.
Additional resources
The following resources provide more information about using bitHuman with LiveKit Agents.