Overview
bitHuman provides realtime virtual avatars that you can run either locally or in the cloud. You can use the open source bitHuman integration for LiveKit Agents to add virtual avatars to your voice AI app.
Quick reference
This section includes a basic usage example and some reference material. For links to more detailed documentation, see Additional resources.
Installation
Install the plugin from PyPI:
```shell
pip install "livekit-agents[bithuman]~=1.2"
```
If you plan to use cloud-hosted models with images, also install the LiveKit images dependency, which includes Pillow 10.3 or later:

```shell
pip install "livekit-agents[images]"
```
Authentication
The bitHuman plugin requires a bitHuman API Secret.
Set BITHUMAN_API_SECRET in your .env file.
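For example (the value shown is a placeholder, not a real secret):

```shell
# .env
BITHUMAN_API_SECRET=your-bithuman-api-secret
```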
Avatar setup
The bitHuman plugin supports three ways to set up avatars:

- Pass .imx model files.
- Pass an image directly using a PIL image object or a source image path/URL.
- Pass a bitHuman avatar ID.
Pass model files
Create and download a bitHuman .imx file from the bitHuman ImagineX console. You can pass the model path to the avatar session or set the BITHUMAN_MODEL_PATH environment variable.

Agents consume more CPU when using .imx models directly.
Pass image directly
Pass an image directly in the avatar_image parameter using a PIL image object or a source image path/URL.
```python
import os

from PIL import Image
from livekit.plugins import bithuman

bithuman_avatar = bithuman.AvatarSession(
    avatar_image=Image.open(os.path.join(os.path.dirname(__file__), "avatar.jpg")),
)
```
The image can come from anywhere: your local filesystem, a remote URL, an upload from your frontend in real time, or the output of an external API or AI model.
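Because avatar_image accepts both PIL images and path/URL strings, your app can hand the plugin whatever source it has. A minimal stdlib sketch for telling the two string forms apart (`image_source_kind` is a hypothetical helper, not part of the plugin):

```python
from urllib.parse import urlparse


def image_source_kind(source: str) -> str:
    # Classify an avatar_image string: remote URL vs. local file path.
    scheme = urlparse(source).scheme
    return "url" if scheme in ("http", "https") else "path"
```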
Pass avatar ID
You can use an existing avatar by passing the avatar_id parameter to the plugin. You can find the ID in the bitHuman ImagineX console in the description of the avatar on the My Avatars page.
Usage
You can use the bitHuman plugin in an AgentSession. For example, you can use this avatar in the Voice AI quickstart.
You can preview your agent in the Agents Playground or a frontend starter app that you build.
The following code uses a local bitHuman .imx model.
```python
from livekit.plugins import bithuman

session = AgentSession(
    # ... stt, llm, tts, etc.
)

avatar = bithuman.AvatarSession(
    model_path="./albert_einstein.imx",  # This example uses a demo model installed in the current directory
)

# Start the avatar and wait for it to join
await avatar.start(session, room=ctx.room)

# Start your agent session with the user
await session.start(
    room=ctx.room,
)
```
The following code uses an image or avatar ID.
```python
from PIL import Image
from livekit.agents import RoomOutputOptions
from livekit.plugins import bithuman

avatar = bithuman.AvatarSession(
    avatar_image=Image.open("avatar.jpg").convert("RGB"),  # This example uses an image in the current directory.
    # or: avatar_id="your-avatar-id"  # You can also use an existing avatar ID.
)

await avatar.start(session, room=ctx.room)

await session.start(
    room=ctx.room,
    # The avatar publishes the agent's audio, so disable the session's own audio output.
    room_output_options=RoomOutputOptions(audio_enabled=False),
)
```
Parameters
This section describes some of the available parameters. See the plugin reference for a complete list of all available parameters.
Model to use: expression provides dynamic expressions and emotional responses; essence uses predefined actions and expressions.

model_path: Path to the bitHuman .imx model.

avatar_image: Avatar image to use. Pass a PIL image (Image.open("avatar.jpg")) or a string (local path to the image).

avatar_id: The avatar ID from bitHuman.
Additional resources
The following resources provide more information about using bitHuman with LiveKit Agents.
Python package
The livekit-plugins-bithuman package on PyPI.
Plugin reference
Reference for the bitHuman avatar plugin.
GitHub repo
View the source or contribute to the LiveKit bitHuman avatar plugin.
bitHuman docs
bitHuman's full API docs site.
Agents Playground
A virtual workbench to test your avatar agent.
Frontend starter apps
Ready-to-use frontend apps with avatar support.