
Builds and Dockerfiles

Guide to the LiveKit Cloud build process, plus Dockerfile templates and resources.

Build process

When you run lk agent create or lk agent deploy, LiveKit Cloud builds a container image for your agent from your code and Dockerfile. The build runs on the LiveKit Cloud build service. The process is as follows:

  1. Gather files: The CLI prepares a build context from your working directory, which is the directory you run the command from. To use a different directory, pass it explicitly, for example lk agent deploy /path/to/code.
  2. Exclusions: The build context excludes .env.* files and any files matched by .dockerignore or .gitignore.
  3. Upload: The CLI uploads the prepared build context to the LiveKit Cloud build service.
  4. Build: The build service uses your Dockerfile to create the container image, streaming logs to the CLI.

After the build is complete, deployment begins. See Deploying new versions for more information.

To view build logs, see Build logs.

Build timeout

Builds have a maximum duration of 10 minutes. Builds exceeding this limit are terminated and the deployment fails.

Build context size limit

The build context upload has a maximum size of 1 GB. If your build context exceeds this limit, the CLI returns an error similar to the following:

unable to deploy agent: multipart upload failed: failed to upload tarball:
400: Your proposed upload exceeds the maximum allowed size.

To reduce the size of your build context, add a .dockerignore or .gitignore file to exclude files that aren't needed for the build. Common sources of large build contexts include:

  • Model weights or other large assets checked into the repository. Download these during the image build instead. See Assets and models for more information.
  • Large datasets or media files used for testing or evaluation.
  • Virtual environments (.venv/, venv/, node_modules/).

See the templates section for recommended .dockerignore files.
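
A starting-point .dockerignore covering the cases above might look like the following. The entries are illustrative (the models/ and data/ paths are placeholders); keep only what applies to your project:

```text
# Secrets and environment files
.env*
# Version control metadata
.git/
# Virtual environments and installed dependencies
.venv/
venv/
node_modules/
# Large local assets and datasets (download these during the build instead)
models/
data/
```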

Dockerfile

Most projects can use the default Dockerfile generated by the LiveKit CLI, which is based on the templates at the end of this section.

To create your own Dockerfile or modify the templates, refer to the following requirements and best practices:

  • Base image: Use a glibc-based image such as Debian or Ubuntu. Alpine (musl) is not supported.
    • LiveKit recommends using -slim images, which contain only the essential system packages for your runtime.
  • Unprivileged user: Do not run as the root user.
  • Working directory: Set an explicit WORKDIR (for example, /app).
  • Dependencies and caching:
    • Copy lockfiles and manifests first, install dependencies, then copy the rest of the source to maximize cache reuse.
    • Pin versions and use lockfiles.
  • System packages and layers:
    • Install required build tools up front.
    • Clean package lists (for example, /var/lib/apt/lists) to keep layers small.
  • Build time limit: Keep total build duration under 10 minutes; long builds fail due to the build timeout.
  • Secrets and configuration:
    • Do not copy .env* files or include secrets in the image.
    • Use LiveKit Cloud secrets management to inject any necessary secrets at runtime.
    • Do not set LIVEKIT_URL, LIVEKIT_API_KEY, or LIVEKIT_API_SECRET environment variables. These are injected at runtime by LiveKit Cloud.
  • Startup command: Provide a fixed ENTRYPOINT/CMD that directly launches the agent using the start command, without backgrounding or wrapper scripts.
  • Assets and models: Download models and other assets during the image build, not on first run, so containers start quickly. Use download-files to download assets required by LiveKit plugins.
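
The dependency-caching guidance above corresponds to this Dockerfile pattern (a uv-based sketch; the full templates at the end of this section apply the same idea):

```dockerfile
WORKDIR /app
# Dependency manifests change rarely; copying them first lets Docker
# reuse the cached install layer when only source files change.
COPY pyproject.toml uv.lock ./
RUN uv sync --locked
# Source changes invalidate only the layers below this point.
COPY . .
```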

Tips for Python projects

  • Use the uv package manager: This modern, Rust-based package manager is faster than pip and supports lockfiles.
  • The recommended base image for uv-based projects is ghcr.io/astral-sh/uv:python3.13-bookworm-slim (or another Python version).
  • The recommended base image for pip-based projects is python:3.13-slim (or another Python version).
  • Check your uv.lock file into source control. This ensures everyone on your team is using the same dependencies.
  • Install dependencies with uv sync --locked. This ensures that the dependencies in production always match your lockfile.

Tips for Node.js projects

  • Use the pnpm package manager: This modern package manager is faster and more efficient than npm, and it's the recommended way to manage Node.js dependencies.
  • The recommended base image for pnpm-based projects is node:22-slim (or another Node.js version).

Templates

These templates are automatically created by the LiveKit CLI to match your project type. They support both Python and Node.js projects.

The most up-to-date version of these templates is always available in the LiveKit CLI examples folder.

This template is offered for both uv and pip; the uv variant is shown below.

It assumes that your agent entrypoint is in agent.py. You can modify this path as needed.

# syntax=docker/dockerfile:1
# Use the official UV Python base image (Python 3.13 by default) on Debian Bookworm
# UV is a fast Python package manager that provides better performance than pip
# We use the slim variant to keep the image size smaller while still having essential tools
ARG PYTHON_VERSION=3.13
FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-bookworm-slim AS base
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# --- Build stage ---
# Install dependencies, build native extensions, and prepare the application
FROM base AS build
# Install build dependencies required for Python packages with native extensions
# gcc: C compiler needed for building Python packages with C extensions
# g++: C++ compiler needed for building Python packages with C++ extensions
# python3-dev: Python development headers needed for compilation
# We clean up the apt cache after installation to keep the image size down
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*
# Create a new directory for our application code
# And set it as the working directory
WORKDIR /app
# Copy just the dependency files first, for more efficient layer caching
COPY pyproject.toml uv.lock ./
# Install Python dependencies using UV's lock file
# --locked ensures we use exact versions from uv.lock for reproducible builds
# This creates a virtual environment and installs all dependencies
# Ensure your uv.lock file is checked in for consistency across environments
RUN uv sync --locked
# Copy all remaining application files into the container
# This includes source code, configuration files, and dependency specifications
# (Excludes files specified in .dockerignore)
COPY . .
# Pre-download any ML models or files the agent needs
# This ensures the container is ready to run immediately without downloading
# dependencies at runtime, which improves startup time and reliability
RUN uv run "agent.py" download-files
# --- Production stage ---
# Build tools (gcc, g++, python3-dev) are not included in the final image
FROM base
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/build/building/best-practices/#user
ARG UID=10001
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/app" \
    --shell "/sbin/nologin" \
    --uid "${UID}" \
    appuser
WORKDIR /app
# Copy the application and virtual environment with correct ownership in a single layer
# This avoids expensive recursive chown and excludes build tools from the final image
COPY --from=build --chown=appuser:appuser /app /app
# Switch to the non-privileged user for all subsequent operations
# This improves security by not running as root
USER appuser
# Run the application using UV
# UV will activate the virtual environment and run the agent.
# The "start" command tells the worker to connect to LiveKit and begin waiting for jobs.
CMD ["uv", "run", "agent.py", "start"]

This template uses pnpm and TypeScript but can be modified for other environments.

The Dockerfile assumes that your project contains build, download-files, and start scripts. See the package.json file template for examples.
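
Under those assumptions, the relevant scripts section of package.json might look like the following. The script bodies mirror the examples given in the Dockerfile comments; adjust paths such as dist/agent.js to your project:

```json
{
  "scripts": {
    "build": "tsc",
    "download-files": "pnpm run build && node dist/agent.js download-files",
    "start": "node dist/agent.js start"
  }
}
```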

# syntax=docker/dockerfile:1
# Use the official Node.js v22 base image
# We use the slim variant to keep the image size smaller while still having essential tools
ARG NODE_VERSION=22
FROM node:${NODE_VERSION}-slim AS base
# Configure pnpm installation directory and ensure it is on PATH
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
# Install required system packages and pnpm, then clean up the apt cache for a smaller image
# ca-certificates: enables TLS/SSL for securely fetching dependencies and calling HTTPS services
# --no-install-recommends keeps the image minimal
RUN apt-get update -qq && apt-get install --no-install-recommends -y ca-certificates && rm -rf /var/lib/apt/lists/*
# Pin pnpm version for reproducible builds
RUN npm install -g pnpm@10
# --- Build stage ---
# Install dependencies, build the project, and prepare production assets
FROM base AS build
# Create a new directory for our application code
# And set it as the working directory
WORKDIR /app
# Copy just the dependency files first, for more efficient layer caching
COPY package.json pnpm-lock.yaml ./
# Install dependencies using pnpm
# --frozen-lockfile ensures we use exact versions from pnpm-lock.yaml for reproducible builds
RUN pnpm install --frozen-lockfile
# Copy all remaining application files into the container
# This includes source code, configuration files, and dependency specifications
# (Excludes files specified in .dockerignore)
COPY . .
# Build the project
# Your package.json must contain a "build" script, such as `"build": "tsc"`
RUN pnpm build
# Pre-download any ML models or files the agent needs
# This ensures the container is ready to run immediately without downloading
# dependencies at runtime, which improves startup time and reliability
# Your package.json must contain a "download-files" script, such as `"download-files": "pnpm run build && node dist/agent.js download-files"`
RUN pnpm download-files
# Remove dev dependencies for a leaner production image
RUN pnpm prune --prod
# --- Production stage ---
FROM base
# Create a non-privileged user that the app will run under
# See https://docs.docker.com/build/building/best-practices/#user
ARG UID=10001
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/app" \
    --shell "/sbin/nologin" \
    --uid "${UID}" \
    appuser
WORKDIR /app
# Copy the built application with correct ownership in a single layer
# This avoids expensive recursive chown operations on node_modules
COPY --from=build --chown=appuser:appuser /app /app
USER appuser
# Set Node.js to production mode
ENV NODE_ENV=production
# Run the application
# The "start" command tells the worker to connect to LiveKit and begin waiting for jobs.
# Your package.json must contain a "start" script, such as `"start": "node dist/agent.js start"`
CMD ["pnpm", "start"]