Intro to LiveKit

An overview of the LiveKit platform.

LiveKit is a realtime platform that enables developers to build video, voice, and data capabilities into their applications. Building on WebRTC, it supports a broad range of frontend and backend platforms.

Problems LiveKit solves

  • Scalability: Building scalable realtime communication systems is challenging. LiveKit is an opinionated, horizontally scaling WebRTC SFU (Selective Forwarding Unit) that lets applications scale to large numbers of concurrent users.
  • Complexity: Implementing WebRTC from scratch is complex, involving signaling, media handling, networking, and more. LiveKit solves these undifferentiated problems, presenting a consistent set of API primitives across platforms.
  • Fragmented solutions: Many WebRTC solutions lack advanced features like simulcast, selective subscription, SVC codecs, and end-to-end encryption. LiveKit provides these and more out of the box.
  • Vendor lock-in: Proprietary WebRTC solutions can lead to vendor lock-in. As an open-source platform, LiveKit makes it easy to switch between our self-hosted and Cloud offerings.

How LiveKit works

LiveKit's architecture consists of the following key components.

  • LiveKit Server: This is the core component of the LiveKit platform. It acts as a Selective Forwarding Unit (SFU), handling signaling, media, and other realtime communication tasks.
  • SDKs: LiveKit provides full-featured web, native, and backend SDKs that abstract away WebRTC complexity for easier integration (see the backend sketch after this list).
  • Egress: This component allows recording or live streaming of rooms and individual participant tracks.
  • Ingress: This enables ingesting external streams (such as RTMP and WHIP) into LiveKit rooms.
  • AI Agents: This server-side framework lets you build AI-powered applications with realtime capabilities.
  • CLI: This utility allows you to create and manage LiveKit services, whether cloud or self-hosted.
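
As a rough sketch of what the backend SDKs look like, the following Node/TypeScript example creates and lists rooms with the livekit-server-sdk package. The host URL and API credentials are placeholders for your own deployment, not real values.

```typescript
import { RoomServiceClient } from 'livekit-server-sdk';

// Placeholder host and credentials: substitute your deployment's URL and API key/secret.
const svc = new RoomServiceClient(
  'https://my-livekit-host',
  'LK_API_KEY',
  'LK_API_SECRET',
);

async function main() {
  // Create a room that closes after 10 minutes of inactivity.
  const room = await svc.createRoom({
    name: 'my-first-room',
    emptyTimeout: 600, // seconds
    maxParticipants: 20,
  });
  console.log('created room:', room.name);

  // List all active rooms on the server.
  const rooms = await svc.listRooms();
  console.log('active rooms:', rooms.map((r) => r.name));
}

main().catch(console.error);
```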

Build a LiveKit implementation

Here's a high-level overview of the steps involved in building a LiveKit implementation:

  1. Set up LiveKit Server: Deploy the LiveKit Server binary or Docker image to your infrastructure (self-hosted), or use the LiveKit Cloud managed service.
  2. Create an access token server: Set up a backend endpoint that generates signed access tokens (JWTs) for secure room access, using one of the LiveKit server SDKs (see the token sketch after this list).
  3. Build your frontend: Develop the user interface for video conferencing, including camera and microphone access, participant lists, and screen sharing, using the appropriate LiveKit SDK for your platform (see the connection sketch after this list).
  4. Add advanced features (optional): Integrate additional LiveKit components like Egress, Ingress, or AI Agents based on your application's requirements.
  5. Test and optimize: Thoroughly test your implementation across different networks, devices, and loads. Optimize media server settings (CPU, bandwidth, codecs) as needed.
  6. Deploy and monitor: Deploy your LiveKit implementation to production and set up monitoring and logging for performance tracking and debugging.
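
As a sketch of step 2, the following Node/TypeScript endpoint mints access tokens with the livekit-server-sdk package. The Express route, default room name, and environment variable names are illustrative assumptions, not part of LiveKit itself.

```typescript
import express from 'express';
import { AccessToken } from 'livekit-server-sdk';

// Illustrative: in practice, load your API key and secret from the environment.
const API_KEY = process.env.LIVEKIT_API_KEY!;
const API_SECRET = process.env.LIVEKIT_API_SECRET!;

const app = express();

// GET /token?room=my-room&identity=alice
app.get('/token', async (req, res) => {
  const room = String(req.query.room ?? 'my-room');
  const identity = String(req.query.identity ?? 'anonymous');

  // Mint a JWT scoped to a single room for this participant.
  const at = new AccessToken(API_KEY, API_SECRET, { identity });
  at.addGrant({ roomJoin: true, room });

  res.json({ token: await at.toJwt() });
});

app.listen(3000);
```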

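Step 3 varies by platform; on the web, joining a room with the livekit-client SDK looks roughly like the sketch below. The server URL and the token (fetched from a token endpoint like the one above) are placeholders.

```typescript
import { Room, RoomEvent } from 'livekit-client';

async function joinRoom(serverUrl: string, token: string) {
  const room = new Room();

  // Render each remote audio or video track as it becomes available.
  room.on(RoomEvent.TrackSubscribed, (track) => {
    document.body.appendChild(track.attach());
  });

  await room.connect(serverUrl, token);

  // Publish the local camera and microphone.
  await room.localParticipant.enableCameraAndMicrophone();

  return room;
}

// Example usage with placeholder values:
// const room = await joinRoom('wss://my-livekit-host', tokenFromYourTokenServer);
```
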
Deployment considerations

When building a LiveKit implementation, you can either self-host the open-source LiveKit server, or use the managed LiveKit Cloud service. Here's a table comparing them.

|  | Open source | Cloud |
| --- | --- | --- |
| Realtime features | Full support | Full support |
| Egress (recording, streaming) | Full support | Full support |
| Ingress (RTMP, WHIP, SRT ingest) | Full support | Full support |
| SIP (telephony integration) | Full support | Full support |
| Agents framework | Full support | Full support |
| Who manages it | You | LiveKit |
| Architecture | Single-home SFU | Mesh SFU |
| Connection model | Users in the same room connect to the same server | Each user connects to the closest server |
| Max users per room | Up to ~3,000 | No limit |
| Analytics & telemetry | — | Yes |
| Cloud dashboard | — | Yes |
| Uptime guarantees | — | 99.99% |