Built with LiveKit SFU
LiveKit Cloud builds on our open-source SFU. This means it supports the exact same client and server APIs as the open-source stack.
Maintaining compatibility with LiveKit's open-source stack (OSS) is important to us. We didn't want any developer locked into using Cloud, or forced to integrate a different set of features, APIs, or SDKs for their applications to work with it. Our design goal: a developer should be able to switch between Cloud and self-hosted without changing a line of code.
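To illustrate, here's a minimal connection sketch using the livekit-client JavaScript SDK. The URL and token values are placeholders; whether the URL points at a Cloud project or a self-hosted deployment, the application code is identical.

```ts
import { Room } from 'livekit-client';

// Placeholder values: swap the URL between your Cloud project and a
// self-hosted deployment; nothing else in the code changes.
const url = 'wss://my-project.livekit.cloud'; // or 'wss://livekit.example.com'
const token = '<access token minted by your server>';

const room = new Room();
await room.connect(url, token);
console.log(`connected to room ${room.name}`);
```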
Distributed Mesh Architecture
In contrast to traditional WebRTC architectures, LiveKit Cloud runs multiple SFU instances in a mesh formation. We've developed the capability for media servers to discover and connect to one another in order to relay media between them. This allows us to bypass the single-server limitation of traditional SFU and MCU architectures.
With a multi-home architecture, participants no longer need to connect to the same server. When participants from different regions join the same meeting, they'll each connect to the SFU closest to them, minimizing latency and transmission loss between the participant and SFU.
Each SFU instance establishes connections to other instances over optimized inter-data center networks. Inter-data center networks often run close to internet backbones, delivering high throughput with a minimal number of network hops.
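To make the multi-home idea concrete, here is a conceptual sketch of latency-based server selection: probe a set of candidate regions and attach to the one with the lowest round-trip time. This is not LiveKit's actual routing code, which runs server-side, and the region endpoints are hypothetical.

```ts
// Conceptual sketch only: illustrates "each participant connects to the
// nearest SFU", not how LiveKit Cloud routes traffic internally.
// Region endpoints below are hypothetical.
const regions = [
  'https://us-west.example.com',
  'https://eu-central.example.com',
  'https://ap-south.example.com',
];

// Measure round-trip time to one endpoint with a lightweight request.
async function measureRtt(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: 'HEAD' });
  return performance.now() - start;
}

// Probe every region in parallel and pick the lowest-latency one.
async function nearestRegion(): Promise<string> {
  const probes = await Promise.all(
    regions.map(async (url) => ({ url, rtt: await measureRtt(url) })),
  );
  probes.sort((a, b) => a.rtt - b.rtt);
  return probes[0].url;
}
```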
Resilient to failures
Anything that can fail, will. LiveKit Cloud is designed to anticipate (and recover from) failures in every software and hardware component.
Layers of redundancy are built into the system. When a media server fails, impacted participants are automatically moved to another instance. We isolate shared infrastructure, like our message bus, to individual data centers, so a failure in one region can't cascade to others.
When an entire data center fails, customer traffic is automatically migrated to the next closest data center. LiveKit's client SDKs will perform a "session migration": moving existing WebRTC sessions to a different media server without service interruption for your users.
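From an application's perspective, a migration surfaces through the client SDK's reconnection events; your code observes the transition rather than managing it. A minimal sketch with livekit-client:

```ts
import { Room, RoomEvent } from 'livekit-client';

const room = new Room();

// During a failover or migration the SDK re-establishes the WebRTC
// session on another media server; the app just listens for events.
room
  .on(RoomEvent.Reconnecting, () => {
    console.log('connection interrupted, SDK is reconnecting/migrating');
  })
  .on(RoomEvent.Reconnected, () => {
    console.log('session re-established on a healthy server');
  })
  .on(RoomEvent.Disconnected, () => {
    console.log('session could not be recovered');
  });
```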
To serve end users around the world, our infrastructure runs across multiple Cloud vendors and data centers. Today we have data centers in North America, Europe, and Asia, delivering under 100ms of latency for users in those regions.
Designed to scale
When you need to place many viewers on a media track, as in a livestream, LiveKit Cloud handles that capacity dynamically by forming a distribution mesh, similar to a CDN. This happens automatically as your session scales up; no special configuration is necessary. Every LiveKit Cloud project scales automatically.
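For instance, a livestream viewer joins with an ordinary subscribe-only token; there is no capacity knob to tune. A sketch using the Node livekit-server-sdk, with placeholder credentials:

```ts
import { AccessToken } from 'livekit-server-sdk';

// Mint a subscribe-only token for a livestream viewer. The API key,
// secret, and room name are placeholders. There is no scaling or
// capacity configuration; the distribution mesh forms on its own.
async function viewerToken(identity: string): Promise<string> {
  const at = new AccessToken('LK_API_KEY', 'LK_API_SECRET', { identity });
  at.addGrant({
    room: 'livestream',
    roomJoin: true,
    canSubscribe: true,
    canPublish: false, // viewers only receive media
  });
  return at.toJwt(); // returns a Promise<string> in recent SDK versions
}
```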
The theoretical limit of this architecture is on the order of millions of participants per room/session. For practical purposes, we've placed a limit of 100k simultaneous participants in the same session. If your real-time application operates at a larger scale than this, you can request a limit increase from your Cloud dashboard or get in touch with us.
For a deeper look into the design decisions we've made for LiveKit Cloud, you can read more on our blog.