LiveKit has only three core constructs: a room, a participant, and a track. A room is simply a space (we originally contemplated calling it that, but stuck to the more familiar term) containing one or more participants. A participant can publish one or more tracks and/or subscribe to one or more tracks from other participants.
Room is a container object representing a LiveKit session.
Each participant in a room receives updates about changes to other participants in the same room. For example, when a participant adds, removes, or modifies the state (e.g. mute) of a track, other participants will be notified of this change. This is a powerful mechanism for synchronizing state, and fundamental to building any real-time experience.
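This notification flow can be sketched with a minimal event model. The following is plain illustrative Python, not LiveKit SDK code; names like `on_update` and `broadcast` are invented for the sketch:

```python
# Illustrative model of room-wide state notifications (not LiveKit SDK code).

class Participant:
    def __init__(self, identity):
        self.identity = identity
        self.events = []  # updates received about other participants

    def on_update(self, event):
        self.events.append(event)

class Room:
    def __init__(self):
        self.participants = {}

    def join(self, participant):
        self.participants[participant.identity] = participant

    def broadcast(self, sender, event):
        # Every *other* participant in the room is notified of the change.
        for identity, p in self.participants.items():
            if identity != sender.identity:
                p.on_update(event)

room = Room()
alice, bob = Participant("alice"), Participant("bob")
room.join(alice)
room.join(bob)

# Alice mutes her microphone; Bob (but not Alice) receives the update.
room.broadcast(alice, ("track_muted", "alice", "microphone"))
print(bob.events)    # [('track_muted', 'alice', 'microphone')]
print(alice.events)  # []
```

The real SDKs expose the same idea as room and participant events rather than a raw broadcast call, but the fan-out to every other participant is the same.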
A room can be created manually via server API, or automatically, when the first participant joins it. Once the last participant leaves a room, it will be closed after a short delay.
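The lifecycle above can be modeled in a few lines. This is an illustrative sketch, not server code; `close_delay` is a hypothetical parameter standing in for the server's "short delay" before an empty room is closed:

```python
import time

# Illustrative sketch of room lifecycle semantics (not LiveKit server code).

class RoomRegistry:
    def __init__(self, close_delay=0.1):
        self.rooms = {}          # name -> set of participant identities
        self.empty_since = {}    # name -> timestamp the room became empty
        self.close_delay = close_delay

    def join(self, room_name, identity):
        # The room is created automatically when the first participant joins.
        self.rooms.setdefault(room_name, set()).add(identity)
        self.empty_since.pop(room_name, None)

    def leave(self, room_name, identity):
        self.rooms[room_name].discard(identity)
        if not self.rooms[room_name]:
            self.empty_since[room_name] = time.monotonic()

    def sweep(self):
        # Close rooms that have been empty for longer than the delay.
        now = time.monotonic()
        for name, since in list(self.empty_since.items()):
            if now - since >= self.close_delay:
                del self.rooms[name]
                del self.empty_since[name]

registry = RoomRegistry(close_delay=0.05)
registry.join("demo", "alice")   # room "demo" springs into existence
registry.leave("demo", "alice")  # last participant leaves
registry.sweep()                 # too soon: room is still open
assert "demo" in registry.rooms
time.sleep(0.06)
registry.sweep()                 # delay has elapsed: room is closed
assert "demo" not in registry.rooms
```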
Participant is a user in a room, represented by a unique, client-provided identity and a server-generated sid. A participant object also contains metadata about the participant's state and the tracks they've published.
A participant's identity is unique per room. Thus, if participants with the same identity join a room, only the most recent one to join will remain; the server automatically disconnects other participants using that identity.
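The "most recent join wins" behavior can be sketched with a map keyed by identity. This is an illustrative model only, not SDK or server code:

```python
# Illustrative model of per-room identity uniqueness (not LiveKit SDK code).

class Room:
    def __init__(self):
        self.participants = {}   # identity -> connection id
        self.disconnected = []   # connections the server has dropped

    def join(self, identity, connection_id):
        # A newer participant with the same identity displaces the older one:
        # the server disconnects the existing connection.
        if identity in self.participants:
            self.disconnected.append(self.participants[identity])
        self.participants[identity] = connection_id

room = Room()
room.join("alice", "conn-1")
room.join("alice", "conn-2")  # same identity joins again

print(room.participants)   # {'alice': 'conn-2'} -- most recent join wins
print(room.disconnected)   # ['conn-1']
```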
There are two kinds of participants:
- LocalParticipant represents the current user who, by default, can publish tracks in a room.
- RemoteParticipant represents a remote user. The local participant, by default, can subscribe to any tracks published by a remote participant.
A participant may also exchange data with one or many other participants.
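Data exchange can be sketched as messages addressed to some or all participants. This is plain illustrative Python; the `send_data` name and destination-list parameter are invented for the sketch, though the real SDKs offer a similar data-publishing facility:

```python
# Illustrative sketch of participant-to-participant data messages
# (not LiveKit SDK code).

class Room:
    def __init__(self):
        self.inboxes = {}  # identity -> received payloads

    def join(self, identity):
        self.inboxes[identity] = []

    def send_data(self, sender, payload, destinations=None):
        # With no destination list, the payload goes to every other
        # participant; otherwise only the named participants receive it.
        targets = destinations or [i for i in self.inboxes if i != sender]
        for identity in targets:
            self.inboxes[identity].append((sender, payload))

room = Room()
for name in ("alice", "bob", "carol"):
    room.join(name)

room.send_data("alice", b"hello everyone")           # broadcast
room.send_data("alice", b"just for bob", ["bob"])    # targeted

print(room.inboxes["bob"])    # both messages
print(room.inboxes["carol"])  # only the broadcast
```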
Track represents a stream of information, be it audio, video, or custom data. By default, a participant in a room may publish tracks, such as their camera or microphone streams, and subscribe to one or more tracks published by other participants. In order to model a track which may not be subscribed to by the local participant, there are two related objects:

- Track: a wrapper around the native WebRTC MediaStreamTrack, representing a playable track.
- TrackPublication: a track that's been published to the server. If the track is subscribed to by the local participant and available for playback locally, the publication will have a .track attribute representing the associated Track.

With this separation, we can list and manipulate tracks (via track publications) published by other participants, even if the local participant is not subscribed to them.
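The split between metadata and playable media can be modeled directly. This is an illustrative sketch, not SDK code; the sid value and the subscribe/unsubscribe method names are invented for the example:

```python
# Illustrative model of the Track / TrackPublication split (not SDK code).

class Track:
    """Stands in for the playable wrapper around a WebRTC MediaStreamTrack."""
    def __init__(self, kind):
        self.kind = kind

class TrackPublication:
    """Metadata about a published track; .track is only set once subscribed."""
    def __init__(self, sid, kind, name):
        self.sid = sid
        self.kind = kind
        self.name = name
        self.subscribed = False
        self.track = None  # populated only while subscribed

    def subscribe(self):
        self.subscribed = True
        self.track = Track(self.kind)

    def unsubscribe(self):
        self.subscribed = False
        self.track = None

# A remote participant's publication is visible before we subscribe to it:
pub = TrackPublication(sid="TR_1234", kind="video", name="camera")
print(pub.name, pub.track)   # camera None -- metadata without media

pub.subscribe()
print(pub.track.kind)        # video -- now playable locally

pub.unsubscribe()            # media stops; we may resubscribe at any time
```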
TrackPublication contains metadata about its associated track:

| Field | Description |
| --- | --- |
| sid | A unique identifier for this particular track, generated by the LiveKit server. |
| kind | The type of track, whether it be audio, video, or arbitrary data. |
| source | Source of media: Camera, Microphone, ScreenShare, or ScreenShareAudio. |
| name | The name given to this particular track when initially published. |
| subscribed | Indicates whether or not this track has been subscribed to by the local participant. |
| track | If the local participant is subscribed, the associated Track object representing a WebRTC track. |
| muted | Whether this track is muted by the local participant. While muted, it won't receive new bytes from the server. |
When a participant is subscribed to a track (which hasn't been muted by the publishing participant), they will continuously receive its data. If the participant unsubscribes, they will stop receiving media for that track and may resubscribe to it at any time.
When a participant creates or joins a room, the autoSubscribe option is set to true by default. This means the participant will automatically subscribe to all existing tracks being published, as well as any track published in the future. For more fine-grained control over track subscriptions, you can set autoSubscribe to false and instead use selective subscriptions.
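The two behaviors can be contrasted with a small model. This is illustrative plain Python, not SDK code; the `auto_subscribe` flag mirrors the autoSubscribe connect option described above:

```python
# Illustrative sketch of autoSubscribe semantics (not LiveKit SDK code).

class Room:
    def __init__(self):
        self.published = []      # names of tracks already in the room
        self.subscriptions = {}  # identity -> set of subscribed track names
        self.auto = {}           # identity -> autoSubscribe flag

    def join(self, identity, auto_subscribe=True):
        self.auto[identity] = auto_subscribe
        # With autoSubscribe on, the participant immediately subscribes to
        # every track that is already being published...
        self.subscriptions[identity] = (
            set(self.published) if auto_subscribe else set()
        )

    def publish(self, track_name):
        self.published.append(track_name)
        # ...and to any track published in the future.
        for identity, auto in self.auto.items():
            if auto:
                self.subscriptions[identity].add(track_name)

room = Room()
room.publish("alice-camera")
room.join("bob")                          # default: autoSubscribe on
room.join("carol", auto_subscribe=False)  # selective subscriptions instead
room.publish("alice-mic")

print(room.subscriptions["bob"])    # {'alice-camera', 'alice-mic'}
print(room.subscriptions["carol"])  # set()
```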
For most use cases, muting a track on the publisher side, or unsubscribing from it on the subscriber side, is recommended over unpublishing it: publishing a track requires a negotiation phase and consequently has worse time-to-first-byte performance.