Camera & microphone

Publish realtime audio and video from any device.

Overview

LiveKit includes a simple and consistent method to publish the user's camera and microphone, regardless of the device or browser they are using. In all cases, LiveKit displays the correct indicators when recording is active and acquires the necessary permissions from the user.

// Enables the camera and publishes it to a new video track
room.localParticipant.setCameraEnabled(true);
// Enables the microphone and publishes it to a new audio track
room.localParticipant.setMicrophoneEnabled(true);

Device permissions

In native and mobile apps, you typically need to acquire the user's consent before accessing the microphone or camera. LiveKit integrates with the system privacy settings to request recording permission and to display the correct indicators when audio or video capture is active.

In web browsers, the user is automatically prompted to grant camera and microphone permissions the first time your app attempts to access them; no additional configuration is required.
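
If you prefer to prompt the user for device access up front (for example, on a pre-join screen), you can do so with the standard browser getUserMedia API before enabling any LiveKit tracks. The sketch below is illustrative and not part of the LiveKit SDK; the requestMediaPermissions helper name is our own.

// Prompt for camera and microphone access using the standard browser API.
async function requestMediaPermissions(): Promise<boolean> {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    // Stop the probe tracks right away; LiveKit acquires its own when you enable devices.
    stream.getTracks().forEach((track) => track.stop());
    return true;
  } catch (err) {
    console.warn('Camera/microphone permission was denied', err);
    return false;
  }
}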

For iOS, add these entries to your Info.plist:

<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) uses your camera</string>
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) uses your microphone</string>

To enable background audio, you must also add the "Background Modes" capability with "Audio, AirPlay, and Picture in Picture" selected.

Your Info.plist should have:

<key>UIBackgroundModes</key>
<array>
  <string>audio</string>
</array>

Mute and unmute

You can mute any track to stop it from sending data to the server. When a track is muted, LiveKit emits a TrackMuted event to all participants in the room. You can use this event to update your app's UI so that every user sees the correct state.

Mute/unmute a track using its corresponding LocalTrackPublication object.
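
For example, with the JavaScript SDK you might look up the camera publication by source and toggle it. This is a minimal sketch; the getTrackPublication lookup shown here is available in recent livekit-client versions and may be named differently in older releases or other SDKs.

import { Track } from 'livekit-client';

// Look up the local camera publication and toggle its muted state.
const publication = room.localParticipant.getTrackPublication(Track.Source.Camera);
if (publication) {
  await publication.mute();   // stops sending video; fires TrackMuted for everyone
  await publication.unmute(); // resumes sending; fires TrackUnmuted for everyone
}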

Track permissions

By default, any published track can be subscribed to by all participants. However, publishers can restrict who can subscribe to their tracks using Track Subscription Permissions:

// Disallow subscriptions by default, then allow a specific participant
// to subscribe to all of this participant's tracks
localParticipant.setTrackSubscriptionPermissions(false, [
  {
    participantIdentity: 'allowed-identity',
    allowAll: true,
  },
]);

Publishing from backend

You may also publish audio/video tracks from a backend process, which can be consumed just like any camera or microphone track. The LiveKit Agents framework makes it easy to add a programmable participant to any room, and publish media such as synthesized speech or video.

LiveKit also includes complete SDKs for server environments in Go, Rust, Python, and Node.js.
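
As a rough sketch of backend publishing with the Node.js realtime SDK (@livekit/rtc-node), the example below connects as a participant, publishes an audio track, and pushes a single frame of silence into it. The LIVEKIT_URL and LIVEKIT_TOKEN environment variables are placeholders, and exact class or option names may differ between SDK versions.

import {
  AudioFrame,
  AudioSource,
  LocalAudioTrack,
  Room,
  TrackPublishOptions,
  TrackSource,
} from '@livekit/rtc-node';

const sampleRate = 48000;
const channels = 1;

// Connect to the room as a backend participant.
const room = new Room();
await room.connect(process.env.LIVEKIT_URL, process.env.LIVEKIT_TOKEN);

// Create an audio source and publish it as a local track.
const source = new AudioSource(sampleRate, channels);
const track = LocalAudioTrack.createAudioTrack('backend-audio', source);
const options = new TrackPublishOptions({ source: TrackSource.SOURCE_MICROPHONE });
await room.localParticipant.publishTrack(track, options);

// Push PCM frames (for example, synthesized speech) into the source.
const samplesPerFrame = sampleRate / 100; // 10 ms of audio
const silence = new Int16Array(samplesPerFrame * channels);
await source.captureFrame(new AudioFrame(silence, sampleRate, channels, samplesPerFrame));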

You can also publish media using the LiveKit CLI.

Audio and video synchronization

Note

AVSynchronizer is currently only available in Python.

While WebRTC handles A/V sync natively, some scenarios require manual synchronization, such as aligning generated video frames with synthesized voice output.

The AVSynchronizer utility helps maintain sync by aligning the first audio and video frames. Subsequent frames are synchronized automatically based on the configured video FPS and audio sample rate.

For implementation examples, see the video stream examples in our GitHub repository.