Publishing media


Camera and microphone

It's simple to publish the local participant's camera and/or microphone streams to the room. We provide a consistent way to do this across platforms:

// Turns camera track on
await room.localParticipant.setCameraEnabled(true);
// Turns microphone track on
await room.localParticipant.setMicrophoneEnabled(true);

and to mute them:

await room.localParticipant.setCameraEnabled(false);
await room.localParticipant.setMicrophoneEnabled(false);

Disabling camera or microphone will turn off their respective recording indicators. Other participants will receive a TrackMuted event.
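With the JS SDK, you can listen for this event to keep your UI in sync. A minimal sketch, assuming the connected room from above:

import { RoomEvent } from 'livekit-client';

room
  .on(RoomEvent.TrackMuted, (publication, participant) => {
    // e.g. render a muted indicator next to this participant
    console.log('muted', publication.trackSid, participant.identity);
  })
  .on(RoomEvent.TrackUnmuted, (publication, participant) => {
    console.log('unmuted', publication.trackSid, participant.identity);
  });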

Screen sharing

LiveKit also supports screen sharing natively on supported platforms.

// this will trigger a browser prompt to share the screen
await room.localParticipant.setScreenShareEnabled(true);
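In browsers, you can also capture audio alongside the screen. A sketch, assuming the JS SDK's ScreenShareCaptureOptions:

// request tab/system audio along with the screen, where the browser supports it
await room.localParticipant.setScreenShareEnabled(true, { audio: true });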

Publishing from backend

You can also publish media from your backend. Both our Go server SDK and CLI are capable of publishing WebRTC-compatible streams to LiveKit.

Advanced track management

setCameraEnabled, setMicrophoneEnabled, and setScreenShareEnabled are convenience wrappers around our Track APIs; you can also create tracks manually and publish or unpublish them at any time. There is no limit to the number of tracks a participant can publish.

LiveKit uses sane defaults for the tracks it publishes, but exposes knobs you can fine-tune for your application. These settings are organized into two categories:

  • Capture settings: how media is captured, including device selection and capabilities.
  • Publish settings: how it's encoded, including bitrate and framerate.
import { Room, createLocalAudioTrack, createLocalVideoTrack, VideoPresets } from 'livekit-client';

// option 1: set room defaults
const room = new Room({
  audioCaptureDefaults: {
    autoGainControl: true,
    deviceId: '',
    echoCancellation: true,
    noiseSuppression: true,
  },
  videoCaptureDefaults: {
    deviceId: '',
    facingMode: 'user',
    resolution: {
      width: 1280,
      height: 720,
      frameRate: 30,
    },
  },
  publishDefaults: {
    videoEncoding: {
      maxBitrate: 1_500_000,
      maxFramerate: 30,
    },
    screenShareEncoding: {
      maxBitrate: 1_500_000,
      maxFramerate: 30,
    },
    audioBitrate: 20_000,
    dtx: true,
    // only needed if overriding defaults
    videoSimulcastLayers: [
      {
        width: 640,
        height: 360,
        encoding: {
          maxBitrate: 500_000,
          maxFramerate: 20,
        },
      },
      {
        width: 320,
        height: 180,
        encoding: {
          maxBitrate: 150_000,
          maxFramerate: 15,
        },
      },
    ],
  },
});

// option 2: settings for individual tracks
async function publishTracks() {
  const videoTrack = await createLocalVideoTrack({
    facingMode: 'user',
    // preset resolutions
    resolution: VideoPresets.h720,
  });
  const audioTrack = await createLocalAudioTrack({
    echoCancellation: true,
    noiseSuppression: true,
  });
  const videoPublication = await room.localParticipant.publishTrack(videoTrack);
  const audioPublication = await room.localParticipant.publishTrack(audioTrack);
}

See options.ts for details.

Audio on mobile

When using audio with the native iOS or Android SDKs, it's important to work with the device's audio stack so that your app behaves well alongside other apps.

With the Flutter SDK, the audio session is managed automatically.

On iOS, LiveKit provides automatic management of AVAudioSession. We ensure the minimal set of audio permissions is acquired; for example, the microphone is turned off when it isn't publishing.

It may be desirable to override the default settings by setting LiveKit.onShouldConfigureAudioSession:

LiveKit.onShouldConfigureAudioSession = { (_ newState: AudioTrack.TracksState,
                                           _ oldState: AudioTrack.TracksState) -> Void in
    let config = RTCAudioSessionConfiguration.webRTC()
    config.category = AVAudioSession.Category.playAndRecord.rawValue
    config.mode = AVAudioSession.Mode.videoChat.rawValue
    config.categoryOptions = AVAudioSession.CategoryOptions.duckOthers
    LiveKit.configureAudioSession(config, setActive: newState != .none)
}

Some combinations of `Category`, `Mode`, `RouteSharingPolicy`, and `CategoryOptions` are incompatible, and `AVFoundation`'s documentation doesn't make it clear which, so it can be tricky to pick a valid combination of values for these properties.

Mute and unmute

You can mute any track to stop it from sending data to the server. When a track is muted, LiveKit will trigger a TrackMuted event on all participants in the room. You can use this event to update your app's UI and reflect the correct state to all users in the room.

Mute/unmute a track using its corresponding LocalTrackPublication object.
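In the JS SDK this looks like the sketch below, reusing the videoPublication returned by publishTrack in the earlier example:

// stop sending media; other participants receive TrackMuted
await videoPublication.mute();
// resume sending media; other participants receive TrackUnmuted
await videoPublication.unmute();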

Video simulcast

With simulcast, a client publishes multiple versions of the same video track with varying bitrate profiles. This allows LiveKit to dynamically forward the stream that's most appropriate given each receiving participant's available bandwidth and desired resolution.

Adaptive layer selection takes place automatically in the SFU when the server detects that a participant is bandwidth-constrained. When the participant's bandwidth improves, the server upgrades their subscribed streams to higher resolutions.
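Subscribers can also request a maximum resolution explicitly. A sketch, assuming a RemoteTrackPublication from the JS SDK (remotePublication here is a placeholder for a publication obtained from a subscription event):

import { VideoQuality } from 'livekit-client';

// hint the SFU to forward at most the low simulcast layer for this track
remotePublication.setVideoQuality(VideoQuality.LOW);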

For more information about simulcast, see an introduction to WebRTC simulcast.

Simulcast is supported in all of LiveKit's client SDKs. It's enabled by default, and can be disabled in publish settings.
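For example, with the JS SDK (a sketch; simulcast is a field of publishDefaults, as in the options example above):

const room = new Room({
  publishDefaults: {
    simulcast: false,
  },
});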

Dynamic broadcasting

LiveKit has built-in end-to-end optimizations to reduce bandwidth use. Dynamic broadcasting (dynacast) is an advanced feature that pauses publishing of video layers that no subscriber is consuming. It works with simulcasted video too: if subscribers are only consuming the medium and low resolution streams, the publisher will pause publishing the high-resolution layer.

This feature can be enabled by setting dynacast: true in Room options.
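With the JS SDK, that looks like:

const room = new Room({
  dynacast: true,
});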
