Camera and microphone
It's simple to publish the local participant's camera and/or microphone streams to the room. We provide a consistent way to do this across platforms:
```typescript
// Turns camera track on
room.localParticipant.setCameraEnabled(true)

// Turns microphone track on
room.localParticipant.setMicrophoneEnabled(true)
```
and to mute them, you can perform:
```typescript
room.localParticipant.setCameraEnabled(false)
room.localParticipant.setMicrophoneEnabled(false)
```
Disabling the camera or microphone turns off the respective recording indicator. Other participants will receive a `TrackMuted` event.
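As a sketch of how an app might react to these events, the JS SDK exposes them as `RoomEvent.TrackMuted` and `RoomEvent.TrackUnmuted` on the room (handler signatures may vary slightly between SDK versions):

```typescript
import { Room, RoomEvent } from 'livekit-client';

const room = new Room();

// update the UI whenever any participant's track changes mute state
room
  .on(RoomEvent.TrackMuted, (publication, participant) => {
    console.log('muted:', publication.trackSid, 'by', participant.identity);
  })
  .on(RoomEvent.TrackUnmuted, (publication, participant) => {
    console.log('unmuted:', publication.trackSid, 'by', participant.identity);
  });
```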
Screen sharing
LiveKit also supports screen share natively across all platforms.
```typescript
// this will trigger a browser prompt to share the screen
await currentRoom.localParticipant.setScreenShareEnabled(true);
```
On iOS, a Broadcast Extension is required to share the screen of other apps. See the iOS screen sharing guide.
```swift
localParticipant.setScreenShare(enabled: true)
```
On Android, screen capture is performed using `MediaProjectionManager`:
```kotlin
// create an intent launcher for screen capture
// this *must* be registered prior to onCreate(), ideally as an instance val
val screenCaptureIntentLauncher =
    registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
        val resultCode = result.resultCode
        val data = result.data
        if (resultCode != Activity.RESULT_OK || data == null) {
            return@registerForActivityResult
        }
        lifecycleScope.launch {
            room.localParticipant.setScreenShareEnabled(true, data)
        }
    }

// when it's time to enable the screen share, perform the following
val mediaProjectionManager =
    getSystemService(MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
screenCaptureIntentLauncher.launch(mediaProjectionManager.createScreenCaptureIntent())
```
```dart
room.localParticipant.setScreenShareEnabled(true);
```
On Android, you must also define a foreground service in your AndroidManifest.xml:
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application>
    ...
    <service
        android:name="de.julianassmann.flutter_background.IsolateHolderService"
        android:enabled="true"
        android:exported="false"
        android:foregroundServiceType="mediaProjection" />
  </application>
</manifest>
```
On iOS, follow this guide to set up a Broadcast Extension.
```csharp
yield return currentRoom.LocalParticipant.SetScreenShareEnabled(true);
```
Publishing from backend
You can also publish media from your backend. The following SDKs & tools will enable you to publish from server environments:
Advanced track management
`setCameraEnabled`, `setMicrophoneEnabled`, and `setScreenShareEnabled` are convenience wrappers around our Track APIs; you can also create tracks manually and publish or unpublish them at any time. There is no limit to the number of tracks a participant can publish.
LiveKit uses sane defaults for the tracks it publishes, but exposes knobs for you to fine-tune for your application. These settings are organized into two categories:
- Capture settings: how media is captured, including device selection and capabilities.
- Publish settings: how it's encoded, including bitrate and framerate.
```typescript
// option 1, set room defaults
const room = new Room({
  audioCaptureDefaults: {
    autoGainControl: true,
    deviceId: '',
    echoCancellation: true,
    noiseSuppression: true,
  },
  videoCaptureDefaults: {
    deviceId: '',
    facingMode: 'user',
    resolution: {
      width: 1280,
      height: 720,
      frameRate: 30,
    },
  },
  publishDefaults: {
    videoEncoding: {
      maxBitrate: 1_500_000,
      maxFramerate: 30,
    },
    screenShareEncoding: {
      maxBitrate: 1_500_000,
      maxFramerate: 30,
    },
    audioBitrate: 20_000,
    dtx: true,
    // only needed if overriding defaults
    videoSimulcastLayers: [
      {
        width: 640,
        height: 360,
        encoding: {
          maxBitrate: 500_000,
          maxFramerate: 20,
        },
      },
      {
        width: 320,
        height: 180,
        encoding: {
          maxBitrate: 150_000,
          maxFramerate: 15,
        },
      },
    ],
  },
})

// option 2, settings for individual tracks
async function publishTracks() {
  const videoTrack = await createLocalVideoTrack({
    facingMode: 'user',
    // preset resolutions
    resolution: VideoPresets.h720,
  })
  const audioTrack = await createLocalAudioTrack({
    echoCancellation: true,
    noiseSuppression: true,
  })

  const videoPublication = await room.localParticipant.publishTrack(videoTrack)
  const audioPublication = await room.localParticipant.publishTrack(audioTrack)
}
```
See options.ts for details.
```swift
// option 1: set room defaults
var room = Room(
    delegate: self,
    roomOptions: RoomOptions(
        defaultCameraCaptureOptions: CameraCaptureOptions(
            position: .front,
            dimensions: .h720_169,
            fps: 30
        ),
        defaultAudioCaptureOptions: AudioCaptureOptions(
            echoCancellation: true,
            noiseSuppression: true,
            autoGainControl: true,
            typingNoiseDetection: true,
            highpassFilter: true
        ),
        defaultVideoPublishOptions: VideoPublishOptions(
            encoding: VideoEncoding(
                maxBitrate: 1_500_000,
                maxFps: 30
            ),
            simulcastLayers: [
                VideoParameters.presetH180_169,
                VideoParameters.presetH360_169,
            ]
        ),
        defaultAudioPublishOptions: AudioPublishOptions(
            bitrate: 20_000,
            dtx: true
        ),
        adaptiveStream: true,
        dynacast: true
    )
)

// option 2: set specifically for each track
let videoTrack = try LocalVideoTrack.createCameraTrack(options: CameraCaptureOptions(
    position: .front,
    dimensions: .h720_169,
    fps: 30
))
let audioTrack = LocalAudioTrack.createTrack(options: AudioCaptureOptions(
    echoCancellation: true,
    noiseSuppression: true
))
let videoPublication = localParticipant.publishVideoTrack(track: videoTrack)
let audioPublication = localParticipant.publishAudioTrack(track: audioTrack)
```
For convenience, LiveKit provides a few preset resolutions when creating a video track. You also have control over the encoding bitrate with publishing options.
When creating audio tracks, you have control over the capture settings.
```kotlin
// option 1: set room defaults
val options = RoomOptions(
    audioTrackCaptureDefaults = LocalAudioTrackOptions(
        noiseSuppression = true,
        echoCancellation = true,
        autoGainControl = true,
        highPassFilter = true,
        typingNoiseDetection = true,
    ),
    videoTrackCaptureDefaults = LocalVideoTrackOptions(
        deviceId = "",
        position = CameraPosition.FRONT,
        captureParams = VideoPreset169.HD.capture,
    ),
    audioTrackPublishDefaults = AudioTrackPublishDefaults(
        audioBitrate = 20_000,
        dtx = true,
    ),
    videoTrackPublishDefaults = VideoTrackPublishDefaults(
        videoEncoding = VideoPreset169.HD.encoding,
    ),
)
var room = LiveKit.connect(
    ...
    roomOptions = options,
)

// option 2: create tracks manually
val localParticipant = room.localParticipant
val audioTrack = localParticipant.createAudioTrack("audio")
localParticipant.publishAudioTrack(audioTrack)

val videoTrack = localParticipant.createVideoTrack("video", LocalVideoTrackOptions(
    CameraPosition.FRONT,
    VideoPreset.QHD.capture
))
localParticipant.publishVideoTrack(videoTrack)
```
```dart
// option 1: set room defaults
var room = Room(
  roomOptions: RoomOptions(
    defaultCameraCaptureOptions: CameraCaptureOptions(
      deviceId: '',
      cameraPosition: CameraPosition.front,
      params: VideoParametersPresets.h720_169,
    ),
    defaultAudioCaptureOptions: AudioCaptureOptions(
      deviceId: '',
      noiseSuppression: true,
      echoCancellation: true,
      autoGainControl: true,
      highPassFilter: true,
      typingNoiseDetection: true,
    ),
    defaultVideoPublishOptions: VideoPublishOptions(
      videoEncoding: VideoParametersPresets.h720_169.encoding,
      videoSimulcastLayers: [
        VideoParametersPresets.h180_169,
        VideoParametersPresets.h360_169,
      ],
    ),
    defaultAudioPublishOptions: AudioPublishOptions(
      dtx: true,
    ),
  ),
);

// option 2: create tracks individually
try {
  // video will fail when running in the iOS simulator
  var localVideo = await LocalVideoTrack.createCameraTrack(LocalVideoTrackOptions(
    position: CameraPosition.front,
    params: VideoParametersPresets.h720_169,
  ));
  await room.localParticipant.publishVideoTrack(localVideo);
} catch (e) {
  print('could not publish video: $e');
}

var localAudio = await LocalAudioTrack.createTrack();
await room.localParticipant.publishAudioTrack(localAudio);
```
The Go SDK makes it simple to publish static files or media from other sources to a room.
To publish files, they must first be encoded into the right format.
```go
// publishing a non-simulcast track from a file
file := "video.ivf"
track, err := lksdk.NewLocalFileTrack(file,
    // control FPS to ensure synchronization
    lksdk.FileTrackWithFrameDuration(33*time.Millisecond),
    lksdk.FileTrackWithOnWriteComplete(func() { fmt.Println("track finished") }),
)
if err != nil {
    return err
}
if _, err = room.LocalParticipant.PublishTrack(track, &lksdk.TrackPublicationOptions{
    Name:   name,
    Source: livekit.TrackSource_CAMERA,
}); err != nil {
    return err
}

// publishing simulcast tracks from custom sample providers
codec := &webrtc.RTPCodecCapability{
    MimeType:  "video/vp8",
    ClockRate: 90000,
    RTCPFeedback: []webrtc.RTCPFeedback{
        {Type: webrtc.TypeRTCPFBNACK},
        {Type: webrtc.TypeRTCPFBNACK, Parameter: "pli"},
    },
}
var tracks []*lksdk.LocalSampleTrack

track1, err := lksdk.NewLocalSampleTrack(codec, lksdk.WithSimulcast("test-video", &livekit.VideoLayer{
    Quality: livekit.VideoQuality_HIGH,
    Width:   1280,
    Height:  720,
}))
if err != nil {
    panic(err)
}
if err := track1.StartWrite(yourSampleProvider, nil); err != nil {
    panic(err)
}
tracks = append(tracks, track1)

// also add tracks for VideoQuality_MEDIUM and VideoQuality_LOW...

// then publish together
_, err = room.LocalParticipant.PublishSimulcastTrack(tracks, &lksdk.TrackPublicationOptions{
    Name:   name,
    Source: livekit.TrackSource_CAMERA,
})
```
```csharp
var videoTrack = Client.CreateLocalVideoTrack(new VideoCaptureOptions()
{
    FacingMode = new ConstrainDOMString()
    {
        Ideal = new string[] { FacingMode.User }
    },
    Resolution = VideoPresets.H720.GetResolution()
});
yield return videoTrack;

var audioTrack = Client.CreateLocalAudioTrack(new AudioCaptureOptions()
{
    EchoCancellation = true,
    NoiseSuppression = new ConstrainBoolean() { Ideal = true }
});
yield return audioTrack;

yield return Room.LocalParticipant.PublishTrack(videoTrack.ResolveValue);
yield return Room.LocalParticipant.PublishTrack(audioTrack.ResolveValue);
```
Mute and unmute
You can mute any track to stop it from sending data to the server. When a track is muted, LiveKit will trigger a `TrackMuted` event on all participants in the room. You can use this event to update your app's UI and reflect the correct state to all users in the room.

Mute and unmute a track using its corresponding `LocalTrackPublication` object.
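For instance, in the JS SDK you could look up the camera's publication and toggle it (a sketch; `getTrackPublication` is the lookup method in recent livekit-client versions, and older versions may name it differently):

```typescript
import { Track } from 'livekit-client';

// find the local camera's publication, then mute/unmute it
const camPub = room.localParticipant.getTrackPublication(Track.Source.Camera);
await camPub?.mute();
// ...later, resume sending
await camPub?.unmute();
```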
Video simulcast
Simulcast enables a client to publish multiple versions of the same video track, each with a different bitrate profile. This feature allows LiveKit to dynamically forward the most suitable stream based on each recipient's available bandwidth and preferred resolution.
Automatic adaptive layer selection occurs within the Selective Forwarding Unit (SFU) when the server identifies a participant with bandwidth constraints. As the participant's bandwidth improves, the server upgrades the subscribed streams to higher resolutions accordingly.
For more information about Simulcast, see an introduction to WebRTC simulcast.
Simulcast is supported in all of LiveKit's client SDKs. It's enabled by default, and can be disabled in publish settings.
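As an example, in the JS SDK simulcast can be turned off through the publish defaults (a configuration sketch; `simulcast` is the relevant flag in `publishDefaults`):

```typescript
import { Room } from 'livekit-client';

// opt out of simulcast for all tracks published by this client
const room = new Room({
  publishDefaults: {
    simulcast: false,
  },
});
```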
Dynamic broadcasting
LiveKit is designed with end-to-end optimizations to minimize bandwidth consumption. Dynamic broadcasting (Dynacast) automatically pauses the publication of video layers that are not consumed by any subscriber. This extends to simulcasted video as well: if subscribers only consume the medium and low-resolution layers, publication of the high-resolution layer is paused.
This feature can be enabled by setting `dynacast: true` in your Room options.
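In the JS SDK, for instance, this is a one-line Room option (a configuration sketch):

```typescript
import { Room } from 'livekit-client';

// pause publishing layers that no subscriber is consuming
const room = new Room({
  dynacast: true,
});
```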
Subscription permissions
By default, any track published to a Room can be subscribed to by all participants.
In certain situations, a publisher may want to limit who can subscribe to the tracks they are publishing. For instance, when two individuals wish to engage in a private conversation within a larger meeting.
For those use cases, Track Subscription Permissions provide the means for publishers to specify who is allowed to subscribe to their tracks.
```typescript
localParticipant.setTrackSubscriptionPermissions(false, [
  {
    participantIdentity: 'allowed-identity',
    allowAll: true,
  },
])
```
```swift
localParticipant.setTrackSubscriptionPermissions(
    allParticipantsAllowed: false,
    trackPermissions: [
        ParticipantTrackPermission(participantSid: "allowed-sid", allTracksAllowed: true)
    ]
)
```
```kotlin
localParticipant.setTrackSubscriptionPermissions(false, listOf(
    ParticipantTrackPermission(participantIdentity = "allowed-identity", allTracksAllowed = true),
))
```
```dart
localParticipant.setTrackSubscriptionPermissions(
  allParticipantsAllowed: false,
  trackPermissions: [
    const ParticipantTrackPermission('allowed-identity', true, null)
  ],
);
```
```csharp
yield return localParticipant.SetTrackSubscriptionPermissions(false, new ParticipantTrackPermission[]
{
    new ParticipantTrackPermission
    {
        ParticipantIdentity = "allowed-identity",
        AllowAll = true,
    }
});
```