Camera and microphone
It's simple to publish the local participant's camera and/or microphone streams to the room. We provide a consistent way to do this across platforms:
// Turns camera track on
room.localParticipant.setCameraEnabled(true)

// Turns microphone track on
room.localParticipant.setMicrophoneEnabled(true)
To mute them:
room.localParticipant.setCameraEnabled(false)
room.localParticipant.setMicrophoneEnabled(false)
Disabling the camera or microphone turns off the respective recording indicators. Other participants will receive a TrackMuted event.
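For instance, with the JS SDK you can subscribe to these events on the Room object. This is a minimal sketch; RoomEvent and the handler signatures come from livekit-client:

import { Room, RoomEvent } from 'livekit-client'

const room = new Room()
// fired whenever a participant's track is muted or unmuted
room.on(RoomEvent.TrackMuted, (publication, participant) => {
  console.log('track muted', publication.trackSid, participant.identity)
})
room.on(RoomEvent.TrackUnmuted, (publication, participant) => {
  console.log('track unmuted', publication.trackSid, participant.identity)
})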
Screen sharing
LiveKit also supports screen sharing natively on supported platforms.
// this will trigger browser prompt to share screen
await currentRoom.localParticipant.setScreenShareEnabled(true);
On Android, screen capture is performed using MediaProjectionManager:
// create an intent launcher for screen capture
// this *must* be registered prior to onCreate(), ideally as an instance val
val screenCaptureIntentLauncher = registerForActivityResult(
    ActivityResultContracts.StartActivityForResult()
) { result ->
    val resultCode = result.resultCode
    val data = result.data
    if (resultCode != Activity.RESULT_OK || data == null) {
        return@registerForActivityResult
    }
    lifecycleScope.launch {
        room.localParticipant.setScreenShareEnabled(true, data)
    }
}

// when it's time to enable the screen share, perform the following
val mediaProjectionManager =
    getSystemService(MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
screenCaptureIntentLauncher.launch(mediaProjectionManager.createScreenCaptureIntent())
room.localParticipant.setScreenShareEnabled(true);
On Android, you'll also need to define a foreground service in your AndroidManifest.xml:
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application>
    ...
    <service
        android:name="de.julianassmann.flutter_background.IsolateHolderService"
        android:enabled="true"
        android:exported="false"
        android:foregroundServiceType="mediaProjection" />
  </application>
</manifest>
yield return currentRoom.LocalParticipant.SetScreenShareEnabled(true);
Publishing from backend
You can also publish media from your backend. Both our Go server SDK and CLI are capable of publishing WebRTC-compatible streams to LiveKit.
Advanced track management
setCameraEnabled, setMicrophoneEnabled, and setScreenShareEnabled are convenience wrappers around our Track APIs; you can also create tracks manually and publish or unpublish them at any time. There is no limit to the number of tracks a participant can publish.
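For example, manual publish and unpublish with the JS SDK looks roughly like this (a minimal sketch using livekit-client helpers):

import { Room, createLocalVideoTrack } from 'livekit-client'

async function toggleVideo(room: Room) {
  // create and publish a camera track manually
  const track = await createLocalVideoTrack()
  const publication = await room.localParticipant.publishTrack(track)
  console.log('published', publication.trackSid)
  // later, stop sending this track without leaving the room
  room.localParticipant.unpublishTrack(track)
}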
LiveKit uses sane defaults for the tracks it publishes, but exposes knobs for you to fine-tune for your application. These settings are organized into two categories:
- Capture settings: how media is captured, including device selection and capabilities.
- Publish settings: how it's encoded, including bitrate and framerate.
// option 1, set room defaults
const room = new Room({
  audioCaptureDefaults: {
    autoGainControl: true,
    deviceId: '',
    echoCancellation: true,
    noiseSuppression: true,
  },
  videoCaptureDefaults: {
    deviceId: '',
    facingMode: 'user',
    resolution: {
      width: 1280,
      height: 720,
      frameRate: 30,
    },
  },
  publishDefaults: {
    videoEncoding: {
      maxBitrate: 1_500_000,
      maxFramerate: 30,
    },
    screenShareEncoding: {
      maxBitrate: 1_500_000,
      maxFramerate: 30,
    },
    audioBitrate: 20_000,
    dtx: true,
    // only needed if overriding defaults
    videoSimulcastLayers: [
      {
        width: 640,
        height: 360,
        encoding: {
          maxBitrate: 500_000,
          maxFramerate: 20,
        },
      },
      {
        width: 320,
        height: 180,
        encoding: {
          maxBitrate: 150_000,
          maxFramerate: 15,
        },
      },
    ],
  },
})

// option 2, settings for individual tracks
async function publishTracks() {
  const videoTrack = await createLocalVideoTrack({
    facingMode: 'user',
    // preset resolutions
    resolution: VideoPresets.h720,
  })
  const audioTrack = await createLocalAudioTrack({
    echoCancellation: true,
    noiseSuppression: true,
  })
  const videoPublication = await room.localParticipant.publishTrack(videoTrack)
  const audioPublication = await room.localParticipant.publishTrack(audioTrack)
}
See options.ts for details.
// option 1: set room defaults
var room = Room(
  delegate: self,
  roomOptions: RoomOptions(
    defaultCameraCaptureOptions: CameraCaptureOptions(
      position: .front,
      dimensions: .h720_169,
      fps: 30
    ),
    defaultAudioCaptureOptions: AudioCaptureOptions(
      echoCancellation: true,
      noiseSuppression: true,
      autoGainControl: true,
      typingNoiseDetection: true,
      highpassFilter: true
    ),
    defaultVideoPublishOptions: VideoPublishOptions(
      encoding: VideoEncoding(
        maxBitrate: 1_500_000,
        maxFps: 30
      ),
      simulcastLayers: [
        VideoParameters.presetH180_169,
        VideoParameters.presetH360_169,
      ]
    ),
    defaultAudioPublishOptions: AudioPublishOptions(
      bitrate: 20_000,
      dtx: true
    ),
    adaptiveStream: true,
    dynacast: true
  )
)

// option 2: set specifically for each track
let videoTrack = try LocalVideoTrack.createCameraTrack(options: CameraCaptureOptions(
  position: .front,
  dimensions: .h720_169,
  fps: 30
))
let audioTrack = LocalAudioTrack.createTrack(options: AudioCaptureOptions(
  echoCancellation: true,
  noiseSuppression: true
))
let videoPublication = localParticipant.publishVideoTrack(track: videoTrack)
let audioPublication = localParticipant.publishAudioTrack(track: audioTrack)
For convenience, LiveKit provides a few preset resolutions when creating a video track. You also have control over the encoding bitrate with publishing options.
When creating audio tracks, you have control over the capture settings.
// option 1: set room defaults
val options = RoomOptions(
    audioTrackCaptureDefaults = LocalAudioTrackOptions(
        noiseSuppression = true,
        echoCancellation = true,
        autoGainControl = true,
        highPassFilter = true,
        typingNoiseDetection = true,
    ),
    videoTrackCaptureDefaults = LocalVideoTrackOptions(
        deviceId = "",
        position = CameraPosition.FRONT,
        captureParams = VideoPreset169.HD.capture,
    ),
    audioTrackPublishDefaults = AudioTrackPublishDefaults(
        audioBitrate = 20_000,
        dtx = true,
    ),
    videoTrackPublishDefaults = VideoTrackPublishDefaults(
        videoEncoding = VideoPreset169.HD.encoding,
    ),
)
var room = LiveKit.connect(
    ...
    roomOptions = options,
)

// option 2: create tracks manually
val localParticipant = room.localParticipant
val audioTrack = localParticipant.createAudioTrack("audio")
localParticipant.publishAudioTrack(audioTrack)
val videoTrack = localParticipant.createVideoTrack("video", LocalVideoTrackOptions(
    CameraPosition.FRONT,
    VideoPreset.QHD.capture
))
localParticipant.publishVideoTrack(videoTrack)
// option 1: set room defaults
var room = Room(
  roomOptions: RoomOptions(
    defaultCameraCaptureOptions: CameraCaptureOptions(
      deviceId: '',
      cameraPosition: CameraPosition.front,
      params: VideoParametersPresets.h720_169,
    ),
    defaultAudioCaptureOptions: AudioCaptureOptions(
      deviceId: '',
      noiseSuppression: true,
      echoCancellation: true,
      autoGainControl: true,
      highPassFilter: true,
      typingNoiseDetection: true,
    ),
    defaultVideoPublishOptions: VideoPublishOptions(
      videoEncoding: VideoParametersPresets.h720_169.encoding,
      videoSimulcastLayers: [
        VideoParametersPresets.h180_169,
        VideoParametersPresets.h360_169,
      ],
    ),
    defaultAudioPublishOptions: AudioPublishOptions(
      dtx: true,
    ),
  ),
);

// option 2: create tracks individually
try {
  // video will fail when running in the iOS simulator
  var localVideo = await LocalVideoTrack.createCameraTrack(LocalVideoTrackOptions(
    position: CameraPosition.front,
    params: VideoParametersPresets.h720_169,
  ));
  await room.localParticipant.publishVideoTrack(localVideo);
} catch (e) {
  print('could not publish video: $e');
}
var localAudio = await LocalAudioTrack.createTrack();
await room.localParticipant.publishAudioTrack(localAudio);
The Go SDK makes it simple to publish static files or media from other sources to a room.
To publish files, they must first be encoded into the right format.
// publishing non-simulcast track from file
file := "video.ivf"
track, err := lksdk.NewLocalFileTrack(file,
    // control FPS to ensure synchronization
    lksdk.FileTrackWithFrameDuration(33*time.Millisecond),
    lksdk.FileTrackWithOnWriteComplete(func() { fmt.Println("track finished") }),
)
if err != nil {
    return err
}
if _, err = room.LocalParticipant.PublishTrack(track, &lksdk.TrackPublicationOptions{
    Name:   name,
    Source: livekit.TrackSource_CAMERA,
}); err != nil {
    return err
}

// publishing simulcast track from custom sample providers
codec := &webrtc.RTPCodecCapability{
    MimeType:  "video/vp8",
    ClockRate: 90000,
    RTCPFeedback: []webrtc.RTCPFeedback{
        {Type: webrtc.TypeRTCPFBNACK},
        {Type: webrtc.TypeRTCPFBNACK, Parameter: "pli"},
    },
}
var tracks []*lksdk.LocalSampleTrack
track1, err := lksdk.NewLocalSampleTrack(codec,
    lksdk.WithSimulcast("test-video", &livekit.VideoLayer{
        Quality: livekit.VideoQuality_HIGH,
        Width:   1280,
        Height:  720,
    }),
)
if err != nil {
    panic(err)
}
if err := track1.StartWrite(yourSampleProvider, nil); err != nil {
    panic(err)
}
tracks = append(tracks, track1)
// also add tracks for VideoQuality_MEDIUM and VideoQuality_LOW...

// then publish together
_, err = room.LocalParticipant.PublishSimulcastTrack(tracks, &lksdk.TrackPublicationOptions{
    Name:   name,
    Source: livekit.TrackSource_CAMERA,
})
var videoTrack = Client.CreateLocalVideoTrack(new VideoCaptureOptions() {
    FacingMode = new ConstrainDOMString() {
        Ideal = new string[] { FacingMode.User }
    },
    Resolution = VideoPresets.H720.GetResolution()
});
yield return videoTrack;

var audioTrack = Client.CreateLocalAudioTrack(new AudioCaptureOptions() {
    EchoCancellation = true,
    NoiseSuppression = new ConstrainBoolean() {
        Ideal = true
    }
});
yield return audioTrack;

yield return Room.LocalParticipant.PublishTrack(videoTrack.ResolveValue);
yield return Room.LocalParticipant.PublishTrack(audioTrack.ResolveValue);
Audio on mobile
When using audio with the native iOS or Android SDKs, it's important to consider the device's audio stack so that your app behaves well alongside other apps.
With the Flutter SDK, the audio session is managed automatically.
On iOS, LiveKit provides automatic management of AVAudioSession. We ensure the minimal set of audio permissions is acquired; for example, the microphone is turned off when it isn't publishing.
It may be desirable to override the default settings by setting LiveKit.onShouldConfigureAudioSession:
LiveKit.onShouldConfigureAudioSession = { (_ newState: AudioTrack.TracksState,
                                           _ oldState: AudioTrack.TracksState) -> Void in
    let config = RTCAudioSessionConfiguration.webRTC()
    config.category = AVAudioSession.Category.playAndRecord.rawValue
    config.mode = AVAudioSession.Mode.videoChat.rawValue
    config.categoryOptions = AVAudioSession.CategoryOptions.duckOthers
    LiveKit.configureAudioSession(config, setActive: newState != .none)
}
Some combinations of `Category`, `Mode`, `RouteSharingPolicy`, and `CategoryOptions` are incompatible, and `AVFoundation`'s documentation is sparse, which makes it tricky to choose the right combination of values for these properties.
On Android, you'll want to request audio focus from AudioManager:
val audioManager = getSystemService(AUDIO_SERVICE) as AudioManager
with(audioManager) {
    isSpeakerphoneOn = true
    isMicrophoneMute = false
    mode = AudioManager.MODE_IN_COMMUNICATION
}
val result = audioManager.requestAudioFocus(
    focusChangeListener,
    AudioManager.STREAM_VOICE_CALL,
    AudioManager.AUDIOFOCUS_GAIN,
)
and after you're done, reset:
with(audioManager) {
    isSpeakerphoneOn = false
    isMicrophoneMute = true
    abandonAudioFocus(focusChangeListener)
    mode = AudioManager.MODE_NORMAL
}
Mute and unmute
You can mute any track to stop it from sending data to the server. When a track is muted, LiveKit will trigger a TrackMuted event on all participants in the room. You can use this event to update your app's UI and reflect the correct state to all users in the room.
Mute/unmute a track using its corresponding LocalTrackPublication object.
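For example, with the JS SDK you might look up the camera publication and mute it. This is a minimal sketch; the publication lookup method name has varied across livekit-client versions:

import { Room, Track } from 'livekit-client'

async function muteCamera(room: Room) {
  // find the local camera publication, then mute it
  const publication = room.localParticipant.getTrackPublication(Track.Source.Camera)
  await publication?.mute()   // other participants receive TrackMuted
  // calling publication?.unmute() later resumes sending and fires TrackUnmuted
}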
Video simulcast
With simulcast, a client publishes multiple versions of the same video track with varying bitrate profiles. This allows LiveKit to dynamically forward the stream that's most appropriate given each receiving participant's available bandwidth and desired resolution.
Adaptive layer selection takes place automatically in the SFU when the server detects that a participant is bandwidth-constrained. When the participant's bandwidth improves, the server upgrades subscribed streams to higher resolutions.
For more information about simulcast, see an introduction to WebRTC simulcast.
Simulcast is supported in all of LiveKit's client SDKs. It's enabled by default, and can be disabled in publish settings.
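For instance, in the JS SDK simulcast is controlled through publish defaults (a sketch; the simulcast flag is part of livekit-client's publish options):

import { Room } from 'livekit-client'

const room = new Room({
  publishDefaults: {
    simulcast: false, // opt out of simulcast for published video
  },
})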
Dynamic broadcasting
LiveKit has end-to-end optimizations built in to reduce bandwidth use. Dynamic broadcasting (dynacast) is an advanced feature that pauses publishing of video layers that no subscriber is consuming. It works with simulcasted video too: if subscribers are only consuming the medium and low resolution streams, the publisher pauses the high-resolution layer.
This feature can be enabled by setting dynacast: true in Room options.
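For example, with the JS SDK:

import { Room } from 'livekit-client'

const room = new Room({
  dynacast: true, // pause unused video layers automatically
})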