Overview
LiveKit includes a simple and consistent method to publish the user's camera and microphone, regardless of the device or browser they are using. In all cases, LiveKit displays the correct indicators when recording is active and acquires the necessary permissions from the user.
```typescript
// Enables the camera and publishes it to a new video track
room.localParticipant.setCameraEnabled(true);

// Enables the microphone and publishes it to a new audio track
room.localParticipant.setMicrophoneEnabled(true);
```
Device permissions
In native and mobile apps, you typically need to acquire consent from the user to access the microphone or camera. LiveKit integrates with the system privacy settings to record permission and display the correct indicators when audio or video capture is active.
For web browsers, the user is automatically prompted to grant camera and microphone permissions the first time your app attempts to access them; no additional configuration is required.
On iOS, add these entries to your `Info.plist`:
```xml
<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) uses your camera</string>
<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) uses your microphone</string>
```
To enable background audio, you must also add the "Background Modes" capability with "Audio, AirPlay, and Picture in Picture" selected.
Your `Info.plist` should have:
```xml
<key>UIBackgroundModes</key>
<array>
  <string>audio</string>
</array>
```
Mute and unmute
You can mute any track to stop it from sending data to the server. When a track is muted, LiveKit will trigger a `TrackMuted` event on all participants in the room. You can use this event to update your app's UI and reflect the correct state to all users in the room.
Mute/unmute a track using its corresponding `LocalTrackPublication` object.
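For example, with the Python SDK you can listen for mute state changes on the room to keep your UI in sync. This is a minimal sketch; the `track_muted` and `track_unmuted` event names and handler signatures are assumptions based on the Python `Room` event API, so verify them against your SDK version.

```python
from livekit import rtc

def register_mute_handlers(room: rtc.Room) -> None:
    # Assumed event names and handler signatures; check your SDK version.
    @room.on("track_muted")
    def on_track_muted(participant: rtc.Participant, publication: rtc.TrackPublication):
        print(f"{participant.identity} muted {publication.sid}")

    @room.on("track_unmuted")
    def on_track_unmuted(participant: rtc.Participant, publication: rtc.TrackPublication):
        print(f"{participant.identity} unmuted {publication.sid}")
```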
Track permissions
By default, any published track can be subscribed to by all participants. However, publishers can restrict who can subscribe to their tracks using Track Subscription Permissions:
```swift
localParticipant.setTrackSubscriptionPermissions(
    allParticipantsAllowed: false,
    trackPermissions: [
        ParticipantTrackPermission(participantSid: "allowed-sid", allTracksAllowed: true)
    ]
)
```
Publishing from backend
You may also publish audio and video tracks from a backend process, which can be consumed just like any camera or microphone track. The LiveKit Agents framework makes it easy to add a programmable participant to any room, and publish media such as synthesized speech or video.
LiveKit also includes complete SDKs for server environments in Go, Rust, Python, and Node.js.
You can also publish media using the LiveKit CLI.
Publishing audio tracks
You can publish audio by creating an `AudioSource` and publishing it as a track.
Audio streams carry raw PCM data at a specified sample rate and channel count. Publishing audio involves splitting the stream into audio frames of a configurable length. An internal buffer holds 50 ms of queued audio to send to the realtime stack. The `capture_frame` method, used to send new frames, blocks until the buffer has accepted the entire frame, which makes interruption handling easier.
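For example, a long clip can be cut off by cancelling the task that is calling `capture_frame`. The sketch below also assumes an `AudioSource.clear_queue()` method for discarding audio that is still buffered; treat that call as an assumption and check your SDK version.

```python
import asyncio
from livekit import rtc

async def play_frames(source: rtc.AudioSource, frames: list[rtc.AudioFrame]) -> None:
    for frame in frames:
        # Blocks until the internal 50 ms buffer has room for this frame.
        await source.capture_frame(frame)

def interrupt(playback: asyncio.Task, source: rtc.AudioSource) -> None:
    # Stop feeding new frames, then drop whatever is still queued.
    playback.cancel()
    source.clear_queue()  # assumed API for flushing buffered audio
```

Start playback with `asyncio.create_task(play_frames(source, frames))` and call `interrupt(...)` when output should stop early.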
To publish an audio track, you need to determine the sample rate, number of channels, and frame length (in samples) beforehand. In the following example, the agent transmits a constant 16-bit sine wave at 48 kHz in 10 ms frames:
```python
import numpy as np
from livekit import agents, rtc

SAMPLE_RATE = 48000
NUM_CHANNELS = 1  # mono audio
AMPLITUDE = 2 ** 8 - 1
SAMPLES_PER_CHANNEL = 480  # 10 ms at 48 kHz

async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    source = rtc.AudioSource(SAMPLE_RATE, NUM_CHANNELS)
    track = rtc.LocalAudioTrack.create_audio_track("example-track", source)
    # since the agent is a participant, our audio I/O is its "microphone"
    options = rtc.TrackPublishOptions(source=rtc.TrackSource.SOURCE_MICROPHONE)
    # ctx.agent is an alias for ctx.room.local_participant
    publication = await ctx.agent.publish_track(track, options)

    frequency = 440

    async def _sinewave():
        audio_frame = rtc.AudioFrame.create(SAMPLE_RATE, NUM_CHANNELS, SAMPLES_PER_CHANNEL)
        audio_data = np.frombuffer(audio_frame.data, dtype=np.int16)

        total_samples = 0
        while True:
            time = (total_samples + np.arange(SAMPLES_PER_CHANNEL)) / SAMPLE_RATE
            sinewave = (AMPLITUDE * np.sin(2 * np.pi * frequency * time)).astype(np.int16)
            np.copyto(audio_data, sinewave)

            # send this frame to the track
            await source.capture_frame(audio_frame)
            total_samples += SAMPLES_PER_CHANNEL

    await _sinewave()
```
When streaming finite audio (for example, from a file), make sure the frame length isn't longer than the number of samples left to stream; otherwise, the end of the frame's buffer is filled with noise.
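For instance, here is a minimal sketch of streaming a WAV file that sizes the final frame to the remaining samples. It assumes a 16-bit mono file recorded at the same sample rate as the `AudioSource`; `stream_wav` and `SAMPLES_PER_FRAME` are illustrative names, not part of the SDK.

```python
import wave

import numpy as np
from livekit import rtc

SAMPLE_RATE = 48000
NUM_CHANNELS = 1
SAMPLES_PER_FRAME = 480  # 10 ms at 48 kHz

async def stream_wav(source: rtc.AudioSource, path: str) -> None:
    # Assumes a 16-bit mono WAV that matches the AudioSource's sample rate.
    with wave.open(path, "rb") as wav:
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

    for start in range(0, len(samples), SAMPLES_PER_FRAME):
        chunk = samples[start : start + SAMPLES_PER_FRAME]
        # Size the last frame to the samples that remain so the frame's
        # buffer isn't padded with leftover (noisy) data.
        frame = rtc.AudioFrame.create(SAMPLE_RATE, NUM_CHANNELS, len(chunk))
        np.copyto(np.frombuffer(frame.data, dtype=np.int16), chunk)
        await source.capture_frame(frame)
```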
Audio examples
For audio examples using the LiveKit SDK, see the following in the GitHub repository:
- Manipulate TTS audio: Use the TTS node to speed up audio output.
- Echo agent: Echo user audio back to them.
- Sync TTS transcription: Uses manual subscription, transcription forwarding, and manually publishes audio output.
Publishing video tracks
Agents publish data to their tracks as a continuous live feed. Video streams can transmit data in any of 11 buffer encodings. When publishing video tracks, you need to establish the frame rate and buffer encoding of the video beforehand.
In this example, the agent connects to the room and starts publishing a solid color frame at 10 frames per second (FPS). Copy the following code into your `entrypoint` function:
```python
import asyncio

from livekit import rtc
from livekit.agents import JobContext

WIDTH = 640
HEIGHT = 480

source = rtc.VideoSource(WIDTH, HEIGHT)
track = rtc.LocalVideoTrack.create_video_track("example-track", source)
options = rtc.TrackPublishOptions(
    # since the agent is a participant, our video I/O is its "camera"
    source=rtc.TrackSource.SOURCE_CAMERA,
    simulcast=True,
    # when modifying encoding options, max_framerate and max_bitrate must both be set
    video_encoding=rtc.VideoEncoding(
        max_framerate=30,
        max_bitrate=3_000_000,
    ),
    video_codec=rtc.VideoCodec.H264,
)
publication = await ctx.agent.publish_track(track, options)

# this color is encoded as ARGB. when passed to VideoFrame it gets re-encoded.
COLOR = [255, 255, 0, 0]  # FFFF0000 RED

async def _draw_color():
    argb_frame = bytearray(WIDTH * HEIGHT * 4)
    while True:
        await asyncio.sleep(0.1)  # 10 fps
        argb_frame[:] = COLOR * WIDTH * HEIGHT
        frame = rtc.VideoFrame(WIDTH, HEIGHT, rtc.VideoBufferType.RGBA, argb_frame)
        # send this frame to the track
        source.capture_frame(frame)

asyncio.create_task(_draw_color())
```
- Although the published frame is static, it's still necessary to stream it continuously so that participants who join the room after the first frame was sent still receive it.
- Unlike audio, video `capture_frame` doesn't keep an internal buffer.
LiveKit can translate between video buffer encodings automatically. `VideoFrame` provides the current video buffer type and a method to convert it to any of the other encodings:
```python
async def handle_video(track: rtc.Track):
    video_stream = rtc.VideoStream(track)
    async for event in video_stream:
        video_frame = event.frame
        current_type = video_frame.type
        frame_as_bgra = video_frame.convert(rtc.VideoBufferType.BGRA)
        # [...]
    await video_stream.aclose()

@ctx.room.on("track_subscribed")
def on_track_subscribed(
    track: rtc.Track,
    publication: rtc.TrackPublication,
    participant: rtc.RemoteParticipant,
):
    if track.kind == rtc.TrackKind.KIND_VIDEO:
        asyncio.create_task(handle_video(track))
```
Audio and video synchronization
`AVSynchronizer` is currently only available in Python.
While WebRTC handles A/V sync natively, some scenarios require manual synchronization, such as synchronizing generated video with voice output.
The `AVSynchronizer` utility helps maintain synchronization by aligning the first audio and video frames. Subsequent frames are automatically synchronized based on the configured video FPS and audio sample rate.
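The following is a minimal sketch of publishing generated audio and video through the synchronizer with the Python SDK. The `AVSynchronizer` constructor arguments and `push()` method are assumptions based on the LiveKit Python SDK examples, and `publish_synced` and the placeholder frame content are illustrative only; see the examples linked below for authoritative usage.

```python
from livekit import rtc

WIDTH, HEIGHT = 640, 480
SAMPLE_RATE, NUM_CHANNELS = 48000, 1
FPS = 30
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # one audio frame per video frame

async def publish_synced(ctx) -> None:
    audio_source = rtc.AudioSource(SAMPLE_RATE, NUM_CHANNELS)
    video_source = rtc.VideoSource(WIDTH, HEIGHT)
    audio_track = rtc.LocalAudioTrack.create_audio_track("synced-audio", audio_source)
    video_track = rtc.LocalVideoTrack.create_video_track("synced-video", video_source)
    await ctx.agent.publish_track(
        audio_track, rtc.TrackPublishOptions(source=rtc.TrackSource.SOURCE_MICROPHONE)
    )
    await ctx.agent.publish_track(
        video_track, rtc.TrackPublishOptions(source=rtc.TrackSource.SOURCE_CAMERA)
    )

    # Assumed constructor arguments and push() method; check the linked examples.
    av_sync = rtc.AVSynchronizer(
        audio_source=audio_source,
        video_source=video_source,
        video_fps=FPS,
    )

    while True:
        # Placeholder content: a black video frame and a matching audio frame per tick.
        video_frame = rtc.VideoFrame(
            WIDTH, HEIGHT, rtc.VideoBufferType.RGBA, bytearray(WIDTH * HEIGHT * 4)
        )
        audio_frame = rtc.AudioFrame.create(SAMPLE_RATE, NUM_CHANNELS, SAMPLES_PER_FRAME)
        await av_sync.push(video_frame)
        await av_sync.push(audio_frame)
```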
Audio and video synchronization: examples that demonstrate how to synchronize video and audio streams using the `AVSynchronizer` utility.