
Data tracks

Use data tracks to send low-latency, high-bandwidth data between participants.

Overview

Data tracks provide low-latency, lossy transport for continuous data between participants in a LiveKit room. They're designed for use cases where staying realtime matters more than guaranteed delivery, such as streaming sensor readings, robot teleoperation commands, or realtime telemetry.

Data tracks prioritize realtime delivery: each frame is sent once, with no retransmission. Frames that arrive out of order are reordered on the subscriber side, so they're delivered in the order they were published. For low-level control over individual packet delivery, use data packets.

Data tracks are lighter weight to publish and subscribe to than media tracks. There's no codec or processing overhead, so you can publish many data tracks per participant, such as one track per sensor or actuator. Once published, a data track is visible to all participants in the room, including those who connect after the track is published.

Data tracks support end-to-end encryption. If E2EE is enabled for the room, data track frames are encrypted and decrypted automatically. Data tracks are also automatically re-published and re-subscribed to after a reconnection.

Publishing data tracks

A participant must have the canPublishData grant to publish data tracks.

A participant publishes a data track by providing a name. The name must be 1–256 characters and unique among that participant's published data tracks. After publishing, the participant receives a local data track object that can be used to push frames. LiveKit Server selectively forwards frames only to participants that subscribe, so bandwidth isn't wasted broadcasting to uninterested consumers.
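These constraints can be checked client-side before calling the publish API. A minimal sketch in plain Python (no SDK involved; `validate_track_name` is a hypothetical helper, not part of LiveKit):

```python
def validate_track_name(name: str, published_names: set[str]) -> None:
    """Raise ValueError if `name` violates the data track naming rules."""
    # Names must be 1-256 characters long.
    if not 1 <= len(name) <= 256:
        raise ValueError("track name must be 1-256 characters")
    # Names must be unique among this participant's published data tracks.
    if name in published_names:
        raise ValueError(f"track name {name!r} is already published")

published = {"imu", "lidar_front"}
validate_track_name("wheel_odometry", published)  # passes silently
```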

JavaScript:

const track = await room.localParticipant.publishDataTrack({
  name: 'my_sensor_data',
});
// Push data using the returned LocalDataTrack
const payload = new Uint8Array(256).fill(0xFA);
track.tryPush({ payload });

Python:

track = await room.local_participant.publish_data_track(name="my_sensor_data")
payload = bytes([0xFA] * 256)
track.try_push(rtc.DataTrackFrame(payload=payload))

Rust:

let track = room
    .local_participant()
    .publish_data_track("my_sensor_data")
    .await?;
track.try_push(DataTrackFrame::new(vec![0xFA; 256]))?;

C++:

auto publish_result =
    room->localParticipant()->publishDataTrack("my_sensor_data");
if (!publish_result) {
  const auto &error = publish_result.error();
  LK_LOG_ERROR("Failed to publish data track: code={} message={}",
               static_cast<std::uint32_t>(error.code), error.message);
  return;
}
std::shared_ptr<livekit::LocalDataTrack> track = publish_result.value();
livekit::DataTrackFrame frame;
frame.payload = std::vector<std::uint8_t>(256, 0xFA);
auto push_result = track->tryPush(frame);
if (!push_result) {
  const auto &error = push_result.error();
  LK_LOG_ERROR("Failed to push data frame: code={} message={}",
               static_cast<std::uint32_t>(error.code), error.message);
}

Unity (C#):

var publishInstruction = room.LocalParticipant.PublishDataTrack("my_sensor_data");
yield return publishInstruction;
if (publishInstruction.IsError) {
    Debug.LogError($"Failed to publish track: {publishInstruction.Error}");
    yield break;
}
var track = publishInstruction.Track;
var payload = new byte[256];
Array.Fill(payload, (byte)0xFA);
var frame = new DataTrackFrame(payload);
try {
    track.TryPush(frame);
} catch (PushFrameError e) {
    Debug.LogError($"Failed to push frame! {e.Message}");
}

When a data track is no longer needed, unpublish it to notify other participants and release resources:

JavaScript:

await track.unpublish();

Python:

track.unpublish()

Rust:

track.unpublish();

C++:

track->unpublishDataTrack();

Unity (C#):

track.Unpublish();

User timestamps

Each frame can carry an optional 64-bit user timestamp, which is an application-defined value set by the publisher. This is useful for measuring end-to-end latency and correlating frames with real-world events, which is especially important for robotics and telemetry use cases. In embedded applications, the timestamp can reflect when a sensor actually sampled the value rather than when the frame was sent.

Set the timestamp when pushing a frame on the publisher side:

JavaScript:

track.tryPush({
  payload: sensorData,
  userTimestamp: BigInt(Date.now()),
});

Python:

frame = rtc.DataTrackFrame(
    payload=sensor_data,
    user_timestamp=int(time.time() * 1000),
)
track.try_push(frame)

Rust:

let frame = DataTrackFrame::new(sensor_data).with_user_timestamp_now();
track.try_push(frame)?;

C++:

auto user_timestamp = static_cast<std::uint64_t>(
    std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::system_clock::now().time_since_epoch())
        .count());
livekit::DataTrackFrame frame;
frame.payload = sensor_data;
frame.user_timestamp = user_timestamp;
auto push_result = track->tryPush(frame);
if (!push_result) {
  const auto &error = push_result.error();
  LK_LOG_ERROR("Failed to push data frame: code={} message={}",
               static_cast<std::uint32_t>(error.code), error.message);
}

Unity (C#):

var frame = new DataTrackFrame(sensor_data)
    .WithUserTimestampNow();
track.TryPush(frame);

On the subscriber side, read the timestamp from the received frame to calculate latency:

JavaScript:

for await (const frame of stream) {
  if (frame.userTimestamp) {
    const latencyMs = Date.now() - Number(frame.userTimestamp);
    console.log(`Latency: ${latencyMs}ms`);
  }
}

Python:

async for frame in subscription:
    if frame.user_timestamp is not None:
        latency_ms = int(time.time() * 1000) - frame.user_timestamp
        print(f"Latency: {latency_ms}ms")

Rust:

while let Some(frame) = stream.next().await {
    if let Some(latency) = frame.duration_since_timestamp() {
        println!("Latency: {:?}", latency);
    }
}

C++:

const auto callback_id = room->addOnDataFrameCallback(
    "sensor-publisher", "my_sensor_data",
    [](const std::vector<std::uint8_t> & /*payload*/,
       std::optional<std::uint64_t> user_timestamp) {
      if (!user_timestamp) {
        return;
      }
      const auto now_us = static_cast<std::uint64_t>(
          std::chrono::duration_cast<std::chrono::microseconds>(
              std::chrono::system_clock::now().time_since_epoch())
              .count());
      LK_LOG_INFO("Latency: {}us", now_us - *user_timestamp);
    });
// Later, when you no longer want frames:
room->removeOnDataFrameCallback(callback_id);

Unity (C#):

while (!subscription.IsEos) {
    var frameInstruction = subscription.ReadFrame();
    yield return frameInstruction;
    if (frameInstruction.IsCurrentReadDone) {
        var frame = frameInstruction.Frame;
        Debug.Log($"Latency: {frame.DurationSinceTimestamp()}ms");
    }
}

Timestamp considerations

User timestamps rely on synchronized clocks between publisher and subscriber. The calculated latency is only as accurate as the clock synchronization between the two participants. For application-specific round-trip metrics, consider storing timestamps directly in the payload and matching request and response IDs.
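As a sketch of the payload-embedded approach, the publisher can prepend a request ID and a monotonic send time to each request, then compute the round-trip time when the peer echoes the payload back. The 12-byte header layout below is an illustrative choice, not part of any LiveKit wire format:

```python
import struct
import time

# Illustrative framing: 4-byte request ID, 8-byte send time in nanoseconds.
HEADER = struct.Struct(">IQ")

def make_request(request_id: int, body: bytes, now_ns: int) -> bytes:
    # Prepend the ID and send time so the echo carries them back.
    return HEADER.pack(request_id, now_ns) + body

def rtt_from_echo(echo: bytes, now_ns: int) -> tuple[int, float]:
    # The responder echoes the request payload unchanged, so both
    # timestamps come from the same (local, monotonic) clock.
    request_id, sent_ns = HEADER.unpack_from(echo)
    return request_id, (now_ns - sent_ns) / 1e6  # RTT in milliseconds

req = make_request(7, b"ping", time.monotonic_ns())
# ... peer echoes `req` back over its own data track ...
request_id, rtt_ms = rtt_from_echo(req, time.monotonic_ns())
```

Because both timestamps are read from the publisher's own monotonic clock, the measurement is immune to clock skew between participants.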

Handling push errors

tryPush is a non-blocking call that can fail in two cases:

  • Track unpublished: The track has been unpublished or the room has disconnected.
  • Frame dropped: The outgoing buffer is full and the frame can't be queued.

Because data tracks use lossy delivery, occasional dropped frames are expected under high load. Design your application to tolerate gaps rather than treating every drop as an error.
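One way to apply this is to count drops and watch the rate, instead of treating each failed push as an error. A plain-Python sketch of the pattern, with a bounded buffer standing in for the SDK's outgoing queue (`LossyBuffer` and its capacity are illustrative, not LiveKit API):

```python
from collections import deque

class LossyBuffer:
    """Bounded outgoing buffer: try_push drops the frame when full."""

    def __init__(self, capacity: int = 16):
        self.queue: deque[bytes] = deque()
        self.capacity = capacity
        self.pushed = 0
        self.dropped = 0

    def try_push(self, frame: bytes) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1  # expected under load; count, don't raise
            return False
        self.queue.append(frame)
        self.pushed += 1
        return True

    @property
    def drop_rate(self) -> float:
        total = self.pushed + self.dropped
        return self.dropped / total if total else 0.0

buf = LossyBuffer(capacity=4)
for i in range(10):
    buf.try_push(bytes([i]))
# 4 frames queued, 6 dropped: alert only if drop_rate stays high.
```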

Choosing a frame size

Frames can be any size, but larger frames are split into multiple WebRTC data channel packets for transmission. Because data tracks use lossy delivery, if any packet in a multi-packet frame is lost, the entire frame is lost. Smaller frames are more resilient to packet loss.

For best reliability, keep frame payloads under 1200 bytes to fit within a single data channel packet. If your data is larger, consider whether occasional frame loss is acceptable for your use case.
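The impact of multi-packet frames can be estimated directly: a frame survives only if every one of its packets survives. A short sketch of that arithmetic (the 1200-byte packet payload matches the guidance above; the packet-loss rate is an assumed input):

```python
import math

MTU_PAYLOAD = 1200  # bytes per data channel packet

def frame_loss_probability(frame_bytes: int, packet_loss: float) -> float:
    """P(frame lost) when a frame is split across independently lossy packets."""
    packets = math.ceil(frame_bytes / MTU_PAYLOAD)
    # The frame survives only if all of its packets survive.
    return 1.0 - (1.0 - packet_loss) ** packets

# At 1% packet loss, a single-packet frame loses ~1% of frames,
# while a 6000-byte frame (5 packets) loses ~4.9%.
single = frame_loss_probability(1200, 0.01)
large = frame_loss_probability(6000, 0.01)
```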

Subscribing to data tracks

Any participant in a room can subscribe to data tracks published by other participants. Subscribe to a remote data track to receive its frames as they arrive.

Note

Register room event handlers before calling room.connect(). Events like DataTrackPublished can fire during the connection handshake, and handlers registered afterward miss them.

Listening for published tracks

When a remote participant publishes a data track, your client is notified through a room event. In some SDKs, you can also query existing data track publications on a remote participant directly.

JavaScript:

import { RoomEvent } from 'livekit-client';

room.on(RoomEvent.DataTrackPublished, (track) => {
  console.log(`${track.publisherIdentity} published "${track.info.name}"`);
});
room.on(RoomEvent.DataTrackUnpublished, (sid) => {
  console.log(`Data track ${sid} was unpublished`);
});

Each remote participant also exposes a dataTracks map you can use to look up tracks by name:

// Get a track that's already published
const track = remoteParticipant.dataTracks.get('my_sensor_data');
// Or wait for it to be published
const deferredTrack = await remoteParticipant.dataTracks.getDeferred('my_sensor_data');

Python:

@room.on("data_track_published")
def on_data_track_published(track: rtc.RemoteDataTrack):
    print(f"{track.publisher_identity} published '{track.info.name}'")

@room.on("data_track_unpublished")
def on_data_track_unpublished(sid: str):
    print(f"Data track {sid} was unpublished")

Rust:

while let Some(event) = rx.recv().await {
    match event {
        RoomEvent::DataTrackPublished(track) => {
            println!(
                "{} published '{}'",
                track.publisher_identity(),
                track.info().name()
            );
        }
        RoomEvent::DataTrackUnpublished(sid) => {
            println!("Data track {sid} was unpublished");
        }
        _ => {}
    }
}

C++:

class DataTrackRoomDelegate : public livekit::RoomDelegate {
 public:
  void onDataTrackPublished(
      livekit::Room &, const livekit::DataTrackPublishedEvent &event) override {
    LK_LOG_INFO("{} published '{}'", event.track->publisherIdentity(),
                event.track->info().name);
  }

  void onDataTrackUnpublished(
      livekit::Room &,
      const livekit::DataTrackUnpublishedEvent &event) override {
    LK_LOG_INFO("Data track {} was unpublished", event.sid);
  }
};

DataTrackRoomDelegate delegate;
room->setDelegate(&delegate);

Unity (C#):

room.DataTrackPublished += (track) => {
    Debug.Log($"{track.PublisherIdentity} published '{track.Info.Name}'");
};
room.DataTrackUnpublished += (sid) => {
    Debug.Log($"Data track {sid} was unpublished");
};

Reading frames

Once you have a reference to a RemoteDataTrack, call subscribe() to begin receiving frames. This returns a stream that yields DataTrackFrame objects. In C++, you can also register a room-level callback keyed by publisher identity and track name. The ping_pong example uses this callback-based approach.

JavaScript:

const stream = track.subscribe();
for await (const frame of stream) {
  console.log('Received frame:', frame.payload);
}

React:

// Assuming `track` is a `RemoteDataTrack`:
useEffect(() => {
  const controller = new AbortController();
  const stream = track.subscribe({ signal: controller.signal });
  (async () => {
    for await (const frame of stream) {
      console.log('Received frame:', frame.payload);
    }
  })();
  return () => {
    controller.abort();
  };
}, [track]);

Python:

stream = track.subscribe()
async for frame in stream:
    print(f"Received frame: {frame.payload}")

Rust:

let mut stream = track.subscribe().await?;
while let Some(frame) = stream.next().await {
    println!("Received frame: {:?}", frame.payload());
}

Dropping the stream closes that subscription. If no other subscriptions remain on the same track, the underlying connection to the server is also closed.

C++:

const auto callback_id = room->addOnDataFrameCallback(
    "sensor-publisher", "my_sensor_data",
    [](const std::vector<std::uint8_t> &payload,
       std::optional<std::uint64_t> /* user_timestamp */) {
      std::string payload_str(payload.begin(), payload.end());
      LK_LOG_INFO("Received frame: {}", payload_str);
    });
// Later, when you no longer want frames:
room->removeOnDataFrameCallback(callback_id);

Unity (C#):

var stream = track.Subscribe();
while (!stream.IsEos) {
    var frameInstruction = stream.ReadFrame();
    yield return frameInstruction;
    if (frameInstruction.IsCurrentReadDone) {
        var frame = frameInstruction.Frame;
        Debug.Log($"Received frame: {BitConverter.ToString(frame.Payload)}");
    }
}

Handling multiple subscriptions

You can call subscribe() more than once on the same track to fan out frames to multiple consumers. For example, one task could log data while another renders it. Internally, only the first call triggers server signaling, while subsequent calls reuse the existing subscription pipeline.

New subscriptions only receive frames published after the subscription is established.

Configuring buffer size

Each data track subscription independently maintains an internal buffer of frames. When frames arrive faster than they're consumed, the buffer fills up and additional frames are dropped. In C++, this option applies to RemoteDataTrack::subscribe(...).

Choosing the right buffer size depends on your use case:

  • A buffer that's too small drops frames frequently, even during brief processing pauses. This can cause gaps in sensor data or missed commands.
  • A buffer that's too large allows memory usage to grow without limit if the consumer can't keep up. This is especially dangerous for long-running applications on lower-memory devices like robots or IoT hardware.

The default buffer size is 16 frames. This is reasonable for low- or moderate-frequency data, but for high-frequency use cases (hundreds of frames per second or more) it's likely not sufficient. Measure your publisher's data rate and your subscriber's consumption rate under realistic conditions, then choose a buffer size that absorbs normal jitter without growing indefinitely.

JavaScript:

const stream = track.subscribe({ bufferSize: 64 });

Python:

stream = track.subscribe(buffer_size=64)

Rust:

let mut stream = track
    .subscribe_with_options(DataTrackSubscribeOptions::new().with_buffer_size(64))
    .await?;

C++:

auto sub_result = track->subscribe({ .buffer_size = 64 });
if (!sub_result) {
  const auto &error = sub_result.error();
  LK_LOG_ERROR("Subscribe failed: code={} message={}",
               static_cast<std::uint32_t>(error.code), error.message);
  return;
}
std::shared_ptr<livekit::DataTrackStream> stream = sub_result.value();

Unity (C#):

var options = new DataTrackSubscribeOptions { BufferSize = 64 };
var stream = track.Subscribe(options);
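As a rough sizing aid, the buffer needs to absorb the frames that accumulate during the longest pause your consumer can hit, plus some headroom for jitter. A sketch of that arithmetic (the headroom factor is an assumed tuning knob, not a LiveKit parameter):

```python
import math

def required_buffer(frame_rate_hz: float, worst_pause_s: float,
                    headroom: float = 1.5) -> int:
    """Frames that accumulate during the longest expected consumer pause."""
    return math.ceil(frame_rate_hz * worst_pause_s * headroom)

# A 200 Hz sensor whose consumer can stall for 250 ms needs ~75 frames
# of buffer, well above the 16-frame default.
size = required_buffer(200, 0.25)
```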