C++ quickstart

Get started with LiveKit using the C++ SDK.

The following guide uses the C++ SDK to build an example app that connects to a room and publishes audio and video tracks.

Install the C++ SDK

These instructions use the prebuilt SDK release. If you want to build the SDK from source, see the C++ SDK repository.

The C++ example collection in GitHub includes a CMake helper script, cmake/LiveKitSDK.cmake, that automatically downloads a prebuilt LiveKit C++ SDK release from GitHub during CMake configure.

  1. Include the following in your app's CMakeLists.txt file:

    list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
    include(LiveKitSDK)
    livekit_sdk_setup(
      VERSION "${LIVEKIT_SDK_VERSION}"
      SDK_DIR "${CMAKE_BINARY_DIR}/_deps/livekit-sdk"
      GITHUB_TOKEN "$ENV{GITHUB_TOKEN}"
    )
    find_package(LiveKit CONFIG REQUIRED)
    # add_executable(example example.cpp)
    # target_link_libraries(example PRIVATE LiveKit::livekit)
  2. Copy the cmake/LiveKitSDK.cmake helper file into your project.

  3. Configure and build your project:

    cmake -B build -DLIVEKIT_SDK_VERSION=latest
    cmake --build build

    Pinning SDK versions

    Pin SDK versions for reproducible builds by specifying an exact release, such as -DLIVEKIT_SDK_VERSION=0.2.7, instead of latest.

Connecting to LiveKit

The following example connects to LiveKit using a pre-generated token supplied through an environment variable. It publishes audio and video tracks that stream synthetic test data: a 440 Hz sine wave tone and an animated color gradient.

For a production app, you need to make the following changes:

  • Generate tokens on your server and fetch them at runtime instead of relying on a pre-generated token.
  • Replace the synthetic test data generation with real camera and microphone capture. To learn more, see Permissions and entitlements.
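One way to prepare for that first change is to isolate token acquisition behind a small helper, so swapping the environment-variable lookup for a request to your server's token endpoint only touches one function. The sketch below is illustrative and not part of the LiveKit SDK; `getAccessToken` is a hypothetical name:

```cpp
#include <cstdlib>
#include <stdexcept>
#include <string>

// Illustrative helper (not part of the LiveKit SDK): keep token acquisition
// in one place. For now it reads the pre-generated token from the
// environment, matching this example.
std::string getAccessToken() {
    if (const char* token = std::getenv("LIVEKIT_TOKEN")) {
        return token;
    }
    // In production, fetch a fresh token from your own server here instead.
    throw std::runtime_error("LIVEKIT_TOKEN is not set");
}
```

With this in place, moving to server-generated tokens later means changing only the body of this function, not every call site.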

Example code

Create a file named example.cpp with the following content:

#include "livekit/livekit.h"

#include <atomic>
#include <chrono>
#include <cmath>
#include <csignal>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

std::atomic<bool> g_running{true};

void signalHandler(int) {
    g_running = false;
}

// Generate a simple sine wave audio tone (440 Hz)
void generateAudioThread(std::shared_ptr<livekit::AudioSource> audioSource) {
    constexpr double kPi = 3.14159265358979323846; // M_PI is not guaranteed by standard C++
    const int sample_rate = 48000;
    const int channels = 1;
    const int samples_per_10ms = sample_rate / 100; // 10 ms chunks
    const double frequency = 440.0;                 // A4 note

    std::vector<int16_t> buffer(samples_per_10ms * channels);
    size_t sample_index = 0;

    while (g_running) {
        // Generate sine wave samples
        for (int i = 0; i < samples_per_10ms; ++i) {
            double t = static_cast<double>(sample_index++) / sample_rate;
            double sample = std::sin(2.0 * kPi * frequency * t);
            buffer[i] = static_cast<int16_t>(sample * 16000.0);
        }
        // Create an AudioFrame and push it to the source
        livekit::AudioFrame frame(buffer, sample_rate, channels, samples_per_10ms);
        audioSource->captureFrame(frame);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

// Generate animated color gradient video
void generateVideoThread(std::shared_ptr<livekit::VideoSource> videoSource) {
    const int width = 1280;
    const int height = 720;
    const int fps = 30;
    int frame_count = 0;

    while (g_running) {
        // Create a VideoFrame with RGBA format
        auto frame = livekit::VideoFrame::create(width, height, livekit::VideoBufferType::RGBA);

        // Fill the frame with a moving color gradient
        uint8_t hue = (frame_count * 2) % 256;
        uint8_t* buffer = frame.data();
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                int idx = (y * width + x) * 4;
                buffer[idx + 0] = static_cast<uint8_t>((x * 255) / width);  // R
                buffer[idx + 1] = static_cast<uint8_t>((y * 255) / height); // G
                buffer[idx + 2] = hue;                                      // B
                buffer[idx + 3] = 255;                                      // A
            }
        }
        // Push the video frame to the source
        videoSource->captureFrame(frame);
        frame_count++;
        std::this_thread::sleep_for(std::chrono::milliseconds(1000 / fps));
    }
}

int main() {
    // Get connection details from environment variables
    const char* url = std::getenv("LIVEKIT_URL");
    const char* token = std::getenv("LIVEKIT_TOKEN");
    if (!url || !token) {
        std::cerr << "Set LIVEKIT_URL and LIVEKIT_TOKEN environment variables\n";
        return 1;
    }

    // Set up a signal handler so Ctrl+C shuts down cleanly
    std::signal(SIGINT, signalHandler);

    // Initialize the LiveKit SDK
    livekit::initialize(livekit::LogSink::kConsole);

    // Create and connect to a room
    auto room = std::make_unique<livekit::Room>();
    livekit::RoomOptions options;
    options.auto_subscribe = true;
    options.dynacast = false;
    if (!room->Connect(url, token, options)) {
        std::cerr << "Failed to connect\n";
        livekit::shutdown();
        return 1;
    }
    std::cout << "Connected to room\n";

    // Create and publish an audio track
    auto audioSource = std::make_shared<livekit::AudioSource>(48000, 1, 10);
    auto audioTrack = livekit::LocalAudioTrack::createLocalAudioTrack("audio", audioSource);
    livekit::TrackPublishOptions audioOpts;
    audioOpts.source = livekit::TrackSource::SOURCE_MICROPHONE;
    try {
        auto audioPub = room->localParticipant()->publishTrack(audioTrack, audioOpts);
        std::cout << "Published audio track: " << audioPub->sid() << "\n";
    } catch (const std::exception& e) {
        std::cerr << "Failed to publish audio: " << e.what() << "\n";
        return 1;
    }

    // Create and publish a video track
    auto videoSource = std::make_shared<livekit::VideoSource>(1280, 720);
    auto videoTrack = livekit::LocalVideoTrack::createLocalVideoTrack("video", videoSource);
    livekit::TrackPublishOptions videoOpts;
    videoOpts.source = livekit::TrackSource::SOURCE_CAMERA;
    try {
        auto videoPub = room->localParticipant()->publishTrack(videoTrack, videoOpts);
        std::cout << "Published video track: " << videoPub->sid() << "\n";
    } catch (const std::exception& e) {
        std::cerr << "Failed to publish video: " << e.what() << "\n";
        return 1;
    }

    // Start the audio and video generation threads
    std::thread audioThread(generateAudioThread, audioSource);
    std::thread videoThread(generateVideoThread, videoSource);
    std::cout << "Streaming audio and video. Press Ctrl+C to stop...\n";

    // Wait until the signal handler clears the flag
    while (g_running) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    // The generator threads exit their loops once g_running is false
    audioThread.join();
    videoThread.join();

    // Clean up
    room.reset();
    livekit::shutdown();
    return 0;
}
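The audio path pushes frames in 10 ms chunks, so the chunk size in samples follows directly from the sample rate. A quick standalone check of that arithmetic (the helper name is illustrative, not an SDK function):

```cpp
// Samples per chunk for a given sample rate and chunk duration.
// At 48 kHz, a 10 ms chunk is 480 samples, matching samples_per_10ms above.
constexpr int samplesPerChunk(int sample_rate_hz, int chunk_ms) {
    return sample_rate_hz * chunk_ms / 1000;
}
```

This is also why the example divides the sample rate by 100: 48000 / 100 = 480 samples per 10 ms chunk.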

Running the example

  1. Set environment variables with your LiveKit server URL and access token:

    export LIVEKIT_URL=wss://your-livekit-server.com
    export LIVEKIT_TOKEN=your-access-token

    You need to generate tokens on your server for production use.

  2. Run the example:

    ./build/example

    The program connects to a LiveKit room and streams a 440 Hz sine wave audio tone and an animated color gradient video. Press Ctrl+C to stop.

Capturing real media

This example generates synthetic audio and video data. For real camera/microphone capture, see the SimpleRoom example in the SDK repository, which uses SDL3 for device access.
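Wiring up a real microphone mostly means converting whatever sample format your capture API delivers into the int16 PCM the example feeds to AudioFrame. A minimal sketch of that conversion, assuming the capture callback hands you interleaved float samples in [-1.0, 1.0] (the capture API itself is whatever SDL3 or your platform provides):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert interleaved float samples (range [-1.0, 1.0]) from a capture
// callback into int16 PCM suitable for an audio frame buffer.
// Out-of-range values are clamped to avoid integer overflow.
std::vector<int16_t> floatToInt16Pcm(const float* samples, size_t count) {
    std::vector<int16_t> pcm(count);
    for (size_t i = 0; i < count; ++i) {
        float s = std::clamp(samples[i], -1.0f, 1.0f);
        pcm[i] = static_cast<int16_t>(s * 32767.0f);
    }
    return pcm;
}
```

The resulting vector can be passed to livekit::AudioFrame in place of the synthetic sine wave buffer, keeping the same sample rate and channel count you declared when creating the AudioSource.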

Permissions and entitlements

For most C++ applications, such as desktop or headless environments, no operating system permission prompts are required unless your app accesses physical devices like microphones or cameras.

Headless or synthetic sources

No special device permissions are required for headless or synthetic sources.

Microphone or camera capture

The following permissions are required for microphone or camera capture:

  • macOS: Add NSMicrophoneUsageDescription and NSCameraUsageDescription to your app bundle if applicable.
  • Windows/Linux: Permissions are typically managed by the OS or user session; ensure device access is available.
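On macOS, the usage description keys above go in the app bundle's Info.plist. A minimal fragment (the description strings are placeholders; write your own user-facing explanations):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to publish audio to LiveKit rooms.</string>
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to publish video to LiveKit rooms.</string>
```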

Next steps

The following resources are useful for getting started with LiveKit on C++.