The following guide uses the C++ SDK to build an example app that connects to a room and publishes audio and video tracks.
Install the C++ SDK
These instructions use the prebuilt SDK release. If you want to build the SDK from source, see the C++ SDK repository.
The C++ example collection on GitHub includes a CMake helper script, `cmake/LiveKitSDK.cmake`, that automatically downloads a prebuilt LiveKit C++ SDK release from GitHub during the CMake configure step.
Include the following in your app's `CMakeLists.txt` file:

```cmake
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
include(LiveKitSDK)

livekit_sdk_setup(
  VERSION "${LIVEKIT_SDK_VERSION}"
  SDK_DIR "${CMAKE_BINARY_DIR}/_deps/livekit-sdk"
  GITHUB_TOKEN "$ENV{GITHUB_TOKEN}"
)

find_package(LiveKit CONFIG REQUIRED)

# add_executable(example example.cpp)
# target_link_libraries(example PRIVATE LiveKit::livekit)
```

Copy the `cmake/LiveKitSDK.cmake` helper file into your project.

Configure and build your project:

```shell
cmake -B build -DLIVEKIT_SDK_VERSION=latest
cmake --build build
```

Pinning SDK versions

Pin SDK versions for reproducible builds by specifying a version like `-DLIVEKIT_SDK_VERSION=0.2.7` instead of `latest`.
Connecting to LiveKit
The following example connects to LiveKit using a hardcoded token set as an environment variable. It publishes audio and video tracks, and streams synthetic test data (a sine wave tone and animated color gradient).
For a production app, you need to make the following changes:
- Generate a token on your server and replace the hardcoded token with the generated token.
- Replace the synthetic test data generation with real camera and microphone capture. To learn more, see Permissions and entitlements.
Example code
Create a file named `example.cpp` with the following content:

```cpp
#include "livekit/livekit.h"

#include <atomic>
#include <chrono>
#include <cmath>
#include <csignal>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

std::atomic<bool> g_running{true};

void signalHandler(int) {
    g_running = false;
}

// Generate a simple sine wave audio tone (440 Hz)
void generateAudioThread(std::shared_ptr<livekit::AudioSource> audioSource) {
    const int sample_rate = 48000;
    const int channels = 1;
    const int samples_per_10ms = sample_rate / 100; // 10ms chunks
    const double frequency = 440.0; // A4 note

    std::vector<int16_t> buffer(samples_per_10ms * channels);
    size_t sample_index = 0;

    while (g_running) {
        // Generate sine wave samples
        for (int i = 0; i < samples_per_10ms; ++i) {
            double t = static_cast<double>(sample_index++) / sample_rate;
            double sample = std::sin(2.0 * M_PI * frequency * t);
            buffer[i] = static_cast<int16_t>(sample * 16000.0);
        }

        // Create AudioFrame and push it
        livekit::AudioFrame frame(buffer, sample_rate, channels, samples_per_10ms);
        audioSource->captureFrame(frame);

        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

// Generate animated color gradient video
void generateVideoThread(std::shared_ptr<livekit::VideoSource> videoSource) {
    const int width = 1280;
    const int height = 720;
    const int fps = 30;

    int frame_count = 0;

    while (g_running) {
        // Create a VideoFrame with RGBA format
        auto frame = livekit::VideoFrame::create(width, height, livekit::VideoBufferType::RGBA);

        // Generate a moving color gradient
        uint8_t hue = (frame_count * 2) % 256;
        uint8_t* buffer = frame.data();
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                int idx = (y * width + x) * 4;
                buffer[idx + 0] = static_cast<uint8_t>((x * 255) / width);  // R
                buffer[idx + 1] = static_cast<uint8_t>((y * 255) / height); // G
                buffer[idx + 2] = hue; // B
                buffer[idx + 3] = 255; // A
            }
        }

        // Push video frame
        videoSource->captureFrame(frame);
        frame_count++;

        std::this_thread::sleep_for(std::chrono::milliseconds(1000 / fps));
    }
}

int main() {
    // Get connection details from environment variables
    const char* url = std::getenv("LIVEKIT_URL");
    const char* token = std::getenv("LIVEKIT_TOKEN");
    if (!url || !token) {
        std::cerr << "Set LIVEKIT_URL and LIVEKIT_TOKEN environment variables\n";
        return 1;
    }

    // Setup signal handler for Ctrl+C
    std::signal(SIGINT, signalHandler);

    // Initialize LiveKit SDK
    livekit::initialize(livekit::LogSink::kConsole);

    // Create and connect to room
    auto room = std::make_unique<livekit::Room>();
    livekit::RoomOptions options;
    options.auto_subscribe = true;
    options.dynacast = false;

    if (!room->Connect(url, token, options)) {
        std::cerr << "Failed to connect\n";
        livekit::shutdown();
        return 1;
    }
    std::cout << "Connected to room\n";

    // Create and publish audio track
    auto audioSource = std::make_shared<livekit::AudioSource>(48000, 1, 10);
    auto audioTrack = livekit::LocalAudioTrack::createLocalAudioTrack("audio", audioSource);

    livekit::TrackPublishOptions audioOpts;
    audioOpts.source = livekit::TrackSource::SOURCE_MICROPHONE;

    try {
        auto audioPub = room->localParticipant()->publishTrack(audioTrack, audioOpts);
        std::cout << "Published audio track: " << audioPub->sid() << "\n";
    } catch (const std::exception& e) {
        std::cerr << "Failed to publish audio: " << e.what() << "\n";
        return 1;
    }

    // Create and publish video track
    auto videoSource = std::make_shared<livekit::VideoSource>(1280, 720);
    auto videoTrack = livekit::LocalVideoTrack::createLocalVideoTrack("video", videoSource);

    livekit::TrackPublishOptions videoOpts;
    videoOpts.source = livekit::TrackSource::SOURCE_CAMERA;

    try {
        auto videoPub = room->localParticipant()->publishTrack(videoTrack, videoOpts);
        std::cout << "Published video track: " << videoPub->sid() << "\n";
    } catch (const std::exception& e) {
        std::cerr << "Failed to publish video: " << e.what() << "\n";
        return 1;
    }

    // Start audio and video generation threads
    std::thread audioThread(generateAudioThread, audioSource);
    std::thread videoThread(generateVideoThread, videoSource);

    std::cout << "Streaming audio and video. Press Ctrl+C to stop...\n";

    // Wait for signal
    while (g_running) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    // Stop capture threads
    audioThread.join();
    videoThread.join();

    // Cleanup
    room.reset();
    livekit::shutdown();
    return 0;
}
```
Running the example
Set environment variables with your LiveKit server URL and access token:
```shell
export LIVEKIT_URL=wss://your-livekit-server.com
export LIVEKIT_TOKEN=your-access-token
```

For production use, generate tokens on your server.
Run the example:
```shell
./build/example
```

The program connects to a LiveKit room and streams a 440 Hz sine wave audio tone and an animated color gradient video. Press `Ctrl+C` to stop.
This example generates synthetic audio and video data. For real camera and microphone capture, see the SimpleRoom example in the SDK repository, which uses SDL3 for device access.
Permissions and entitlements
Most C++ applications, whether desktop or headless, don't trigger operating system permission prompts unless they access physical devices such as microphones or cameras.
Headless or synthetic sources
No special device permissions are required for headless or synthetic sources.
Microphone or camera capture
The following permissions are required for microphone or camera capture:
| Operating system | Required permissions |
|---|---|
| macOS | Add `NSMicrophoneUsageDescription` and `NSCameraUsageDescription` to your app bundle if applicable. |
| Windows/Linux | Permissions are typically managed by the OS or user session; ensure device access is available. |
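On macOS, the usage-description keys go in your app bundle's `Info.plist`. A minimal fragment might look like the following; the description strings are placeholders, and you should write your own user-facing text:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to publish audio to LiveKit rooms.</string>
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to publish video to LiveKit rooms.</string>
```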
Next steps
The following resources are useful for getting started with LiveKit on C++.
Generating tokens
Guide to generating authentication tokens for your users.
Realtime media
Complete documentation for live video and audio tracks.
Realtime data
Send and receive realtime data between clients.
C++ example collection
Small, self-contained examples that demonstrate how to connect, publish tracks, share data, and more.
C++ SDK
LiveKit C++ SDK on GitHub.
C++ SDK reference
LiveKit C++ SDK reference docs.