LiveKit C++ SDK
Real-time audio/video SDK for C++
WebRTC Audio Processing Module (APM) for real-time audio enhancement.
#include <audio_processing_module.h>

Classes

struct Options
    Configuration options for the Audio Processing Module.
Public Member Functions

AudioProcessingModule ()
    Create a new Audio Processing Module with default options (all disabled).
AudioProcessingModule (const Options &options)
    Create a new Audio Processing Module with the specified options.
AudioProcessingModule (const AudioProcessingModule &) = delete
AudioProcessingModule & operator= (const AudioProcessingModule &) = delete
AudioProcessingModule (AudioProcessingModule &&) noexcept = default
AudioProcessingModule & operator= (AudioProcessingModule &&) noexcept = default
void processStream (AudioFrame &frame)
    Process the forward (near-end/microphone) audio stream.
void processReverseStream (AudioFrame &frame)
    Process the reverse (far-end/speaker) audio stream.
void setStreamDelayMs (int delay_ms)
    Set the estimated delay between the reverse and forward streams.
Detailed Description

WebRTC Audio Processing Module (APM) for real-time audio enhancement.
AudioProcessingModule exposes WebRTC's built-in audio processing capabilities including echo cancellation, noise suppression, automatic gain control, and high-pass filtering.
This class is designed for scenarios where you need explicit control over audio processing, separate from the built-in processing in AudioSource.
Typical usage pattern for echo cancellation:
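A minimal sketch of this pattern is shown below. The Options field names (echo_cancellation, noise_suppression) and the frame-source helpers (nextPlayoutFrame, nextCaptureFrame, estimateDelayMs) are illustrative assumptions, not names confirmed by this reference:

```cpp
#include "audio_processing_module.h"

void captureLoop() {
  // Assumed Options field names -- check the Options struct for the
  // actual member names in your SDK version.
  livekit::AudioProcessingModule::Options options;
  options.echo_cancellation = true;  // assumed field name
  options.noise_suppression = true;  // assumed field name
  livekit::AudioProcessingModule apm(options);

  while (true) {
    // 1. Feed the far-end (speaker) audio as the echo reference.
    livekit::AudioFrame render_frame = nextPlayoutFrame();  // hypothetical helper
    apm.processReverseStream(render_frame);

    // 2. Report the current render/capture path delay (required when
    //    echo processing is enabled).
    apm.setStreamDelayMs(estimateDelayMs());  // hypothetical helper

    // 3. Process the near-end (microphone) audio; echo is removed in-place.
    livekit::AudioFrame capture_frame = nextCaptureFrame();  // hypothetical helper
    apm.processStream(capture_frame);
  }
}
```

Each frame passed to either stream must be exactly 10 ms long, per the note below.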
Note: Audio frames must be exactly 10ms in duration.
livekit::AudioProcessingModule::AudioProcessingModule ()

Create a new Audio Processing Module with default options (all disabled).

Exceptions
    std::runtime_error    if the APM could not be created.
livekit::AudioProcessingModule::AudioProcessingModule (const Options &options)  [explicit]

Create a new Audio Processing Module with the specified options.

Parameters
    options    Configuration for which processing features to enable.

Exceptions
    std::runtime_error    if the APM could not be created.
void livekit::AudioProcessingModule::processReverseStream (AudioFrame &frame)
Process the reverse (far-end/speaker) audio stream.
This method provides the reference signal for echo cancellation. Call this with the audio that is being played through the speakers, so the APM can learn the acoustic characteristics and remove the echo from the microphone signal.
The audio data is modified in-place.
Parameters
    frame    The audio frame to process (modified in-place).

Exceptions
    std::runtime_error    if processing fails.
void livekit::AudioProcessingModule::processStream (AudioFrame &frame)
Process the forward (near-end/microphone) audio stream.
This method processes audio captured from the local microphone. It applies the enabled processing features (noise suppression, gain control, etc.) and removes echo based on the reference signal provided via processReverseStream().
The audio data is modified in-place.
Parameters
    frame    The audio frame to process (modified in-place).

Exceptions
    std::runtime_error    if processing fails.
void livekit::AudioProcessingModule::setStreamDelayMs (int delay_ms)
Set the estimated delay between the reverse and forward streams.
This must be called if and only if echo processing is enabled.
Sets the delay in ms between processReverseStream() receiving a far-end frame and processStream() receiving a near-end frame containing the corresponding echo. On the client side this can be expressed as:

delay = (t_render - t_analyze) + (t_process - t_capture)

where:
    t_analyze is the time a frame is passed to processReverseStream(),
    t_render is the time the first sample of the same frame is rendered by the audio hardware,
    t_capture is the time the first sample of a frame is captured by the audio hardware, and
    t_process is the time the same frame is passed to processStream().
Parameters
    delay_ms    Delay in milliseconds.

Exceptions
    std::runtime_error    if setting the delay fails.