Voice AI quickstart
To build your first voice AI app for Android, use the following quickstart and the starter app. Otherwise follow the getting started guide below.
Voice AI quickstart
Android Voice Assistant
Getting started guide
This guide is for Android apps using the traditional view-based system. If you are using Jetpack Compose, check out the Compose quickstart guide.
Install LiveKit SDK
LiveKit for Android is available as a Maven package.
...
dependencies {
    implementation "io.livekit:livekit-android:<current version>"
}
See the releases page for information on the latest version of the SDK.
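If your module's build script uses the Gradle Kotlin DSL instead, a roughly equivalent dependency declaration (a sketch, assuming a standard build.gradle.kts) looks like this:

// build.gradle.kts (Kotlin DSL) equivalent of the Groovy snippet above
dependencies {
    implementation("io.livekit:livekit-android:<current version>")
}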
You'll also need JitPack as one of your repositories. In your settings.gradle file:
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        // ...
        maven { url 'https://jitpack.io' }
    }
}
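If your project uses the Kotlin DSL for settings (settings.gradle.kts), a roughly equivalent sketch is:

// settings.gradle.kts (Kotlin DSL) equivalent
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        // ...
        maven { url = uri("https://jitpack.io") }
    }
}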
Permissions
LiveKit relies on the RECORD_AUDIO and CAMERA permissions to use the microphone and camera. These permissions must be requested at runtime, like so:
private fun requestPermissions() {
    val requestPermissionLauncher =
        registerForActivityResult(ActivityResultContracts.RequestMultiplePermissions()) { grants ->
            for (grant in grants.entries) {
                if (!grant.value) {
                    Toast.makeText(
                        this,
                        "Missing permission: ${grant.key}",
                        Toast.LENGTH_SHORT
                    ).show()
                }
            }
        }

    val neededPermissions = listOf(Manifest.permission.RECORD_AUDIO, Manifest.permission.CAMERA)
        .filter {
            ContextCompat.checkSelfPermission(
                this,
                it
            ) == PackageManager.PERMISSION_DENIED
        }
        .toTypedArray()

    if (neededPermissions.isNotEmpty()) {
        requestPermissionLauncher.launch(neededPermissions)
    }
}
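Both permissions also need to be declared in your AndroidManifest.xml. The helper above only performs the runtime request; a minimal sketch of calling it from your Activity's onCreate(), before connecting to a room, might look like this:

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    // Request microphone and camera access up front, before joining a room.
    requestPermissions()
}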
Connect to LiveKit
Use the following code to connect and publish audio/video to a room, while rendering the video from other connected participants.
LiveKit uses SurfaceViewRenderer to render video tracks. A TextureView implementation is also provided through TextureViewRenderer. Subscribed audio tracks are automatically played.
Note that this example hardcodes a token that expires in 2 hours. In a real app, you'll need your server to generate a token for you.
// !! Note !!
// This sample hardcodes a token which expires in 2 hours.
const val wsURL = "<your LiveKit server URL>"
const val token = "<generate a token>"
// In production you should generate tokens on your server, and your frontend
// should request a token from your server.

class MainActivity : AppCompatActivity() {

    lateinit var room: Room

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Create Room object.
        room = LiveKit.create(applicationContext)

        // Setup the video renderer
        room.initVideoRenderer(findViewById<SurfaceViewRenderer>(R.id.renderer))

        connectToRoom()
    }

    private fun connectToRoom() {
        lifecycleScope.launch {
            // Setup event handling.
            launch {
                room.events.collect { event ->
                    when (event) {
                        is RoomEvent.TrackSubscribed -> onTrackSubscribed(event)
                        else -> {}
                    }
                }
            }

            // Connect to server.
            room.connect(
                wsURL,
                token,
            )

            // Publish audio/video to the room
            val localParticipant = room.localParticipant
            localParticipant.setMicrophoneEnabled(true)
            localParticipant.setCameraEnabled(true)
        }
    }

    private fun onTrackSubscribed(event: RoomEvent.TrackSubscribed) {
        val track = event.track
        if (track is VideoTrack) {
            attachVideo(track)
        }
    }

    private fun attachVideo(videoTrack: VideoTrack) {
        videoTrack.addRenderer(findViewById<SurfaceViewRenderer>(R.id.renderer))
        findViewById<View>(R.id.progress).visibility = View.GONE
    }
}
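When the session ends (for example, when the Activity is destroyed), disconnect from the room so the connection and published tracks are cleaned up. A minimal sketch, using the room property from the example above:

override fun onDestroy() {
    super.onDestroy()
    // Leave the room and stop publishing when the Activity goes away.
    room.disconnect()
}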
(For more details, you can reference the complete sample app.)
Next steps
The following resources are useful for getting started with LiveKit on Android.
Generating tokens
Guide to generating authentication tokens for your users.
Realtime media
Complete documentation for live video and audio tracks.
Realtime data
Send and receive realtime data between clients.
Android SDK
LiveKit Android SDK on GitHub.
Android components
LiveKit Android components on GitHub.
Android SDK reference
LiveKit Android SDK reference docs.
Android components reference
LiveKit Android components reference docs.