Deploying the Egress Service
The Egress service uses Redis messaging queues to load balance and communicate with your LiveKit server.
It takes a YAML config file:
```yaml
# required fields
api_key: livekit server api key. LIVEKIT_API_KEY env can be used instead
api_secret: livekit server api secret. LIVEKIT_API_SECRET env can be used instead
ws_url: livekit server websocket url. LIVEKIT_WS_URL can be used instead
redis:
  address: must be the same redis address used by your livekit server
  username: redis username
  password: redis password
  db: redis db

# optional fields
health_port: if used, will open an http port for health checks
prometheus_port: port used to collect prometheus metrics. Used for autoscaling
log_level: debug, info, warn, or error (default info)
template_base: can be used to host custom templates (default https://egress-composite.livekit.io/#)
insecure: can be used to connect to an insecure websocket (default false)

# file upload config - only one of the following. Can be overridden per-request
s3:
  access_key: AWS_ACCESS_KEY_ID env can be used instead
  secret: AWS_SECRET_ACCESS_KEY env can be used instead
  region: AWS_DEFAULT_REGION env can be used instead
  endpoint: optional custom endpoint
  bucket: bucket to upload files to
azure:
  account_name: AZURE_STORAGE_ACCOUNT env can be used instead
  account_key: AZURE_STORAGE_KEY env can be used instead
  container_name: container to upload files to
gcp:
  credentials_json: GOOGLE_APPLICATION_CREDENTIALS env can be used instead
  bucket: bucket to upload files to
```
The config file can be added to a mounted volume with its location passed in the EGRESS_CONFIG_FILE env var, or its body can be passed in the EGRESS_CONFIG_BODY env var.
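For example, if you prefer not to mount a volume, one way to pass the config body directly is via command substitution (this assumes the config above is saved locally as config.yaml):

```shell
# Pass the entire config body through the EGRESS_CONFIG_BODY env var
docker run --rm \
  -e EGRESS_CONFIG_BODY="$(cat config.yaml)" \
  livekit/egress
```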
To run against a local LiveKit server (not recommended for a production setup), you'll need to do the following:
- Open /usr/local/etc/redis.conf and comment out the line that says bind 127.0.0.1
- Change protected-mode yes to protected-mode no in the same file
- ws_url needs to be set using the IP address as Docker sees it. On Linux, this is typically 172.17.0.1. On Mac or Windows, run docker run -it --rm alpine nslookup host.docker.internal and you should see something like Name: host.docker.internal Address: 192.168.65.2
These changes allow the service to connect to your local redis instance from inside the docker container.
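After editing, the relevant lines of redis.conf might look roughly like this (local development only):

```
# bind 127.0.0.1        <- commented out so the egress container can reach redis
protected-mode no
```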
Create a directory to mount. In this example, we will use ~/livekit-egress.
Create a config.yaml in the above directory. redis and ws_url should use the above IP instead of localhost, and insecure should be set to true:

```yaml
log_level: debug
api_key: your-api-key
api_secret: your-api-secret
ws_url: ws://192.168.65.2:7880
insecure: true
redis:
  address: 192.168.65.2:6379
```
Then to run the service:
```shell
docker run --rm \
  -e EGRESS_CONFIG_FILE=/out/config.yaml \
  -v ~/livekit-egress:/out \
  livekit/egress
```
You can then use our cli to submit recording requests to your server.
If you already deployed the server using our Helm chart, jump to helm install below.
Ensure Helm is installed on your machine.
Add the LiveKit repo:

```shell
$ helm repo add livekit https://helm.livekit.io
```
Create a values.yaml for your deployment, using egress-sample.yaml as a template. Each instance can record one room at a time, so be sure to either enable autoscaling, or set replicaCount >= the number of rooms you'll need to simultaneously record.
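As a rough sketch only (key names here are assumptions; egress-sample.yaml is the authoritative reference and may structure things differently), a values.yaml could look something like this:

```yaml
# Hypothetical values.yaml sketch - verify every key against egress-sample.yaml
replicaCount: 3          # or enable autoscaling below instead

# egress service config (same fields as the config file shown earlier)
egress:
  log_level: info
  api_key: your-api-key
  api_secret: your-api-secret
  ws_url: wss://your-livekit-server
  redis:
    address: your-redis-host:6379
```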
Then install the chart:

```shell
$ helm install <instance_name> livekit/egress --namespace <namespace> --values values.yaml
```
We'll publish new versions of the chart with new egress releases. To fetch these updates and upgrade your installation, run:
```shell
$ helm repo update
$ helm upgrade <instance_name> livekit/egress --namespace <namespace> --values values.yaml
```
Room Composite egress is limited to one per instance and typically uses 2.5-3 CPUs. For this reason, pods with 4 CPUs are recommended if you will be using Room Composite egress.
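Assuming the chart exposes a standard Kubernetes resources block (check egress-sample.yaml for the exact key), that recommendation could be expressed along these lines:

```yaml
# Illustrative only - the 4 CPU figure follows the recommendation above
resources:
  requests:
    cpu: "4"
  limits:
    cpu: "4"
```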
The livekit_egress_available Prometheus metric is also provided to support autoscaling; prometheus_port must be defined in your config.
With this metric, each instance looks at its own CPU utilization and decides whether it is available to accept incoming requests.
This can be more accurate than using average CPU or memory utilization, because requests are long-running and are resource intensive.
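For example, the metrics endpoint can be enabled in the egress config like this (the port number is an arbitrary choice):

```yaml
prometheus_port: 9090
```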
To keep at least 3 instances available:
sum(livekit_egress_available) > 3
To keep at least 30% of your egress instances available:
sum(livekit_egress_available)/sum(kube_pod_labels{label_project=~"^.*egress.*"}) > 0.3
There are 3 options for autoscaling: targetCPUUtilizationPercentage, targetMemoryUtilizationPercentage, and custom.
```yaml
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  # targetCPUUtilizationPercentage: 60
  # targetMemoryUtilizationPercentage: 60
  # custom:
  #   metricName: my_metric_name
  #   targetAverageValue: 70
```
To use custom, you'll need to install the prometheus adapter. You can then create a Kubernetes custom metric based off the livekit_egress_available Prometheus metric.
You can find an example on how to do this here.
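As a rough sketch (the linked example is authoritative; this assumes the standard prometheus-adapter rule format and may need adjusting for your adapter version and labels), the adapter config might expose the metric like this:

```yaml
# Hypothetical prometheus-adapter rule exposing livekit_egress_available
# as a Kubernetes custom metric
rules:
  - seriesQuery: 'livekit_egress_available'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      as: "livekit_egress_available"
    metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```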