
Configuring Matrix RTC

See how to download example files from the Helm chart here.

Configuration

For a quick setup using the default settings, see the minimal fragment example in charts/matrix-stack/ci/fragments/matrix-rtc-minimal.yaml.
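
For illustration only (this is not the contents of that file), a minimal configuration might look like the following; the hostname is a placeholder:

matrixRTC:
  ingress:
    host: mrtc.example.com  ## placeholder hostname, replace with your own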

Credentials

Credentials are generated if possible. Alternatively, they can be provided inline in the values with value, or, if you have an existing Secret in the cluster in the same namespace, you can use secret and secretKey to reference it.

If you don't want the chart to generate the secret, refer to the following values fragment examples to see the secrets to configure.

Matrix RTC requires the livekitAuth.secret secret; a sketch of both ways to provide it follows the list below:

  • charts/matrix-stack/ci/fragments/matrix-rtc-secrets-in-helm.yaml
  • charts/matrix-stack/ci/fragments/matrix-rtc-secrets-externally.yaml
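
For illustration, a sketch of both forms for this secret, following the value / secret and secretKey pattern described above (the Secret name and key below are placeholders):

matrixRTC:
  livekitAuth:
    secret:
      ## Either provide the secret inline:
      value: <the LiveKit auth secret>
      ## Or reference an existing Secret in the same namespace
      ## (instead of value):
      # secret: my-livekit-secret
      # secretKey: LIVEKIT_SECRET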

SFU Networking

Matrix RTC SFU networking relies on NodePort services by default. This means that the node must be reachable from outside of the cluster. The default ports are:

  • RTC TCP: 30881/TCP
  • RTC Muxed UDP: 30882/UDP

This can be configured using matrixRTC.sfu.exposedServices.
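
For example, to change the exposed NodePorts (a sketch; the exact key names under exposedServices are an assumption here, so check the chart's values.yaml for the authoritative structure):

matrixRTC:
  sfu:
    exposedServices:
      ## Assumed key names; verify against the chart's values.yaml
      rtcTcp:
        port: 30881
      rtcMuxedUdp:
        port: 30882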

STUN discovery

The default SFU networking relies on STUN to discover the SFU's public IP, which is then automatically advertised to the clients. The STUN servers can be configured in the LiveKit configuration using the additional section:

matrixRTC:
  sfu:
    additional: |
      rtc:
        stun_servers:
          - ip:port
          - ip:port
          - ...

Accessing from behind a Load Balancer

If you are behind a Load Balancer, you must forward the ports from the Load Balancer to the nodes, and the ports must be the same on both. In this situation, the SFU cannot discover the Load Balancer's public IP using the STUN method, so you must manually pass the IP that the SFU will advertise to the clients.

matrixRTC:
  sfu:
    useStunToDiscoverPublicIP: false
    manualIP: <the load balancer IP>

Additional SFU configuration

Additional Matrix RTC SFU configuration can be provided inline in the values or referenced from an existing Secret:

matrixRTC:
  sfu:
    additional:
      ## Either provide the config to inject inline:
      1-custom-config:
        config: |
          admin_contact: "mailto:admin@example.com"
      ## Or reference an existing `Secret` by:
      2-custom-config:
        configSecret: custom-matrix-rtc-config
        configSecretKey: shared.yaml

Disabling Matrix RTC

Matrix RTC is enabled by default and can be disabled with the following values:

matrixRTC:
  enabled: false

Troubleshooting

Error Code: MISSING_MATRIX_RTC_FOCUS when setting up a call

Matrix RTC must be able to fetch details of where the SFU and authorisation services are hosted. This is achieved by making requests to the Matrix client well-known file at https://<server name>/.well-known/matrix/client. This must happen over an HTTPS connection, and the browser must trust the TLS certificates presented for this connection.

  • Confirm that Matrix RTC isn't disabled in your deployment with matrixRTC.enabled: false (it is enabled by default)
  • Confirm wellKnownDelegation isn't disabled in your deployment with wellKnownDelegation.enabled: false (it is enabled by default)
  • Confirm the value of serverName is accessible over HTTPS and returns JSON: https://<server name>/.well-known/matrix/client (see the example commands after this list)
    • Confirm that the response body includes org.matrix.msc4143.rtc_foci
    • Confirm that the value of livekit_service_url is the value of matrixRTC.ingress.host with https:// prefixed
  • Confirm the value of matrixRTC.ingress.host is accessible over HTTPS and returns an HTTP 405: https://<matrixRTC.ingress.host>/sfu/get
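
These checks can be run from any machine with curl, for example (hostnames in angle brackets are placeholders for your own values):

curl -sSf https://<server name>/.well-known/matrix/client

## The SFU endpoint is expected to return an HTTP 405:
curl -s -o /dev/null -w '%{http_code}\n' https://<matrixRTC.ingress.host>/sfu/get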

If your serverName is accessible with public DNS over the internet, you can use the Federation Tester tool to validate that it is accessible and has a generally trusted TLS certificate.

Matrix RTC authoriser logs say Failed to look up user info

The Matrix RTC authoriser must be able to query Matrix servers (including your own) to determine who is attempting to connect to it. It uses the Matrix Federation APIs for this.

As a result, the authoriser Pod must be able to reach both the https://<server name>/.well-known/matrix/server endpoint and https://<synapse.ingress.host>/_matrix/federation/v1/openid/userinfo (amongst other things). Symptoms of problems in this area include logging in the authoriser Pod saying it can't connect to port 8448.

  • Confirm wellKnownDelegation isn't disabled in your deployment with wellKnownDelegation.enabled: false (it is enabled by default)
  • Confirm the value of serverName is accessible over HTTPS and returns JSON: https://<server name>/.well-known/matrix/server (see the example commands after this list)
    • Confirm that the value of m.server is the value of synapse.ingress.host with :443 suffixed
  • Confirm the value of synapse.ingress.host is accessible over HTTPS and returns JSON: https://<synapse.ingress.host>/_matrix/key/v2/server
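
These endpoints can likewise be checked with curl (hostnames are placeholders):

## m.server in the response should be <synapse.ingress.host>:443
curl -sSf https://<server name>/.well-known/matrix/server
curl -sSf https://<synapse.ingress.host>/_matrix/key/v2/server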

If these all work outside of the cluster, it may be that the Pods inside the cluster can't access them (see the in-cluster check after the example below). You can tell the Matrix RTC authoriser to directly hit your ingress controller IP:

matrixRTC:
  hostAliases:
  - hostnames:
    - ess.private
    - mrtc.ess.private
    - synapse.ess.private
    ip: "<the spec.clusterIP of your Ingress Controller's Service>"
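
To confirm whether Pods inside the cluster can reach these endpoints at all, one option is to run a throwaway curl Pod in the same namespace (a sketch; the Pod name and image are arbitrary example choices):

kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv https://<server name>/.well-known/matrix/server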

If you are using a TLS certificate signed by a certificate authority that isn't in standard TLS trust stores (i.e. it is your own), you will need to either trust it or disable TLS verification in the Matrix RTC authoriser.

Trusting it:

certificateAuthorities:
- certificate: |
    -----BEGIN CERTIFICATE-----
    a PEM encoded CA certificate
    -----END CERTIFICATE-----
- secret: "existing-ca-secret"
  secretKey: "ca.crt"

Or by disabling TLS verification:

matrixRTC:
  extraEnv:
  - name: LIVEKIT_INSECURE_SKIP_VERIFY_TLS
    value: YES_I_KNOW_WHAT_I_AM_DOING

SFU Connectivity troubleshooting

  1. Device -> (DNS resolution of <SFU FQDN>) -> <HTTPS requests/WebSocket> -> cluster (Ingress host of <SFU FQDN>) to open the signaling channel
  2. The signaling channel is opened
  3. The device does a STUN resolution against the STUN server configured in ElementCall. If no STUN server is configured, it falls back to Google STUN servers
  4. The device advertises its access IP through the signaling channel
  5. The SFU advertises its IP through the signaling channel
  6. The connection uses UDP and tries to do: device access IP <-> <Network> <-> IP advertised by the SFU

SFU IP Advertise modes

You can configure how the SFU chooses the IP it advertises using one of three options:

  1. Host IP: The SFU will advertise the host IP (matrixRTC.sfu.useStunToDiscoverPublicIP: false, matrixRTC.sfu.manualIP undefined)
  2. Manual IP: You decide which IP the SFU should advertise (matrixRTC.sfu.useStunToDiscoverPublicIP: false, matrixRTC.sfu.manualIP: <ip>)
  3. STUN: The SFU discovers its public IP using STUN (matrixRTC.sfu.useStunToDiscoverPublicIP: true, matrixRTC.sfu.manualIP undefined)

Host IP

A direct IP route must exist between the devices and the Node IP. Therefore, the Node IP should be in a public network.

If you are working on a Kubernetes deployment, you might want dedicated nodes with an IP in the public network. You can configure nodeSelectors and tolerations on the workloads to force the SFU to be scheduled on these publicly accessible nodes, as sketched below.
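
For example, assuming the chart exposes the standard Kubernetes nodeSelector and tolerations fields under matrixRTC.sfu (the exact value paths, the node label and the taint below are all assumptions; check the chart's values.yaml):

matrixRTC:
  sfu:
    ## Assumed field names; verify against the chart's values.yaml
    nodeSelector:
      example.com/public-network: "true"   ## example node label
    tolerations:
    - key: example.com/public-network      ## example taint
      operator: Exists
      effect: NoSchedule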

Manual IP

Traffic sent to the IP advertised by the SFU must be forwarded to the SFU Pod.

This mode can be used if you are configuring your SFU behind a Load Balancer. In this case, the device sends its UDP packets to the Load Balancer, on the same ports as the ones expected by the SFU. The Load Balancer then forwards the packets to the SFU ports.

STUN

The SFU will determine its public IP using STUN. It needs a working STUN server, and by default will use Google STUN servers.

This mode can be used if you are behind a NAT gateway. Routing from the SFU to the STUN server must traverse the NAT gateway for the access IP to be resolved successfully.

Device debugging tools

In ElementWeb

Make sure ElementCall is enabled in the room you are testing in. You can check under Room settings -> VoIP -> Enable ElementCall.

Verify STUN resolution

On Mac

On macOS, you can install the Homebrew formula stuntman. Once installed, you can use stunclient to verify which IP the STUN resolution discovers from a device: stunclient --protocol udp <stun server fqdn> <stun port>

The result should be the device's access IP, i.e. the address the SFU can reach it on.

On Linux

On Linux, you can install the coturn package. Once installed, you can use turnutils_stunclient to verify which IP the STUN resolution discovers from a device: turnutils_stunclient -p <stun port> <stun server fqdn>

On Windows

On Windows, you can install the stuntman Cygwin package. Once installed, you can use stunclient to verify which IP the STUN resolution discovers from a device: stunclient --protocol udp <stun server fqdn> <stun port>

About:WebRTC

While running an ElementCall session, you can open the WebRTC tools of Chrome (chrome://webrtc-internals) or Firefox (about:webrtc) to find which IPs are discovered and which connections are failing.

For example, in the screenshot below, using the Chrome tools, you can find the following information:

  1. The blue arrow points to the STUN servers that the device uses to discover its IP. They are the ones configured in ElementCall.
  2. The ICE connection state went New -> Completed, so it worked correctly
  3. The connection state went New -> Connected, so it worked correctly
  4. The signaling state went New -> Stable -> Have-Local-Offer -> Stable, so it worked correctly

The table contains the ICE candidate pairs, and we can find the SFU's advertised IP as expected. The blue arrow points to the STUN server that the clients will use to resolve their own IP address; it must be reachable from the clients.

About WebRTC screenshot

An example which would fail could show the following information, where we can see that the connection sent through the signaling channel failed to open:

Signaling Failed

Verify connection

netcat can be used to test that the device is able to contact the SFU Pod:

 nc -vnzu 192.100.0.2 30882
Connection to 192.100.0.2 port 30882 [udp/*] succeeded!
 nc -vnz 192.100.0.2 30881
Connection to 192.100.0.2 port 30881 [tcp/*] succeeded!

Note that because UDP is connectionless, nc reports success whenever the packet isn't actively rejected, so a "succeeded" result on the UDP port doesn't guarantee the SFU actually received it.