# Maintenance

## Upgrading
To upgrade your deployment, you should:

- Read the release notes of the new version and check for any breaking changes. The changelog is available in the right panel of the matrix-stack chart page.
- Adjust your values if necessary.
- Re-run the install command. It will upgrade your installation to the latest version of the chart.
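For reference, the re-run might look like the following sketch; the chart location, release name, namespace, and values file below are assumptions — use the exact command and values files from your original install:

```shell
# Assumed release name "ess", namespace "ess", chart location and values file;
# adjust all of these to match your original install command.
helm upgrade --install ess oci://ghcr.io/element-hq/ess-helm/matrix-stack \
  --namespace ess -f values.yaml
```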
## Backup & restore

### Backup
You need to back up a couple of things to be able to restore your deployment:

- Stop the Synapse and Matrix Authentication Service workloads.
- The database. Back up your database so that it can be restored on a new deployment.
  - If you are using the provided Postgres database, build a dump using the command `kubectl exec --namespace ess -it sts/ess-postgres -- pg_dumpall -U postgres > dump.sql`. Adjust to your own Kubernetes namespace and release name if required.
  - If you are using your own Postgres database, please build your backup according to your database documentation.
- Your values files used to deploy the chart.
  - The chart will generate some secrets if you do not provide them. To copy them to a local file, you can run `kubectl get secrets -l "app.kubernetes.io/managed-by=matrix-tools-init-secrets" -n ess -o yaml > secrets.yaml`. Adjust to your own Kubernetes namespace if required.
- The media files: Synapse stores media in a persistent volume that should be backed up. On a default K3s setup, you can find where Synapse media is stored on your node using the command `kubectl get pv -n ess -o yaml | grep synapse-media`.
- Run the `helm upgrade --install ...` command again to restore your workload's pods.
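The "stop the workloads" step here and in the restore procedure below can be done by scaling the Synapse and Matrix Authentication Service workloads down to zero replicas. A sketch, assuming the default `ess` namespace and release name and the default workload names (verify the real names first with `kubectl get deploy,sts -n ess`):

```shell
# Assumed workload names for a release named "ess"; list yours first with:
#   kubectl get deploy,sts -n ess
kubectl scale sts -n ess ess-synapse-main --replicas=0
kubectl scale deploy -n ess ess-matrix-authentication-service --replicas=0
```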
### Restore
- Recreate the namespace and the backed-up secrets.
- Redeploy the chart using your backed-up values.
- Stop the Synapse and Matrix Authentication Service workloads.
- Restore the PostgreSQL dump. If you are using the provided PostgreSQL database, this can be achieved using the following commands. Adjust to your own Kubernetes namespace and release name if required.

  ```
  # Drop newly created databases and roles
  kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP DATABASE matrixauthenticationservice'
  kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP DATABASE synapse'
  kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP ROLE synapse_user'
  kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP ROLE matrixauthenticationservice_user'
  kubectl cp dump.sql ess-postgres-0:/tmp -n ess
  kubectl exec -n ess sts/ess-postgres -- bash -c "psql -U postgres -d postgres < /tmp/dump.sql"
  ```
- Restore the Synapse media files, using `kubectl cp` to copy them into the Synapse pod. If you are using K3s, you can find where the new persistent volume has been mounted with `kubectl get pv -n ess -o yaml | grep synapse-media` and copy your files to the destination path.
- Run the `helm upgrade --install ...` command again to restore your workload's pods.
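After restoring the dump, a quick sanity check (assuming the provided PostgreSQL database and the default `ess` namespace/release) is to confirm that the `synapse` and `matrixauthenticationservice` databases and roles exist again:

```shell
# The restored databases and roles should appear in these listings.
kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c '\l'
kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c '\du'
```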
## Fixing CVE-2026-24044/ELEMENTSEC-2025-1670 manually
If you initially deployed ESS Community with the chart secrets initialization hook enabled (`initSecrets.enabled` not set to `false`) and did not set your Synapse signing key explicitly in `synapse.signingKey`, the generated signing key is vulnerable. If you later specified its content in `synapse.signingKey` in your values files, the chart cannot generate a new key automatically: you will keep using the vulnerable signing key until you change it manually.
- Install `signedjson` and `pyyaml` using `pip`:

  ```
  pip install signedjson pyyaml
  ```

- Generate your new signing key with the key id `ed25519:1`.
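  One possible way to do this is with the `signedjson` library installed above; this one-liner is a sketch, not official chart tooling. Synapse expects a single line of the form `ed25519 <version> <unpadded base64 seed>`, where the version here is `1`:

  ```shell
  python3 -c "import signedjson.key; signing_key = signedjson.key.generate_signing_key('1'); print(f'{signing_key.alg} {signing_key.version} {signedjson.key.encode_signing_key_base64(signing_key)}')"
  ```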
- Specify this value as the new secret content under `synapse.signingKey`.
- To invalidate the old signing key, you will have to construct Synapse's `old_signing_keys` configuration. Generate a throwaway verifying key using the key id `ed25519:0` with the following command:

  ```
  $ python3 -c "import yaml; import time; import signedjson.key; signing_key = signedjson.key.generate_signing_key(0); revoke_time = int(time.time()*1000); result = {\"old_signing_keys\": {\"ed25519:0\": {\"key\": signedjson.key.encode_verify_key_base64(signing_key), \"expired_ts\": revoke_time}}}; print(f\"{yaml.dump(result)}\")"
  old_signing_keys:
    ed25519:0:
      expired_ts: 1770625043432
      key: x1YFkPUwoKBnS69Yfxhpjc5Y8cd2nLPElJFdqCcJk4E
  ```
- Inject this into the Synapse additional settings in your values, under a new `synapse.additional` section:

  ```yaml
  synapse:
    additional:
      revoke_bad_signing_key.yml:
        config: |
          old_signing_keys:
            ed25519:0:
              key: <throwaway verifying key>
              expired_ts: <current ts>
  ```

  This will make sure that:
  - The old key id `ed25519:0` is no longer accepted by the federation, and because the verifying key was randomly generated during revocation, all old key signatures are invalid.
  - The new key `ed25519:1` is accepted by the federation.
- Apply the new values using `helm` and wait for Synapse to be restarted. Run the following command to check that the new signing key `ed25519:1` is now advertised properly by Synapse, and that the old key id `ed25519:0` is marked as revoked:

  ```
  curl -s https://<your synapse host>/_matrix/key/v2/server | jq
  {
    "old_verify_keys": {
      "ed25519:0": {
        "expired_ts": 1769001790846,
        "key": "tt+JkcqGzTxt..."
      }
    },
    "server_name": "<your server name>",
    "signatures": {
      "<your server name>": {
        "ed25519:1": "gahd4eeGh..."
      }
    },
    "valid_until_ts": ...,
    "verify_keys": {
      "ed25519:1": {
        "key": "BUIaPW..."
      }
    }
  }
  ```
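For a scriptable pass/fail version of this check, `jq -e` exits non-zero when its expression is false; replace the host placeholder as above (a sketch assuming `curl` and `jq` are available):

```shell
# Exits 0 only if the new key is advertised and the old key id is revoked.
curl -s "https://<your synapse host>/_matrix/key/v2/server" \
  | jq -e '(.verify_keys | has("ed25519:1")) and (.old_verify_keys | has("ed25519:0"))'
```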