Add support for Valkey and recommend it instead of Redis & KeyDB

Related to https://github.com/mother-of-all-self-hosting/mash-playbook/issues/247
Slavi Pantaleev 2024-11-23 11:33:16 +02:00
parent 1154b50555
commit 014d56958f
18 changed files with 371 additions and 284 deletions


@@ -4,11 +4,11 @@ The way this playbook is structured, each Ansible role can only be invoked once
If you need multiple instances (of whichever service), you'll need some workarounds as described below.
-The example below focuses on hosting multiple [KeyDB](services/keydb.md) instances, but you can apply it to hosting multiple instances or whole stacks of any kind.
+The example below focuses on hosting multiple [Valkey](services/valkey.md) instances, but you can apply it to hosting multiple instances or whole stacks of any kind.
-Let's say you're managing a host called `mash.example.com` which installs both [PeerTube](services/peertube.md) and [NetBox](services/netbox.md). Both of these services require a [KeyDB](services/keydb.md) instance. If you simply add `keydb_enabled: true` to your `mash.example.com` host's `vars.yml` file, you'd get a KeyDB instance (`mash-keydb`), but it's just one instance. As described in our [KeyDB](services/keydb.md) documentation, this is a security problem and potentially fragile as both services may try to read/write the same data and get in conflict with one another.
+Let's say you're managing a host called `mash.example.com` which installs both [PeerTube](services/peertube.md) and [NetBox](services/netbox.md). Both of these services require a [Valkey](services/valkey.md) instance. If you simply add `valkey_enabled: true` to your `mash.example.com` host's `vars.yml` file, you'd get a Valkey instance (`mash-valkey`), but only one. As described in our [Valkey](services/valkey.md) documentation, this is a security problem and potentially fragile, as both services may try to read/write the same data and conflict with one another.
-We propose that you **don't** add `keydb_enabled: true` to your main `mash.example.com` file, but do the following:
+We propose that you **don't** add `valkey_enabled: true` to your main `mash.example.com` file, but do the following:
## Re-do your inventory to add supplementary hosts
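The inventory changes in this section (elided from the diff) boil down to declaring several inventory hosts that all point at the same machine. A minimal sketch of `inventory/hosts` — the IP address and group layout here are assumptions for illustration; adjust to your own setup:

```ini
[mash_servers]

[mash_servers:children]
mash_example_com

[mash_example_com]
; All three inventory hosts target the same actual server
mash.example.com ansible_host=192.168.1.1
mash.example.com-netbox-deps ansible_host=192.168.1.1
mash.example.com-peertube-deps ansible_host=192.168.1.1
```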
@@ -40,7 +40,7 @@ When running Ansible commands later on, you can use the `-l` flag to limit which
## Adjust the configuration of the supplementary hosts to use a new "namespace"
-Multiple hosts targetting the same server as described above still causes conflicts, because services will use the same paths (e.g. `/mash/keydb`) and service/container names (`mash-keydb`) everywhere.
+Multiple hosts targeting the same server as described above still cause conflicts, because services will use the same paths (e.g. `/mash/valkey`) and service/container names (`mash-valkey`) everywhere.
To avoid conflicts, adjust the `vars.yml` file for the new hosts (`mash.example.com-netbox-deps` and `mash.example.com-peertube-deps`)
and set non-default and unique values in the `mash_playbook_service_identifier_prefix` and `mash_playbook_service_base_directory_name_prefix` variables. Examples below:
@@ -73,15 +73,15 @@ mash_playbook_service_base_directory_name_prefix: 'netbox-'
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
```
@@ -114,30 +114,30 @@ mash_playbook_service_base_directory_name_prefix: 'peertube-'
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
```
-The above configuration will create **2** KeyDB instances:
+The above configuration will create **2** Valkey instances:
-- `mash-netbox-keydb` with its base data path in `/mash/netbox-keydb`
-- `mash-peertube-keydb` with its base data path in `/mash/peertube-keydb`
+- `mash-netbox-valkey` with its base data path in `/mash/netbox-valkey`
+- `mash-peertube-valkey` with its base data path in `/mash/peertube-valkey`
These instances reuse the `mash` user and group and the `/mash` data path, but are not in conflict with each other.
## Adjust the configuration of the base host
-Now that we've created separate KeyDB instances for both PeerTube and NetBox, we need to put them to use by editing the `vars.yml` file of the main host (the one that installs PeerTbue and NetBox) to wire them to their KeyDB instances.
+Now that we've created separate Valkey instances for both PeerTube and NetBox, we need to put them to use by editing the `vars.yml` file of the main host (the one that installs PeerTube and NetBox) to wire them to their Valkey instances.
You'll need configuration (`inventory/host_vars/mash.example.com/vars.yml`) like this:
@@ -152,17 +152,17 @@ netbox_enabled: true
# Other NetBox configuration here
-# Point NetBox to its dedicated KeyDB instance
-netbox_environment_variable_redis_host: mash-netbox-keydb
-netbox_environment_variable_redis_cache_host: mash-netbox-keydb
+# Point NetBox to its dedicated Valkey instance
+netbox_environment_variable_redis_host: mash-netbox-valkey
+netbox_environment_variable_redis_cache_host: mash-netbox-valkey
-# Make sure the NetBox service (mash-netbox.service) starts after its dedicated KeyDB service (mash-netbox-keydb.service)
+# Make sure the NetBox service (mash-netbox.service) starts after its dedicated Valkey service (mash-netbox-valkey.service)
netbox_systemd_required_services_list_custom:
-  - mash-netbox-keydb.service
+  - mash-netbox-valkey.service
-# Make sure the NetBox container is connected to the container network of its dedicated KeyDB service (mash-netbox-keydb)
+# Make sure the NetBox container is connected to the container network of its dedicated Valkey service (mash-netbox-valkey)
netbox_container_additional_networks_custom:
-  - mash-netbox-keydb
+  - mash-netbox-valkey
########################################################################
# #
@@ -180,16 +180,16 @@ netbox_container_additional_networks_custom:
# Other PeerTube configuration here
-# Point PeerTube to its dedicated KeyDB instance
-peertube_config_redis_hostname: mash-peertube-keydb
+# Point PeerTube to its dedicated Valkey instance
+peertube_config_redis_hostname: mash-peertube-valkey
-# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated KeyDB service (mash-peertube-keydb.service)
+# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated Valkey service (mash-peertube-valkey.service)
peertube_systemd_required_services_list_custom:
-  - "mash-peertube-keydb.service"
+  - "mash-peertube-valkey.service"
-# Make sure the PeerTube container is connected to the container network of its dedicated KeyDB service (mash-peertube-keydb)
+# Make sure the PeerTube container is connected to the container network of its dedicated Valkey service (mash-peertube-valkey)
peertube_container_additional_networks_custom:
-  - "mash-peertube-keydb"
+  - "mash-peertube-valkey"
########################################################################
# #
@@ -201,9 +201,9 @@ peertube_container_additional_networks_custom:
## Questions & Answers
-**Can't I just use the same KeyDB instance for multiple services?**
+**Can't I just use the same Valkey instance for multiple services?**
-> You may or you may not. See the [KeyDB](services/keydb.md) documentation for why you shouldn't do this.
+> You may or you may not. See the [Valkey](services/valkey.md) documentation for why you shouldn't do this.
**Can't I just create one host and a separate stack for each service** (e.g. Nextcloud + all dependencies on one inventory host; PeerTube + all dependencies on another inventory host; with both inventory hosts targeting the same server)?


@@ -16,9 +16,9 @@ This service requires the following other services:
- (optional) a MySQL / [MariaDB](mariadb.md) database - if enabled for your Ansible inventory host (and you don't also enable Postgres), Authelia will be connected to the MariaDB server automatically
- or SQLite, used by default when none of the above database choices is enabled for your Ansible inventory host
-- (optional, but recommended) [KeyDB](keydb.md)
+- (optional, but recommended) [Valkey](valkey.md)
  - for storing session information in a persistent manner
-  - if KeyDB is not enabled, session information is stored in-memory and restarting Authelia destroys user sessions
+  - if Valkey is not enabled, session information is stored in-memory and restarting Authelia destroys user sessions
- a [Traefik](traefik.md) reverse-proxy server
- for serving the Authelia portal website
@@ -87,11 +87,11 @@ authelia_config_access_control_rules:
  - domain: 'service1.example.com'
    policy: one_factor
-# The configuration below connects Authelia to the KeyDB instance, for session storage purposes.
-# You may wish to run a separate KeyDB instance for Authelia, because KeyDB is not multi-tenant.
+# The configuration below connects Authelia to the Valkey instance, for session storage purposes.
+# You may wish to run a separate Valkey instance for Authelia, because Valkey is not multi-tenant.
# Read more in docs/services/valkey.md.
-# If KeyDB is not available, session data will be stored in memory and will be lost on container restart.
-authelia_config_session_redis_host: "{{ keydb_identifier if keydb_enabled else '' }}"
+# If Valkey is not available, session data will be stored in memory and will be lost on container restart.
+authelia_config_session_redis_host: "{{ valkey_identifier if valkey_enabled else '' }}"
########################################################################
# #
@@ -111,9 +111,9 @@ On the Authelia base URL, there's a portal website where you can log in and mana
### Session storage
-As mentioned in the default configuration above (see `authelia_config_session_redis_host`), you may wish to run [KeyDB](keydb.md) for storing session data.
+As mentioned in the default configuration above (see `authelia_config_session_redis_host`), you may wish to run [Valkey](valkey.md) for storing session data.
-You may wish to run a separate KeyDB instance for Authelia, because KeyDB is not multi-tenant. See [our KeyDB documentation page](keydb.md) for additional details. When running a separate instance of KeyDB, you may need to connect Authelia to the KeyDB instance's container network via the `authelia_container_additional_networks_custom` variable.
+You may wish to run a separate Valkey instance for Authelia, because Valkey is not multi-tenant. See [our Valkey documentation page](valkey.md) for additional details. When running a separate instance of Valkey, you may need to connect Authelia to the Valkey instance's container network via the `authelia_container_additional_networks_custom` variable.
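For illustration, wiring Authelia to such a dedicated instance might look like the sketch below. The instance name `mash-authelia-valkey` and the `authelia_systemd_required_services_list_custom` variable are assumptions, following the pattern the playbook's other roles use:

```yaml
# Point Authelia at a hypothetical dedicated instance
# created via the multiple-instances approach
authelia_config_session_redis_host: mash-authelia-valkey

# Start Authelia only after its session store is up (variable name assumed)
authelia_systemd_required_services_list_custom:
  - mash-authelia-valkey.service

# Join the dedicated instance's container network
authelia_container_additional_networks_custom:
  - mash-authelia-valkey
```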
### Authentication storage providers


@@ -10,7 +10,7 @@
This service requires the following other services:
- a [Postgres](postgres.md) database
-- a [KeyDB](keydb.md) data-store, installation details [below](#keydb)
+- a [Valkey](valkey.md) data-store, installation details [below](#valkey)
- a [Traefik](traefik.md) reverse-proxy server
@@ -32,7 +32,7 @@ authentik_hostname: authentik.example.com
# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
authentik_secret_key: ''
-# KeyDB configuration, as described below
+# Valkey configuration, as described below
########################################################################
# #
@@ -41,28 +41,28 @@ authentik_secret_key: ''
########################################################################
```
-### KeyDB
+### Valkey
-As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to authentik](#creating-a-keydb-instance-dedicated-to-authentik).
+As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, you'd better install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to authentik](#creating-a-valkey-instance-dedicated-to-authentik).
-If you're only running authentik on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-authentik).
+If you're only running authentik on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-authentik).
-#### Using the shared KeyDB instance for authentik
+#### Using the shared Valkey instance for authentik
-To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook authentik to it, add the following **additional** configuration:
+To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook authentik to it, add the following **additional** configuration:
```yaml
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
@@ -75,16 +75,16 @@ keydb_enabled: true
# Base configuration as shown above
-# Point authentik to the shared KeyDB instance
-authentik_config_redis_hostname: "{{ keydb_identifier }}"
+# Point authentik to the shared Valkey instance
+authentik_config_redis_hostname: "{{ valkey_identifier }}"
-# Make sure the authentik service (mash-authentik.service) starts after the shared KeyDB service (mash-keydb.service)
+# Make sure the authentik service (mash-authentik.service) starts after the shared Valkey service (mash-valkey.service)
authentik_systemd_required_services_list_custom:
-  - "{{ keydb_identifier }}.service"
+  - "{{ valkey_identifier }}.service"
-# Make sure the authentik container is connected to the container network of the shared KeyDB service (mash-keydb)
+# Make sure the authentik container is connected to the container network of the shared Valkey service (mash-valkey)
authentik_container_additional_networks_custom:
-  - "{{ keydb_identifier }}"
+  - "{{ valkey_identifier }}"
########################################################################
# #
@@ -93,12 +93,12 @@ authentik_container_additional_networks_custom:
########################################################################
```
-This will create a `mash-keydb` KeyDB instance on this host.
+This will create a `mash-valkey` Valkey instance on this host.
-This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to authentik](#creating-a-keydb-instance-dedicated-to-authentik).
+This is only recommended if you won't be installing other services which require Valkey. Alternatively, see [Creating a Valkey instance dedicated to authentik](#creating-a-valkey-instance-dedicated-to-authentik).
-#### Creating a KeyDB instance dedicated to authentik
+#### Creating a Valkey instance dedicated to authentik
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
@@ -134,20 +134,20 @@ mash_playbook_service_base_directory_name_prefix: 'authentik-'
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
```
-This will create a `mash-authentik-keydb` instance on this host with its data in `/mash/authentik-keydb`.
+This will create a `mash-authentik-valkey` instance on this host with its data in `/mash/authentik-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/authentik.example.com/vars.yml`) like this:
@@ -160,16 +160,16 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/aut
# Base configuration as shown above
-# Point authentik to its dedicated KeyDB instance
-authentik_config_redis_hostname: mash-authentik-keydb
+# Point authentik to its dedicated Valkey instance
+authentik_config_redis_hostname: mash-authentik-valkey
-# Make sure the authentik service (mash-authentik.service) starts after its dedicated KeyDB service (mash-authentik-keydb.service)
+# Make sure the authentik service (mash-authentik.service) starts after its dedicated Valkey service (mash-authentik-valkey.service)
authentik_systemd_required_services_list_custom:
-  - "mash-authentik-keydb.service"
+  - "mash-authentik-valkey.service"
-# Make sure the authentik container is connected to the container network of its dedicated KeyDB service (mash-authentik-keydb)
+# Make sure the authentik container is connected to the container network of its dedicated Valkey service (mash-authentik-valkey)
authentik_container_additional_networks_custom:
-  - "mash-authentik-keydb"
+  - "mash-authentik-valkey"
########################################################################
# #
@@ -181,7 +181,7 @@ authentik_container_additional_networks_custom:
## Installation
-If you've decided to install a dedicated KeyDB instance for authentik, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `authentik.example.com-deps`), before running installation for the main one (e.g. `authentik.example.com`).
+If you've decided to install a dedicated Valkey instance for authentik, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `authentik.example.com-deps`), before running installation for the main one (e.g. `authentik.example.com`).
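As a sketch, that ordering might look like this on the command line. The `setup.yml` entrypoint and the `setup-all,start` tags are assumptions based on the playbook's usual invocation; adjust them to your setup:

```
# First, the supplementary host that provides the dedicated Valkey instance
ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start -l authentik.example.com-deps

# Then the main host, whose services depend on it
ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start -l authentik.example.com
```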
## Usage


@@ -8,7 +8,7 @@
This service requires the following other services:
- a [Postgres](postgres.md) database
-- a [KeyDB](keydb.md) data-store, installation details [below](#keydb)
+- a [Valkey](valkey.md) data-store, installation details [below](#valkey)
- a [Traefik](traefik.md) reverse-proxy server
@@ -30,7 +30,7 @@ funkwhale_hostname: mash.example.com
# Put a strong secret below, generated with `pwgen -s 64 1` or in another way
funkwhale_django_secret_key: ''
-# KeyDB configuration, as described below
+# Valkey configuration, as described below
########################################################################
# #
@@ -39,28 +39,28 @@ funkwhale_django_secret_key: ''
########################################################################
```
-### KeyDB
+### Valkey
-As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to funkwhale](#creating-a-keydb-instance-dedicated-to-funkwhale).
+As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, you'd better install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to funkwhale](#creating-a-valkey-instance-dedicated-to-funkwhale).
-If you're only running funkwhale on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-funkwhale).
+If you're only running funkwhale on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-funkwhale).
-#### Using the shared KeyDB instance for funkwhale
+#### Using the shared Valkey instance for funkwhale
-To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook funkwhale to it, add the following **additional** configuration:
+To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook funkwhale to it, add the following **additional** configuration:
```yaml
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
@@ -73,16 +73,16 @@ keydb_enabled: true
# Base configuration as shown above
-# Point funkwhale to the shared KeyDB instance
-funkwhale_config_redis_hostname: "{{ keydb_identifier }}"
+# Point funkwhale to the shared Valkey instance
+funkwhale_config_redis_hostname: "{{ valkey_identifier }}"
# Make sure the funkwhale API service (mash-funkwhale-api.service) starts after the shared Valkey service
funkwhale_api_systemd_required_services_list_custom:
-  - "{{ keydb_identifier }}.service"
+  - "{{ valkey_identifier }}.service"
# Make sure the funkwhale API service (mash-funkwhale-api.service) is connected to the container network of the shared Valkey service
funkwhale_api_container_additional_networks_custom:
-  - "{{ keydb_container_network }}"
+  - "{{ valkey_container_network }}"
########################################################################
# #
@@ -91,12 +91,12 @@ funkwhale_api_container_additional_networks_custom:
########################################################################
```
-This will create a `mash-keydb` KeyDB instance on this host.
+This will create a `mash-valkey` Valkey instance on this host.
-This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to funkwhale](#creating-a-keydb-instance-dedicated-to-funkwhale).
+This is only recommended if you won't be installing other services which require Valkey. Alternatively, see [Creating a Valkey instance dedicated to funkwhale](#creating-a-valkey-instance-dedicated-to-funkwhale).
-#### Creating a KeyDB instance dedicated to funkwhale
+#### Creating a Valkey instance dedicated to funkwhale
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
@@ -132,20 +132,20 @@ mash_playbook_service_base_directory_name_prefix: 'funkwhale-'
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
```
-This will create a `mash-funkwhale-keydb` instance on this host with its data in `/mash/funkwhale-keydb`.
+This will create a `mash-funkwhale-valkey` instance on this host with its data in `/mash/funkwhale-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/funkwhale.example.com/vars.yml`) like this:
@@ -158,16 +158,16 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/fun
# Base configuration as shown above
-# Point funkwhale to its dedicated KeyDB instance
-funkwhale_config_redis_hostname: mash-funkwhale-keydb
+# Point funkwhale to its dedicated Valkey instance
+funkwhale_config_redis_hostname: mash-funkwhale-valkey
# Make sure the funkwhale API service (mash-funkwhale-api.service) starts after its dedicated Valkey service
funkwhale_api_systemd_required_services_list_custom:
-  - "mash-funkwhale-keydb.service"
+  - "mash-funkwhale-valkey.service"
# Make sure the funkwhale API service (mash-funkwhale-api.service) is connected to the container network of its dedicated Valkey service
funkwhale_api_container_additional_networks_custom:
-  - "mash-funkwhale-keydb"
+  - "mash-funkwhale-valkey"
########################################################################
# #
@@ -179,7 +179,7 @@ funkwhale_api_container_additional_networks_custom:
## Installation
-If you've decided to install a dedicated KeyDB instance for funkwhale, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `funkwhale.example.com-deps`), before running installation for the main one (e.g. `funkwhale.example.com`).
+If you've decided to install a dedicated Valkey instance for funkwhale, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `funkwhale.example.com-deps`), before running installation for the main one (e.g. `funkwhale.example.com`).
## Usage


@@ -8,7 +8,7 @@
This service requires the following other services:
- a [Postgres](postgres.md) database
-- a [KeyDB](keydb.md) data-store, installation details [below](#keydb)
+- a [Valkey](valkey.md) data-store, installation details [below](#valkey)
- a [Traefik](traefik.md) reverse-proxy server
@@ -34,7 +34,7 @@ lago_api_environment_variable_lago_rsa_private_key: ''
# unless you'd like to run a server with public registration enabled.
lago_front_environment_variable_lago_disable_signup: false
-# KeyDB configuration, as described below
+# Valkey configuration, as described below
########################################################################
# #
@@ -63,28 +63,28 @@ We recommend installing with public registration enabled at first, creating your
It should be noted that disabling public signup with this variable merely disables the Sign-Up page in the web interface, but [does not actually disable signups due to a Lago bug](https://github.com/getlago/lago/issues/220).
-### KeyDB
+### Valkey
-As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to Lago](#creating-a-keydb-instance-dedicated-to-lago).
+As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, you'd better install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to Lago](#creating-a-valkey-instance-dedicated-to-lago).
-If you're only running Lago on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-lago).
+If you're only running Lago on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-lago).
-#### Using the shared KeyDB instance for Lago
+#### Using the shared Valkey instance for Lago
-To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook Lago to it, add the following **additional** configuration:
+To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook Lago to it, add the following **additional** configuration:
```yaml
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
@@ -97,16 +97,16 @@ keydb_enabled: true
# Base configuration as shown above
-# Point Lago to the shared KeyDB instance
-lago_redis_hostname: "{{ keydb_identifier }}"
+# Point Lago to the shared Valkey instance
+lago_redis_hostname: "{{ valkey_identifier }}"
-# Make sure the Lago service (mash-lago.service) starts after the shared KeyDB service (mash-keydb.service)
+# Make sure the Lago service (mash-lago.service) starts after the shared Valkey service (mash-valkey.service)
lago_api_systemd_required_services_list_custom:
-  - "{{ keydb_identifier }}.service"
+  - "{{ valkey_identifier }}.service"
-# Make sure the Lago container is connected to the container network of the shared KeyDB service (mash-keydb)
+# Make sure the Lago container is connected to the container network of the shared Valkey service (mash-valkey)
lago_api_container_additional_networks_custom:
-  - "{{ keydb_identifier }}"
+  - "{{ valkey_identifier }}"
########################################################################
# #
@@ -115,11 +115,11 @@ lago_api_container_additional_networks_custom:
########################################################################
```
-This will create a `mash-keydb` KeyDB instance on this host.
+This will create a `mash-valkey` Valkey instance on this host.
-This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to Lago](#creating-a-keydb-instance-dedicated-to-lago).
+This is only recommended if you won't be installing other services which require Valkey. Alternatively, see [Creating a Valkey instance dedicated to Lago](#creating-a-valkey-instance-dedicated-to-lago).
-#### Creating a KeyDB instance dedicated to Lago
+#### Creating a Valkey instance dedicated to Lago
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
@@ -155,20 +155,20 @@ mash_playbook_service_base_directory_name_prefix: 'lago-'
########################################################################
# #
-# keydb #
+# valkey #
# #
########################################################################
-keydb_enabled: true
+valkey_enabled: true
########################################################################
# #
-# /keydb #
+# /valkey #
# #
########################################################################
```
-This will create a `mash-lago-keydb` instance on this host with its data in `/mash/lago-keydb`.
+This will create a `mash-lago-valkey` instance on this host with its data in `/mash/lago-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/lago.example.com/vars.yml`) like this:
@@ -181,16 +181,16 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/lag
# Base configuration as shown above
-# Point Lago to its dedicated KeyDB instance
-lago_redis_hostname: mash-lago-keydb
+# Point Lago to its dedicated Valkey instance
+lago_redis_hostname: mash-lago-valkey
-# Make sure the Lago service (mash-lago.service) starts after its dedicated KeyDB service (mash-lago-keydb.service)
+# Make sure the Lago service (mash-lago.service) starts after its dedicated Valkey service (mash-lago-valkey.service)
lago_api_systemd_required_services_list_custom:
-  - "mash-lago-keydb.service"
+  - "mash-lago-valkey.service"
-# Make sure the Lago container is connected to the container network of its dedicated KeyDB service (mash-lago-keydb)
+# Make sure the Lago container is connected to the container network of its dedicated Valkey service (mash-lago-valkey)
lago_api_container_additional_networks_custom:
-  - "mash-lago-keydb"
+  - "mash-lago-valkey"
########################################################################
# #


@@ -8,7 +8,7 @@
This service requires the following other services:
- a [Postgres](postgres.md) database
-- a [KeyDB](keydb.md) data-store, installation details [below](#keydb)
+- a [Valkey](valkey.md) data-store, installation details [below](#valkey)
- a [Traefik](traefik.md) reverse-proxy server
@@ -38,7 +38,7 @@ netbox_environment_variable_superuser_email: your.email@example.com
# Changing the password subsequently will not affect the user's password.
netbox_environment_variable_superuser_password: ''
# KeyDB configuration, as described below
# Valkey configuration, as described below
########################################################################
# #
@@ -60,28 +60,28 @@ If `netbox_environment_variable_superuser_*` variables are specified, NetBox will
[Single-Sign-On](#single-sign-on-sso-integration) is also supported.
### KeyDB
### Valkey
As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to NetBox](#creating-a-keydb-instance-dedicated-to-netbox).
As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, it's better to install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to NetBox](#creating-a-valkey-instance-dedicated-to-netbox).
If you're only running NetBox on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-netbox).
If you're only running NetBox on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-netbox).
#### Using the shared KeyDB instance for NetBox
#### Using the shared Valkey instance for NetBox
To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook NetBox to it, add the following **additional** configuration:
To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook NetBox to it, add the following **additional** configuration:
```yaml
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
@@ -94,17 +94,17 @@ keydb_enabled: true
# Base configuration as shown above
# Point NetBox to the shared KeyDB instance
netbox_environment_variable_redis_host: "{{ keydb_identifier }}"
netbox_environment_variable_redis_cache_host: "{{ keydb_identifier }}"
# Point NetBox to the shared Valkey instance
netbox_environment_variable_redis_host: "{{ valkey_identifier }}"
netbox_environment_variable_redis_cache_host: "{{ valkey_identifier }}"
# Make sure the NetBox service (mash-netbox.service) starts after the shared KeyDB service (mash-keydb.service)
# Make sure the NetBox service (mash-netbox.service) starts after the shared Valkey service (mash-valkey.service)
netbox_systemd_required_services_list_custom:
- "{{ keydb_identifier }}.service"
- "{{ valkey_identifier }}.service"
# Make sure the NetBox container is connected to the container network of the shared KeyDB service (mash-keydb)
# Make sure the NetBox container is connected to the container network of the shared Valkey service (mash-valkey)
netbox_container_additional_networks_custom:
- "{{ keydb_identifier }}"
- "{{ valkey_identifier }}"
########################################################################
# #
@@ -113,12 +113,12 @@ netbox_container_additional_networks_custom:
########################################################################
```
This will create a `mash-keydb` KeyDB instance on this host.
This will create a `mash-valkey` Valkey instance on this host.
This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to NetBox](#creating-a-keydb-instance-dedicated-to-netbox).
This is only recommended if you won't be installing other services which require Valkey. Alternatively, go for [Creating a Valkey instance dedicated to NetBox](#creating-a-valkey-instance-dedicated-to-netbox).
#### Creating a KeyDB instance dedicated to NetBox
#### Creating a Valkey instance dedicated to NetBox
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
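In short, that page has you define a supplementary inventory host (e.g. `netbox.example.com-deps`) whose `vars.yml` prefixes all service names before enabling Valkey. A minimal sketch — `mash_playbook_service_base_directory_name_prefix` appears in the configuration below, while `mash_playbook_service_identifier_prefix` is assumed from the linked page:

```yaml
# Prefix all services on this supplementary host, so the Valkey instance
# becomes `mash-netbox-valkey` rather than the shared `mash-valkey`.
# (`mash_playbook_service_identifier_prefix` is assumed from the
# running-multiple-instances documentation.)
mash_playbook_service_identifier_prefix: 'netbox-'
mash_playbook_service_base_directory_name_prefix: 'netbox-'
```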
@@ -154,20 +154,20 @@ mash_playbook_service_base_directory_name_prefix: 'netbox-'
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
```
This will create a `mash-netbox-keydb` instance on this host with its data in `/mash/netbox-keydb`.
This will create a `mash-netbox-valkey` instance on this host with its data in `/mash/netbox-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/netbox.example.com/vars.yml`) like this:
@@ -181,17 +181,17 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/net
# Base configuration as shown above
# Point NetBox to its dedicated KeyDB instance
netbox_environment_variable_redis_host: mash-netbox-keydb
netbox_environment_variable_redis_cache_host: mash-netbox-keydb
# Point NetBox to its dedicated Valkey instance
netbox_environment_variable_redis_host: mash-netbox-valkey
netbox_environment_variable_redis_cache_host: mash-netbox-valkey
# Make sure the NetBox service (mash-netbox.service) starts after its dedicated KeyDB service (mash-netbox-keydb.service)
# Make sure the NetBox service (mash-netbox.service) starts after its dedicated Valkey service (mash-netbox-valkey.service)
netbox_systemd_required_services_list_custom:
- "mash-netbox-keydb.service"
- "mash-netbox-valkey.service"
# Make sure the NetBox container is connected to the container network of its dedicated KeyDB service (mash-netbox-keydb)
# Make sure the NetBox container is connected to the container network of its dedicated Valkey service (mash-netbox-valkey)
netbox_container_additional_networks_custom:
- "mash-netbox-keydb"
- "mash-netbox-valkey"
########################################################################
# #
@@ -257,7 +257,7 @@ For additional environment variables controlling groups and permissions for new
## Installation
If you've decided to install a dedicated KeyDB instance for NetBox, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `netbox.example.com-deps`), before running installation for the main one (e.g. `netbox.example.com`).
If you've decided to install a dedicated Valkey instance for NetBox, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `netbox.example.com-deps`), before running installation for the main one (e.g. `netbox.example.com`).
## Usage

View file

@@ -9,7 +9,7 @@ This service requires the following other services:
- a [Postgres](postgres.md) database
- a [Traefik](traefik.md) reverse-proxy server
- (optional) a [KeyDB](keydb.md) data-store, installation details [below](#keydb)
- (optional) a [Valkey](valkey.md) data-store, installation details [below](#valkey)
- (optional) the [exim-relay](exim-relay.md) mailer
@@ -29,7 +29,7 @@ nextcloud_enabled: true
nextcloud_hostname: mash.example.com
nextcloud_path_prefix: /nextcloud
# KeyDB configuration, as described below
# Valkey configuration, as described below
########################################################################
# #
@@ -42,50 +42,50 @@ In the example configuration above, we configure the service to be hosted at `ht
You can remove the `nextcloud_path_prefix` variable definition, to make it default to `/`, so that the service is served at `https://mash.example.com/`.
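For example, a root-path variant of the base configuration above would be (same variables, prefix omitted):

```yaml
nextcloud_enabled: true

nextcloud_hostname: mash.example.com
# With no `nextcloud_path_prefix` defined, Nextcloud is served at `https://mash.example.com/`
```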
### KeyDB
### Valkey
KeyDB can **optionally** be enabled to improve Nextcloud performance.
It's dubious whether using using KeyDB helps much, so we recommend that you **start without** it, for a simpler deployment.
Valkey can **optionally** be enabled to improve Nextcloud performance.
It's dubious whether using Valkey helps much, so we recommend that you **start without** it, for a simpler deployment.
To learn more, read the [Memory caching](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html) section of the Nextcloud documentation.
As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to Nextcloud](#creating-a-keydb-instance-dedicated-to-nextcloud).
As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, it's better to install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to Nextcloud](#creating-a-valkey-instance-dedicated-to-nextcloud).
If you're only running Nextcloud on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-nextcloud).
If you're only running Nextcloud on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-nextcloud).
**Regardless** of the method of installing KeyDB, you may need to adjust your Nextcloud configuration file (e.g. `/mash/nextcloud/data/config/config.php`) to **add** this:
**Regardless** of the method of installing Valkey, you may need to adjust your Nextcloud configuration file (e.g. `/mash/nextcloud/data/config/config.php`) to **add** this:
```php
'memcache.distributed' => '\OC\Memcache\KeyDB',
'memcache.locking' => '\OC\Memcache\KeyDB',
'keydb' => [
'host' => 'REDIS_HOSTNAME_HERE',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
'host' => 'VALKEY_HOSTNAME_HERE',
'port' => 6379,
],
```
Where `REDIS_HOSTNAME_HERE` is to be replaced with:
Where `VALKEY_HOSTNAME_HERE` is to be replaced with:
- `mash-nextcloud-keydb`, when [Creating a KeyDB instance dedicated to Nextcloud](#creating-a-keydb-instance-dedicated-to-nextcloud)
- `mash-keydb`, when [using a single KeyDB instance](#using-the-shared-keydb-instance-for-nextcloud).
- `mash-nextcloud-valkey`, when [Creating a Valkey instance dedicated to Nextcloud](#creating-a-valkey-instance-dedicated-to-nextcloud)
- `mash-valkey`, when [using a single Valkey instance](#using-the-shared-valkey-instance-for-nextcloud).
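For instance, in the dedicated-instance case the fragment above becomes (hostname substituted, everything else unchanged):

```php
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => 'mash-nextcloud-valkey',
    'port' => 6379,
],
```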
#### Using the shared KeyDB instance for Nextcloud
#### Using the shared Valkey instance for Nextcloud
To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook Nextcloud to it, add the following **additional** configuration:
To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook Nextcloud to it, add the following **additional** configuration:
```yaml
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
@@ -98,16 +98,16 @@ keydb_enabled: true
# Base configuration as shown above
# Point Nextcloud to the shared KeyDB instance
nextcloud_redis_hostname: "{{ keydb_identifier }}"
# Point Nextcloud to the shared Valkey instance
nextcloud_redis_hostname: "{{ valkey_identifier }}"
# Make sure the Nextcloud service (mash-nextcloud.service) starts after the shared KeyDB service (mash-keydb.service)
# Make sure the Nextcloud service (mash-nextcloud.service) starts after the shared Valkey service (mash-valkey.service)
nextcloud_systemd_required_services_list_custom:
- "{{ keydb_identifier }}.service"
- "{{ valkey_identifier }}.service"
# Make sure the Nextcloud container is connected to the container network of the shared KeyDB service (mash-keydb)
# Make sure the Nextcloud container is connected to the container network of the shared Valkey service (mash-valkey)
nextcloud_container_additional_networks_custom:
- "{{ keydb_identifier }}"
- "{{ valkey_identifier }}"
########################################################################
# #
@@ -115,11 +115,11 @@ nextcloud_container_additional_networks_custom:
# #
########################################################################
```
This will create a `mash-keydb` KeyDB instance on this host.
This will create a `mash-valkey` Valkey instance on this host.
This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to Nextcloud](#creating-a-keydb-instance-dedicated-to-nextcloud).
This is only recommended if you won't be installing other services which require Valkey. Alternatively, go for [Creating a Valkey instance dedicated to Nextcloud](#creating-a-valkey-instance-dedicated-to-nextcloud).
#### Creating a KeyDB instance dedicated to Nextcloud
#### Creating a Valkey instance dedicated to Nextcloud
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
@@ -155,20 +155,20 @@ mash_playbook_service_base_directory_name_prefix: 'nextcloud-'
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
```
This will create a `mash-nextcloud-keydb` instance on this host with its data in `/mash/nextcloud-keydb`.
This will create a `mash-nextcloud-valkey` instance on this host with its data in `/mash/nextcloud-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/nextcloud.example.com/vars.yml`) like this:
@@ -181,16 +181,16 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/nex
# Base configuration as shown above
# Point Nextcloud to its dedicated KeyDB instance
nextcloud_redis_hostname: mash-nextcloud-keydb
# Point Nextcloud to its dedicated Valkey instance
nextcloud_redis_hostname: mash-nextcloud-valkey
# Make sure the Nextcloud service (mash-nextcloud.service) starts after its dedicated KeyDB service (mash-nextcloud-keydb.service)
# Make sure the Nextcloud service (mash-nextcloud.service) starts after its dedicated Valkey service (mash-nextcloud-valkey.service)
nextcloud_systemd_required_services_list_custom:
- "mash-nextcloud-keydb.service"
- "mash-nextcloud-valkey.service"
# Make sure the Nextcloud container is connected to the container network of its dedicated KeyDB service (mash-nextcloud-keydb)
# Make sure the Nextcloud container is connected to the container network of its dedicated Valkey service (mash-nextcloud-valkey)
nextcloud_container_additional_networks_custom:
- "mash-nextcloud-keydb"
- "mash-nextcloud-valkey"
########################################################################
# #
@@ -230,7 +230,7 @@ nextcloud_container_image_customizations_samba_enabled: true
## Installation
If you've decided to install a dedicated KeyDB instance for Nextcloud, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `nextcloud.example.com-deps`), before running installation for the main one (e.g. `nextcloud.example.com`).
If you've decided to install a dedicated Valkey instance for Nextcloud, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `nextcloud.example.com-deps`), before running installation for the main one (e.g. `nextcloud.example.com`).
## Usage

View file

@@ -34,28 +34,28 @@ notfellchen_hostname: notfellchen.example.com
########################################################################
```
### KeyDB
### Valkey
As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to notfellchen](#creating-a-keydb-instance-dedicated-to-notfellchen).
As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, it's better to install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to notfellchen](#creating-a-valkey-instance-dedicated-to-notfellchen).
If you're only running notfellchen on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-notfellchen).
If you're only running notfellchen on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-notfellchen).
#### Using the shared KeyDB instance for notfellchen
#### Using the shared Valkey instance for notfellchen
To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook notfellchen to it, add the following **additional** configuration:
To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook notfellchen to it, add the following **additional** configuration:
```yaml
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
@@ -68,16 +68,16 @@ keydb_enabled: true
# Base configuration as shown above
# Point notfellchen to the shared KeyDB instance
notfellchen_config_redis_hostname: "{{ keydb_identifier }}"
# Point notfellchen to the shared Valkey instance
notfellchen_config_redis_hostname: "{{ valkey_identifier }}"
# Make sure the notfellchen API service (mash-notfellchen.service) starts after the shared Valkey service
notfellchen_api_systemd_required_services_list_custom:
- "{{ keydb_identifier }}.service"
- "{{ valkey_identifier }}.service"
# Make sure the notfellchen API service (mash-notfellchen.service) is connected to the container network of the shared Valkey service
notfellchen_container_additional_networks_custom:
- "{{ keydb_container_network }}"
- "{{ valkey_container_network }}"
########################################################################
# #
@@ -86,12 +86,12 @@ notfellchen_container_additional_networks_custom:
########################################################################
```
This will create a `mash-keydb` KeyDB instance on this host.
This will create a `mash-valkey` Valkey instance on this host.
This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to notfellchen](#creating-a-keydb-instance-dedicated-to-notfellchen).
This is only recommended if you won't be installing other services which require Valkey. Alternatively, go for [Creating a Valkey instance dedicated to notfellchen](#creating-a-valkey-instance-dedicated-to-notfellchen).
#### Creating a KeyDB instance dedicated to notfellchen
#### Creating a Valkey instance dedicated to notfellchen
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
@@ -127,20 +127,20 @@ mash_playbook_service_base_directory_name_prefix: 'notfellchen-'
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
```
This will create a `mash-notfellchen-keydb` instance on this host with its data in `/mash/notfellchen-keydb`.
This will create a `mash-notfellchen-valkey` instance on this host with its data in `/mash/notfellchen-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/notfellchen.example.com/vars.yml`) like this:
@@ -153,16 +153,16 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/not
# Base configuration as shown above
# Point notfellchen to its dedicated KeyDB instance
notfellchen_config_redis_hostname: mash-notfellchen-keydb
# Point notfellchen to its dedicated Valkey instance
notfellchen_config_redis_hostname: mash-notfellchen-valkey
# Make sure the notfellchen service (mash-notfellchen.service) starts after its dedicated Valkey service
notfellchen_systemd_required_services_list_custom:
- "mash-notfellchen-keydb.service"
- "mash-notfellchen-valkey.service"
# Make sure the notfellchen service (mash-notfellchen.service) is connected to the container network of its dedicated Valkey service
notfellchen_api_container_additional_networks_custom:
- "mash-notfellchen-keydb"
- "mash-notfellchen-valkey"
########################################################################
# #

View file

@@ -8,7 +8,7 @@
This service requires the following other services:
- [Postgres](postgres.md)
- [KeyDB](keydb.md)
- [Valkey](valkey.md)
- a [Traefik](traefik.md) reverse-proxy server
@@ -30,14 +30,14 @@ outline_hostname: outline.example.com
# This must be generated with: `openssl rand -hex 32`
outline_environment_variable_secret_key: ''
# The configuration below connects Outline to the KeyDB instance, for session storage purposes.
# You may wish to run a separate KeyDB instance for Outline, because KeyDB is not multi-tenant.
# Read more in docs/services/keydb.md.
outline_redis_hostname: "{{ keydb_identifier if keydb_enabled else '' }}"
# The configuration below connects Outline to the Valkey instance, for session storage purposes.
# You may wish to run a separate Valkey instance for Outline, because Valkey is not multi-tenant.
# Read more in docs/services/valkey.md.
outline_redis_hostname: "{{ valkey_identifier if valkey_enabled else '' }}"
outline_container_additional_networks_custom: |
{{
[keydb_container_network]
[valkey_container_network]
}}
# By default, files are stored locally.

View file

@@ -9,7 +9,7 @@
This service requires the following other services:
- a [Postgres](postgres.md) database
- a [KeyDB](keydb.md) data-store, installation details [below](#keydb)
- a [Valkey](valkey.md) data-store, installation details [below](#valkey)
- a [Traefik](traefik.md) reverse-proxy server
@@ -33,7 +33,7 @@ paperless_hostname: paperless.example.org
# paperless_admin_user: USERNAME
# paperless_admin_password: SECURE_PASSWORD
# KeyDB configuration, as described below
# Valkey configuration, as described below
########################################################################
# #
@@ -42,28 +42,28 @@ paperless_hostname: paperless.example.org
########################################################################
```
### KeyDB
### Valkey
As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to paperless-ngx](#creating-a-keydb-instance-dedicated-to-paperless-ngx).
As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, it's better to install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to paperless-ngx](#creating-a-valkey-instance-dedicated-to-paperless-ngx).
If you're only running paperless-ngx on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-paperless).
If you're only running paperless-ngx on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-paperless).
#### Using the shared KeyDB instance for paperless-ngx
#### Using the shared Valkey instance for paperless-ngx
To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook paperless to it, add the following **additional** configuration:
To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook paperless to it, add the following **additional** configuration:
```yaml
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
@@ -76,16 +76,16 @@ keydb_enabled: true
# Base configuration as shown above
# Point paperless to the shared KeyDB instance
paperless_redis_hostname: "{{ keydb_identifier }}"
# Point paperless to the shared Valkey instance
paperless_redis_hostname: "{{ valkey_identifier }}"
# Make sure the paperless service (mash-paperless.service) starts after the shared KeyDB service (mash-keydb.service)
# Make sure the paperless service (mash-paperless.service) starts after the shared Valkey service (mash-valkey.service)
paperless_systemd_required_services_list_custom:
- "{{ keydb_identifier }}.service"
- "{{ valkey_identifier }}.service"
# Make sure the paperless container is connected to the container network of the shared KeyDB service (mash-keydb)
# Make sure the paperless container is connected to the container network of the shared Valkey service (mash-valkey)
paperless_container_additional_networks_custom:
- "{{ keydb_identifier }}"
- "{{ valkey_identifier }}"
########################################################################
# #
@@ -94,12 +94,12 @@ paperless_container_additional_networks_custom:
########################################################################
```
This will create a `mash-keydb` KeyDB instance on this host.
This will create a `mash-valkey` Valkey instance on this host.
This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to paperless-ngx](#creating-a-keydb-instance-dedicated-to-paperless-ngx).
This is only recommended if you won't be installing other services which require Valkey. Alternatively, go for [Creating a Valkey instance dedicated to paperless-ngx](#creating-a-valkey-instance-dedicated-to-paperless-ngx).
#### Creating a KeyDB instance dedicated to paperless
#### Creating a Valkey instance dedicated to paperless
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
@@ -135,20 +135,20 @@ mash_playbook_service_base_directory_name_prefix: 'paperless-'
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
```
This will create a `mash-paperless-keydb` instance on this host with its data in `/mash/paperless-keydb`.
This will create a `mash-paperless-valkey` instance on this host with its data in `/mash/paperless-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/paperless.example.org/vars.yml`) like this:
@@ -161,16 +161,16 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/pap
# Base configuration as shown above
# Point paperless to its dedicated KeyDB instance
paperless_redis_hostname: mash-paperless-keydb
# Point paperless to its dedicated Valkey instance
paperless_redis_hostname: mash-paperless-valkey
# Make sure the paperless service (mash-paperless.service) starts after its dedicated KeyDB service (mash-paperless-keydb.service)
# Make sure the paperless service (mash-paperless.service) starts after its dedicated Valkey service (mash-paperless-valkey.service)
paperless_systemd_required_services_list_custom:
- "mash-paperless-keydb.service"
- "mash-paperless-valkey.service"
# Make sure the paperless container is connected to the container network of its dedicated KeyDB service (mash-paperless-keydb)
# Make sure the paperless container is connected to the container network of its dedicated Valkey service (mash-paperless-valkey)
paperless_container_additional_networks_custom:
- "mash-paperless-keydb"
- "mash-paperless-valkey"
########################################################################
# #
@@ -182,7 +182,7 @@ paperless_container_additional_networks_custom:
## Installation
If you've decided to install a dedicated KeyDB instance for paperless, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `paperless.example.org-deps`), before running installation for the main one (e.g. `paperless.example.org`).
If you've decided to install a dedicated Valkey instance for paperless, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `paperless.example.org-deps`), before running installation for the main one (e.g. `paperless.example.org`).
## Usage

View file

@@ -8,7 +8,7 @@
This service requires the following other services:
- a [Postgres](postgres.md) database
- a [KeyDB](keydb.md) data-store, installation details [below](#keydb)
- a [Valkey](valkey.md) data-store, installation details [below](#valkey)
- a [Traefik](traefik.md) reverse-proxy server
@@ -47,7 +47,7 @@ peertube_config_root_user_initial_password: ''
# Then, replace the example IP range below, and re-run the playbook.
# peertube_trusted_proxies_values_custom: ["172.21.0.0/16"]
# KeyDB configuration, as described below
# Valkey configuration, as described below
########################################################################
# #
@@ -60,28 +60,28 @@ In the example configuration above, we configure the service to be hosted at `ht
Hosting PeerTube under a subpath (by configuring the `peertube_path_prefix` variable) does not seem to be possible right now, due to PeerTube limitations.
### KeyDB
### Valkey
As described on the [KeyDB](keydb.md) documentation page, if you're hosting additional services which require KeyDB on the same server, you'd better go for installing a separate KeyDB instance for each service. See [Creating a KeyDB instance dedicated to PeerTube](#creating-a-keydb-instance-dedicated-to-peertube).
As described on the [Valkey](valkey.md) documentation page, if you're hosting additional services which require Valkey on the same server, it's better to install a separate Valkey instance for each service. See [Creating a Valkey instance dedicated to PeerTube](#creating-a-valkey-instance-dedicated-to-peertube).
If you're only running PeerTube on this server and don't need to use KeyDB for anything else, you can [use a single KeyDB instance](#using-the-shared-keydb-instance-for-peertube).
If you're only running PeerTube on this server and don't need to use Valkey for anything else, you can [use a single Valkey instance](#using-the-shared-valkey-instance-for-peertube).
#### Using the shared KeyDB instance for PeerTube
#### Using the shared Valkey instance for PeerTube
To install a single (non-dedicated) KeyDB instance (`mash-keydb`) and hook PeerTube to it, add the following **additional** configuration:
To install a single (non-dedicated) Valkey instance (`mash-valkey`) and hook PeerTube to it, add the following **additional** configuration:
```yaml
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
@ -94,16 +94,16 @@ keydb_enabled: true
# Base configuration as shown above
# Point PeerTube to the shared KeyDB instance
peertube_config_redis_hostname: "{{ keydb_identifier }}"
# Point PeerTube to the shared Valkey instance
peertube_config_redis_hostname: "{{ valkey_identifier }}"
# Make sure the PeerTube service (mash-peertube.service) starts after the shared KeyDB service (mash-keydb.service)
# Make sure the PeerTube service (mash-peertube.service) starts after the shared Valkey service (mash-valkey.service)
peertube_systemd_required_services_list_custom:
- "{{ keydb_identifier }}.service"
- "{{ valkey_identifier }}.service"
# Make sure the PeerTube container is connected to the container network of the shared KeyDB service (mash-keydb)
# Make sure the PeerTube container is connected to the container network of the shared Valkey service (mash-valkey)
peertube_container_additional_networks_custom:
- "{{ keydb_identifier }}"
- "{{ valkey_identifier }}"
########################################################################
# #
@ -112,12 +112,12 @@ peertube_container_additional_networks_custom:
########################################################################
```
This will create a `mash-keydb` KeyDB instance on this host.
This will create a `mash-valkey` Valkey instance on this host.
This is only recommended if you won't be installing other services which require KeyDB. Alternatively, go for [Creating a KeyDB instance dedicated to PeerTube](#creating-a-keydb-instance-dedicated-to-peertube).
This is only recommended if you won't be installing other services which require Valkey. Alternatively, go for [Creating a Valkey instance dedicated to PeerTube](#creating-a-valkey-instance-dedicated-to-peertube).
#### Creating a KeyDB instance dedicated to PeerTube
#### Creating a Valkey instance dedicated to PeerTube
The following instructions are based on the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation.
@ -153,20 +153,20 @@ mash_playbook_service_base_directory_name_prefix: 'peertube-'
########################################################################
# #
# keydb #
# valkey #
# #
########################################################################
keydb_enabled: true
valkey_enabled: true
########################################################################
# #
# /keydb #
# /valkey #
# #
########################################################################
```
This will create a `mash-peertube-keydb` instance on this host with its data in `/mash/peertube-keydb`.
This will create a `mash-peertube-valkey` instance on this host with its data in `/mash/peertube-valkey`.
Then, adjust your main inventory host's variables file (`inventory/host_vars/peertube.example.com/vars.yml`) like this:
@ -179,16 +179,16 @@ Then, adjust your main inventory host's variables file (`inventory/host_vars/pee
# Base configuration as shown above
# Point PeerTube to its dedicated KeyDB instance
peertube_config_redis_hostname: mash-peertube-keydb
# Point PeerTube to its dedicated Valkey instance
peertube_config_redis_hostname: mash-peertube-valkey
# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated KeyDB service (mash-peertube-keydb.service)
# Make sure the PeerTube service (mash-peertube.service) starts after its dedicated Valkey service (mash-peertube-valkey.service)
peertube_systemd_required_services_list_custom:
- "mash-peertube-keydb.service"
- "mash-peertube-valkey.service"
# Make sure the PeerTube container is connected to the container network of its dedicated KeyDB service (mash-peertube-keydb)
# Make sure the PeerTube container is connected to the container network of its dedicated Valkey service (mash-peertube-valkey)
peertube_container_additional_networks_custom:
- "mash-peertube-keydb"
- "mash-peertube-valkey"
########################################################################
# #
@ -200,7 +200,7 @@ peertube_container_additional_networks_custom:
## Installation
If you've decided to install a dedicated KeyDB instance for PeerTube, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `peertube.example.com-deps`), before running installation for the main one (e.g. `peertube.example.com`).
If you've decided to install a dedicated Valkey instance for PeerTube, make sure to first do [installation](../installing.md) for the supplementary inventory host (e.g. `peertube.example.com-deps`), before running installation for the main one (e.g. `peertube.example.com`).
## Usage


@ -2,7 +2,7 @@
[Redis](https://redis.io/) is an open source, in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker.
We used to used to advocate for using Redis, but since [Redis is now "source available"](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) we recommend that you use [KeyDB](keydb.md) instead. KeyDB is compatible with Redis, so switching should be straightforward. You can learn more about the switch from Redis to KeyDB in [this changelog entry](https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/50813c600db1c47b1f3e76707b81fe05d6c46ef5/CHANGELOG.md#backward-compatibility-break-the-playbook-now-defaults-to-keydb-instead-of-redis) for [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy).
We used to advocate for using Redis, but since [Redis is now "source available"](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) we recommend that you use [Valkey](valkey.md) instead. Valkey is compatible with Redis, so switching should be straightforward. You can learn more about the switch away from Redis in [this changelog entry](https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/50813c600db1c47b1f3e76707b81fe05d6c46ef5/CHANGELOG.md#backward-compatibility-break-the-playbook-now-defaults-to-valkey-instead-of-redis) for [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy).

Since 2024-11-23, we recommend [Valkey](valkey.md) instead of [KeyDB](./keydb.md).
Some of the services installed by this playbook require a Redis data store.

docs/services/valkey.md Normal file

@ -0,0 +1,39 @@
# Valkey
[Valkey](https://valkey.io/) is a flexible distributed key-value datastore that is optimized for caching and other realtime workloads.
We used to advocate for using [Redis](redis.md), but since [Redis is now "source available"](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) we recommend that you use Valkey instead. Valkey is compatible with Redis, so switching should be straightforward. You can learn more about the switch away from Redis in [this changelog entry](https://github.com/spantaleev/matrix-docker-ansible-deploy/blob/50813c600db1c47b1f3e76707b81fe05d6c46ef5/CHANGELOG.md#backward-compatibility-break-the-playbook-now-defaults-to-valkey-instead-of-redis) for [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy). Compared to [KeyDB](./keydb.md), Valkey is likely better maintained and more compatible with Redis (see [this issue](https://github.com/mother-of-all-self-hosting/mash-playbook/issues/247)).
Some of the services installed by this playbook require a Valkey data store.
**Warning**: Because Valkey is not as flexible as [Postgres](postgres.md) when it comes to authentication and data separation, it's **recommended that you run separate Valkey instances** (one for each service). Valkey supports multiple databases and a [SELECT](https://valkey.io/commands/select/) command for switching between them. However, **reusing the same Valkey instance is not good enough** because:
- if all services use the same Valkey instance and database (id = 0), services may conflict with one another
- the number of databases is limited to [16 by default](https://github.com/valkey-io/valkey/blob/33f42d7fb597ce28040f184ee57ed86d6f6ffbd8/valkey.conf#L396), which may or may not be enough; this is solvable with configuration changes
- some services do not support switching the Valkey database and always insist on using the default one (id = 0)
- Valkey [does not support different authentication credentials for its different databases](https://stackoverflow.com/a/37262596), so each service can potentially read and modify other services' data
If you're only hosting a single service (like [PeerTube](peertube.md) or [NetBox](netbox.md)) on your server, you can get away with running a single instance. If you're hosting multiple services, you should prepare separate instances for each service.
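If you nevertheless decide to share one instance across services that each use a distinct database id, the default 16-database limit mentioned above can be raised in the server configuration. A minimal sketch (the `databases` directive is from `valkey.conf`; the value `32` is an arbitrary example, and separate instances remain the recommended setup):

```
# valkey.conf — raise the number of logical databases (default: 16)
databases 32
```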
## Configuration
To enable this service, add the following configuration to your `vars.yml` file and re-run the [installation](../installing.md) process to **host a single instance of the Valkey service**:
```yaml
########################################################################
# #
# valkey #
# #
########################################################################
valkey_enabled: true
########################################################################
# #
# /valkey #
# #
########################################################################
```
To **host multiple instances of the Valkey service**, follow the [Running multiple instances of the same service on the same host](../running-multiple-instances.md) documentation or the **Valkey** section (if available) of the service you're installing.


@ -86,6 +86,7 @@
| [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) | An open source server agent to help you collect metrics from your stacks, sensors, and systems. | [Link](services/telegraf.md) |
| [Traefik](https://doc.traefik.io/traefik/) | A container-aware reverse-proxy server | [Link](services/traefik.md) |
| [Uptime-kuma](https://uptime.kuma.pet/) | A fancy self-hosted monitoring tool | [Link](services/uptime-kuma.md) |
| [Valkey](https://valkey.io/) | A flexible distributed key-value datastore that is optimized for caching and other realtime workloads. | [Link](services/valkey.md) |
| [Vaultwarden](https://github.com/dani-garcia/vaultwarden) | A lightweight unofficial and compatible implementation of the [Bitwarden](https://bitwarden.com/) password manager | [Link](services/vaultwarden.md) |
| [Versatiles](https://versatiles.org) | A free stack for generating and serving vector tiles from OpenStreetMap. | [Link](services/versatiles.md) |
| [Wetty](https://github.com/butlerx/wetty) | An SSH terminal over HTTP/HTTPS | [Link](services/wetty.md) |


@ -123,6 +123,13 @@ install-service service *extra_args:
# Runs the playbook with --tags=setup-all,start and optional arguments
setup-all *extra_args: (run-tags "setup-all,start" extra_args)
# Runs setup tasks for a single service
setup-service service *extra_args:
just --justfile {{ justfile() }} run \
--tags=setup-{{ service }},start-group \
--extra-vars=group={{ service }} \
--extra-vars=devture_systemd_service_manager_service_restart_mode=one-by-one {{ extra_args }}
# Runs the playbook with the given list of arguments
run +extra_args: _requirements-yml _setup-yml _group-vars-mash-servers
ansible-playbook -i inventory/hosts setup.yml {{ extra_args }}


@ -628,6 +628,11 @@ mash_playbook_devture_systemd_service_manager_services_list_auto_itemized:
{{ ({'name': (telegraf_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'telegraf']} if telegraf_enabled else omit) }}
# /role-specific:telegraf
# role-specific:valkey
- |-
{{ ({'name': (valkey_identifier + '.service'), 'priority': 750, 'groups': ['mash', 'valkey']} if valkey_enabled else omit) }}
# /role-specific:valkey
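The `priority` values in the itemized list above determine startup ordering: lower numbers start earlier, so a data store like Valkey (750) is up before the application services that depend on it (typically 2000). A toy illustration of that ordering, with hypothetical service names and priorities mirroring the playbook's convention:

```python
# Hypothetical services list; lower `priority` starts first,
# so the Valkey data store is running before PeerTube.
services = [
    {"name": "mash-peertube.service", "priority": 2000},
    {"name": "mash-valkey.service", "priority": 750},
    {"name": "mash-postgres.service", "priority": 500},
]

# Sort by priority to obtain the startup order.
start_order = [s["name"] for s in sorted(services, key=lambda s: s["priority"])]
print(start_order)
# → ['mash-postgres.service', 'mash-valkey.service', 'mash-peertube.service']
```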
# role-specific:vaultwarden
- |-
{{ ({'name': (vaultwarden_identifier + '.service'), 'priority': 2000, 'groups': ['mash', 'vaultwarden', 'vaultwarden-server']} if vaultwarden_enabled else omit) }}
@ -5421,6 +5426,33 @@ telegraf_systemd_required_services_list: |
# role-specific:valkey
########################################################################
# #
# valkey #
# #
########################################################################
valkey_enabled: false
valkey_identifier: "{{ mash_playbook_service_identifier_prefix }}valkey"
valkey_uid: "{{ mash_playbook_uid }}"
valkey_gid: "{{ mash_playbook_gid }}"
valkey_base_path: "{{ mash_playbook_base_path }}/{{ mash_playbook_service_base_directory_name_prefix }}valkey"
valkey_arch: "{{ mash_playbook_architecture }}"
########################################################################
# #
# /valkey #
# #
########################################################################
# /role-specific:valkey
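The identifier and base-path defaults above compose the playbook-wide prefix variables, which is what makes the multiple-instances setup work. A rough sketch of that composition with hypothetical values taken from the dedicated-PeerTube example (matching the `mash-peertube-valkey` instance with data in `/mash/peertube-valkey`):

```python
# Hypothetical prefix values, as set in the supplementary host's vars.yml.
mash_playbook_service_identifier_prefix = "mash-peertube-"
mash_playbook_service_base_directory_name_prefix = "peertube-"
mash_playbook_base_path = "/mash"

# Mirrors the Jinja2 templates for valkey_identifier and valkey_base_path.
valkey_identifier = f"{mash_playbook_service_identifier_prefix}valkey"
valkey_base_path = (
    f"{mash_playbook_base_path}/"
    f"{mash_playbook_service_base_directory_name_prefix}valkey"
)

print(valkey_identifier)  # → mash-peertube-valkey
print(valkey_base_path)   # → /mash/peertube-valkey
```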
# role-specific:vaultwarden
########################################################################
# #


@ -395,6 +395,10 @@
version: v1.23.15-1
name: uptime_kuma
activation_prefix: uptime_kuma_
- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-valkey.git
version: v8.0.1-0
name: valkey
activation_prefix: valkey_
- src: git+https://github.com/mother-of-all-self-hosting/ansible-role-vaultwarden.git
version: v1.32.5-1
name: vaultwarden


@ -399,6 +399,10 @@
- role: galaxy/telegraf
# /role-specific:telegraf
# role-specific:valkey
- role: galaxy/valkey
# /role-specific:valkey
# role-specific:vaultwarden
- role: galaxy/vaultwarden
# /role-specific:vaultwarden