♻️ big docs refactoring

This commit is contained in:
HitLuca 2022-07-25 22:18:57 +02:00
parent 5b9fa8bd9c
commit dda3f0246e
92 changed files with 400 additions and 738 deletions

View file

@ -10,7 +10,7 @@ Contributions of every kind have far-ranging consequences. Just as your work dep
## Patient
Asynchronous communication can come with its own frustrations, even in the most responsive of communities. Please remember that our community is largely built on volunteered time, and that questions, contributions, and requests for support may take some time to receive a response. Repeated “bumps” or “reminders” in rapid succession are not good displays of patience. Additionally, it is considered poor manners to ping a specific person with general questions. Pose your question to the community as a whole, and wait patiently for a response.
Asynchronous communication can come with its own frustrations, even in the most responsive of communities. Please remember that our community is largely built on volunteered time, and that questions, contributions, and requests for support may take some time to receive a response. Repeated "bumps" or "reminders" in rapid succession are not good displays of patience. Additionally, it is considered poor manners to ping a specific person with general questions. Pose your question to the community as a whole, and wait patiently for a response.
## Respectful

View file

@ -25,7 +25,7 @@ A typical new application PR will include 2 new files (`docs/applications/applic
* Don't mess with line endings, or tabs vs. spaces.
* Please know that your efforts are appreciated, thanks! :+1:
# Development Environment
## Development Environment
* Development of Ansible-NAS is carried out in [Visual Studio Code](https://code.visualstudio.com/) - you'll get some nice
recommended extensions and task setups if you do the same.

View file

@ -1,10 +1,11 @@
# Ansible NAS
[![CI](https://github.com/davestephens/ansible-nas/workflows/CI/badge.svg)](https://github.com/davestephens/ansible-nas/actions?query=workflow%3ACI) [![Gitter chat](https://img.shields.io/gitter/room/ansible-nas/chat.svg?logo=gitter&style=flat-square)](https://gitter.im/Ansible-NAS/Chat) [![license](https://img.shields.io/github/license/DAVFoundation/api_doc.svg?style=flat-square)](https://github.com/davestephens/ansible-nas/blob/master/LICENSE) [![Ko-fi](https://img.shields.io/static/v1.svg?label=ko-fi&message=Buy%20Me%20A%20Coffee&color=orange&style=flat-square&logo=buy-me-a-coffee)](https://ko-fi.com/davestephens)
[![CI](https://github.com/davestephens/ansible-nas/workflows/CI/badge.svg)](https://github.com/davestephens/ansible-nas/actions?query=workflow%3ACI)
[![Gitter chat](https://img.shields.io/gitter/room/ansible-nas/chat.svg?logo=gitter&style=flat-square)](https://gitter.im/Ansible-NAS/Chat)
[![license](https://img.shields.io/github/license/DAVFoundation/api_doc.svg?style=flat-square)](https://github.com/davestephens/ansible-nas/blob/master/LICENSE)
[![Ko-fi](https://img.shields.io/static/v1.svg?label=ko-fi&message=Buy%20Me%20A%20Coffee&color=orange&style=flat-square&logo=buy-me-a-coffee)](https://ko-fi.com/davestephens)
After getting burned by broken FreeNAS updates one too many times, I figured I
could do a much better job myself using just a stock Ubuntu install, some clever
Ansible config and a bunch of Docker containers.
After getting burned by broken FreeNAS updates one too many times, I figured I could do a much better job myself using just a stock Ubuntu install, some clever Ansible config and a bunch of Docker containers.
## What Ansible-NAS Does
@ -98,19 +99,13 @@ If you have a spare domain name you can configure applications to be accessible
## What This Could Do
Ansible-NAS can run anything that's in a Docker image, which is why Portainer is
included. A NAS configuration is a pretty personal thing based on what you
download, what media you view, how many photos you take...so it's difficult to
please everyone.
Ansible-NAS can run anything that's in a Docker image, which is why Portainer is included. A NAS configuration is a pretty personal thing based on what you download, what media you view, how many photos you take...so it's difficult to please everyone.
That said, if specific functionality you want isn't included and you think
others could benefit, add it and raise a PR!
That said, if specific functionality you want isn't included and you think others could benefit, add it and raise a PR!
## What This Doesn't Do
Ansible NAS doesn't set up your disk partitions, primarily because getting it wrong can be incredibly destructive.
That aside, configuring partitions is usually a one-time (or very infrequent) event, so there's not much to be
gained by automating it. Check out the [docs](https://davestephens.github.io/ansible-nas) for recommended setups.
Ansible NAS doesn't set up your disk partitions, primarily because getting it wrong can be incredibly destructive. That aside, configuring partitions is usually a one-time (or very infrequent) event, so there's not much to be gained by automating it. Check out the [docs](https://davestephens.github.io/ansible-nas) for recommended setups.
## Installation
@ -122,8 +117,7 @@ See [Installation](https://davestephens.github.io/ansible-nas/installation/).
## Documentation
You can read the docs [here](https://davestephens.github.io/ansible-nas). PRs
for more documentation always welcome!
You can read the docs [here](https://davestephens.github.io/ansible-nas). PRs for more documentation always welcome!
## Migrating from FreeNAS
@ -138,10 +132,8 @@ Assuming that your Ubuntu system disk is separate from your storage (it should b
## Requirements
* Ansible NAS targets the latest Ubuntu LTS release, which is currently Ubuntu
Server 20.04 LTS.
* You can run Ansible-NAS on whatever you like, read the docs for more info. I
use an HP Microserver.
* Ansible NAS targets the latest Ubuntu LTS release, which is currently Ubuntu Server 20.04 LTS.
* You can run Ansible-NAS on whatever you like, read the docs for more info. I use an HP Microserver.
## Getting Help

View file

@ -1,6 +1,6 @@
# Glances
Homepage: [https://nicolargo.github.io/glances/](https://nicolargo.github.io/glances/)
Homepage: <https://nicolargo.github.io/glances/>
Glances is a cross-platform system monitoring tool written in Python.
@ -8,7 +8,7 @@ Glances is a cross-platform system monitoring tool written in Python.
Set `glances_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Glances web interface can be found at http://ansible_nas_host_or_ip:61208.
The Glances web interface can be found at <http://ansible_nas_host_or_ip:61208>.
## Specific Configuration

View file

@ -1,6 +1,6 @@
# Airsonic
Homepage: [https://airsonic.github.io/](https://airsonic.github.io/)
Homepage: <https://airsonic.github.io/>
Airsonic is a free, web-based media streamer, providing ubiquitous access to your music. Use it to share your music with friends, or to listen to your own music while at work. You can stream to multiple players simultaneously, for instance to one player in your kitchen and another in your living room
@ -8,7 +8,7 @@ Airsonic is a free, web-based media streamer, providing ubiquitous access to you
Set `airsonic_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Airsonic web interface can be found at http://ansible_nas_host_or_ip:4040.
The Airsonic web interface can be found at <http://ansible_nas_host_or_ip:4040>.
## Specific Configuration

View file

@ -1,11 +1,9 @@
# Bazarr subtitle downloader
Homepage: [https://github.com/morpheus65535/bazarr](https://github.com/morpheus65535/bazarr)
Homepage: <https://github.com/morpheus65535/bazarr>
Bazarr is a companion application to Sonarr and Radarr. It manages and downloads subtitles based on your requirements. You define your preferences by TV show or movie and Bazarr takes care of everything for you.
## Usage
Set `bazarr_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.

View file

@ -1,7 +1,8 @@
# Bitwarden(_rs) Password Management
Homepage: [https://github.com/dani-garcia/bitwarden_rs](https://github.com/dani-garcia/bitwarden_rs)
Bitwarden: [https://bitwarden.com/](https://bitwarden.com/)
Homepage: <https://github.com/dani-garcia/bitwarden_rs>
Bitwarden: <https://bitwarden.com/>
This is a Bitwarden server API implementation written in Rust compatible with upstream Bitwarden clients*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
@ -11,10 +12,9 @@ Set `bitwarden_enabled: true` in your `inventories/<your_inventory>/nas.yml` fil
## Specific Configuration
Make sure you set your admin token! It is `bitwarden_admin_token` in `group_vars/all.yml` file. The string you put here will be the login to the admin section of your Bitwarden installation (https://bitwarden.ansiblenasdomain.tld/admin). This token can be anything, but it's recommended to use a long, randomly generated string of characters, for example running:
Make sure you set your admin token! It is `bitwarden_admin_token` in the `group_vars/all.yml` file. The string you put here will be the login to the admin section of your Bitwarden installation (<https://bitwarden.ansiblenasdomain.tld/admin>). This token can be anything, but it's recommended to use a long, randomly generated string of characters, for example by running:
`openssl rand -base64 48`.
To create a user, you need to set `bitwarden_allow_signups` to `true` in your `all.yml`, and re-run the playbook to reprovision the
container. Once you've created your users, set `bitwarden_allow_signups` back to `false` and run again.
To create a user, you need to set `bitwarden_allow_signups` to `true` in your `all.yml`, and re-run the playbook to reprovision the container. Once you've created your users, set `bitwarden_allow_signups` back to `false` and run again.
For speed you can target just Bitwarden by appending `-t bitwarden` to your `ansible-playbook` command.
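As a sketch, the variables described above might sit in your configuration like this (the token value is a placeholder):

```yaml
# inventories/<your_inventory>/nas.yml
bitwarden_enabled: true
bitwarden_allow_signups: true   # set back to false and re-run once your users exist

# group_vars/all.yml -- generate the token with: openssl rand -base64 48
bitwarden_admin_token: "replace-with-a-long-random-string"
```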

View file

@ -1,6 +1,6 @@
# Booksonic
Homepage: [https://booksonic.org/](https://booksonic.org/)
Homepage: <https://booksonic.org/>
Stream your audiobooks to any pc or android phone. Most of the functionality is also available on other platforms that have apps for subsonic.
@ -12,7 +12,7 @@ Get the Android app on [Google Play](https://play.google.com/store/apps/details?
Set `booksonic_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Booksonic web interface can be found at http://ansible_nas_host_or_ip:4041.
The Booksonic web interface can be found at <http://ansible_nas_host_or_ip:4041>.
## Specific Configuration

View file

@ -1,7 +1,6 @@
# Calibre-web
Homepage: [https://github.com/janeczku/calibre-web](https://github.com/janeczku/calibre-web)
Homepage: <https://github.com/janeczku/calibre-web>
Calibre-Web is a web app providing a clean interface for browsing, reading and downloading eBooks using an existing Calibre database.
@ -21,8 +20,8 @@ Requires Calibre ebook management program. Available for download [here](https:/
If you do not need eBook conversion you can disable it to save resources by setting the `calibre_ebook_conversion` variable in `group_vars/all.yml` file to be empty.
- Conversion enabled: `calibre_ebook_conversion: "linuxserver/calibre-web:calibre"`
- Conversion enabled: `calibre_ebook_conversion: "linuxserver/calibre-web:calibre"`
- Conversion disabled: `calibre_ebook_conversion: ""`
- Conversion disabled: `calibre_ebook_conversion: ""`
You can target just Calibre by appending `-t calibre` to your `ansible-playbook` command.
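As a sketch, the two states of the variable listed above look like this in `group_vars/all.yml` (keep only one of them):

```yaml
# group_vars/all.yml
# Conversion enabled:
calibre_ebook_conversion: "linuxserver/calibre-web:calibre"
# Conversion disabled (saves resources) -- use this instead:
# calibre_ebook_conversion: ""
```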

View file

@ -1,17 +1,14 @@
# Cloud Commander file manager
Homepage: [https://cloudcmd.io/](https://cloudcmd.io/)
Homepage: <https://cloudcmd.io/>
Cloud Commander is a file manager for the web. It includes a command-line console and a text editor. Cloud Commander helps you manage your server and work with files, directories and programs in a web browser from any computer, mobile or tablet.
## Usage
Set `cloudcmd_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
By default your the root of your Ansible-NAS box (`/`) is mounted into `/mnt/fs` within the container. If you'd like to
change this update `cloudcmd_browse_directory` in your `inventories/<your_inventory>/nas.yml` file.
By default the root of your Ansible-NAS box (`/`) is mounted into `/mnt/fs` within the container. If you'd like to change this, update `cloudcmd_browse_directory` in your `inventories/<your_inventory>/nas.yml` file.
If you enable external access to Cloud Commander (note that this is not recommended) then ensure you configure authorisation
within the application (F10 from the main menu).
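A minimal inventory sketch using the variables mentioned above (the browse directory value is only an example):

```yaml
# inventories/<your_inventory>/nas.yml
cloudcmd_enabled: true
cloudcmd_browse_directory: "/"   # mounted at /mnt/fs inside the container; narrow this if you prefer
```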

View file

@ -1,7 +1,8 @@
# Cloudflare Dynamic DNS Updater
Homepage: [https://github.com/joshuaavalon/docker-cloudflare](https://github.com/joshuaavalon/docker-cloudflare)
Cloudflare: [https://www.cloudflare.com](https://www.cloudflare.com)
Homepage: <https://github.com/joshuaavalon/docker-cloudflare>
Cloudflare: <https://www.cloudflare.com>
If you want your Ansible-NAS accessible externally then you'll need a domain name. You'll also need to set a wildcard
host A record to point to your static IP, or enable this container to automatically update Cloudflare with your dynamic IP address.
@ -14,6 +15,6 @@ Set `cloudflare_token` to the one you grab from the Cloudflare UI (more below).
## Specific Configuration
Make sure you set your domain (if different than the ansible-nas default) and access token details within your `inventories/<your_inventory>/nas.yml` file. If you need to create an API token, see https://github.com/joshuaavalon/docker-cloudflare/#api-token for instructions.
Make sure you set your domain (if different than the ansible-nas default) and access token details within your `inventories/<your_inventory>/nas.yml` file. If you need to create an API token, see [https://github.com/joshuaavalon/docker-cloudflare/#api-token](https://github.com/joshuaavalon/docker-cloudflare/#api-token) for instructions.
Cloudflare has deprecated global API key authentication. If you have an older ansible-nas configuration using a global API key, you can upgrade to the API token-based authentication by removing the `cloudflare_api_key` variable from your local `nas.yml` configuration file and setting the `cloudflare_token` variable appropriately.
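A sketch of the token-based setup in your inventory (the token value is a placeholder):

```yaml
# inventories/<your_inventory>/nas.yml
cloudflare_token: "paste-your-cloudflare-api-token-here"
# If you previously used the deprecated global API key, remove any cloudflare_api_key line.
```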

View file

@ -1,7 +1,7 @@
# CouchPotato
Homepage: [https://couchpota.to/](https://couchpota.to/)
Homepage: <https://couchpota.to/>
CouchPotato enables you to download movies automatically, easily and in the best quality as soon as they are available.
@ -9,4 +9,4 @@ CouchPotato enables you to download movies automatically, easily and in the best
Set `couchpotato_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The CouchPotato web interface can be found at http://ansible_nas_host_or_ip:5050.
The CouchPotato web interface can be found at <http://ansible_nas_host_or_ip:5050>.

View file

@ -1,7 +1,7 @@
# Dashy
Homepage: [https://dashy.to/](https://dashy.to/)
Homepage: <https://dashy.to/>
Dashy is an open source, highly customizable, easy to use, privacy-respecting dashboard app.
It's packed full of useful features, to help you build your perfect dashboard. Including status checks, keyboard shortcuts, dynamic widgets, auto-fetched favicon icons and font-awesome support, built-in authentication, tons of themes, an interactive config editor, many display layouts plus loads more.
@ -11,4 +11,4 @@ All the code is free and open source, and everything is thoroughly documented, y
Set `dashy_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Dashy web interface can be found at http://ansible_nas_host_or_ip:8082.
The Dashy web interface can be found at <http://ansible_nas_host_or_ip:8082>.

View file

@ -1,7 +1,9 @@
# Deluge
[Deluge](http://deluge-torrent.org/) is a lightweight, Free Software, cross-platform BitTorrent client.
<img align="right" width="200" height="200" src="https://avatars2.githubusercontent.com/u/6733935?v=3&s=200">
Homepage: <http://deluge-torrent.org/>
Deluge is a lightweight, Free Software, cross-platform BitTorrent client.
* Full Encryption
* WebUI
* Plugin System
@ -11,8 +13,8 @@
Set `deluge_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
Deluge's web interface can be found at http://ansible_nas_host_or_ip:8112
Deluge's web interface can be found at <http://ansible_nas_host_or_ip:8112>
Upon first viewing you will be prompted for a password. The default is `deluge`. It is recommended that you change this password in the preferences menu.
**For more info visit: [https://dev.deluge-torrent.org/] & [https://github.com/linuxserver/docker-deluge/blob/master/README.md]
For more info visit <https://dev.deluge-torrent.org/> and <https://github.com/linuxserver/docker-deluge/blob/master/README.md>.

View file

@ -1,6 +1,6 @@
# DokuWiki
Homepage: [https://www.dokuwiki.org/](https://www.dokuwiki.org/)
Homepage: <https://www.dokuwiki.org/>
DokuWiki is a simple to use and highly versatile Open Source wiki software that doesn't require a database. It is loved by users for its clean and readable syntax. The ease of maintenance, backup and integration makes it an administrator's favorite. Built in access controls and authentication connectors make DokuWiki especially useful in the enterprise context and the large number of plugins contributed by its vibrant community allow for a broad range of use cases beyond a traditional wiki.
@ -8,4 +8,4 @@ DokuWiki is a simple to use and highly versatile Open Source wiki software that
Set `dokuwiki_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The DokuWiki web interface can be found at http://ansible_nas_host_or_ip:8085.
The DokuWiki web interface can be found at <http://ansible_nas_host_or_ip:8085>.

View file

@ -1,6 +1,6 @@
# Duplicacy Cloud Backup
Homepage: [https://duplicacy.com/](https://duplicacy.com/)
Homepage: <https://duplicacy.com/>
Duplicacy is a next-generation, cross-platform, cloud backup tool. Duplicacy backs up your files to many cloud storages with client-side encryption and the highest level of deduplication.
@ -8,7 +8,7 @@ Duplicacy is a next-generation, cross-platform, cloud backup tool. Duplicacy bac
Set `duplicacy_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
Duplicacy's web interface can be found at http://ansible_nas_host_or_ip:3875.
Duplicacy's web interface can be found at <http://ansible_nas_host_or_ip:3875>.
## Specific Configuration

View file

@ -1,7 +1,7 @@
# Duplicati
Homepage: [https://www.duplicati.com/](https://www.duplicati.com/)
Homepage: <https://www.duplicati.com/>
Duplicati is free backup software to store encrypted backups online. For Windows, macOS and Linux.
@ -9,4 +9,4 @@ Duplicati is free backup software to store encrypted backups online For Windows,
Set `duplicati_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Duplicati web interface can be found at http://ansible_nas_host_or_ip:8200.
The Duplicati web interface can be found at <http://ansible_nas_host_or_ip:8200>.

View file

@ -1,6 +1,6 @@
# Emby
Homepage: [https://emby.media/](https://emby.media/)
Homepage: <https://emby.media/>
Emby is a mostly open-source media server with a client-server model. This
install for Ansible-NAS provides a server, which various clients can then
@ -17,13 +17,12 @@ parameters you can edit such as `movies_root` and `tv_root` lower down.
## Specific Configuration
The emby web interface can be found at port 8096 (http) or 8920 (https, if
configured) of your NAS. Heimdall has a dedicated icon for emby.
configured) of your NAS. Heimdall has a dedicated icon for emby.
By default, Ansible-NAS gives emby read/write access to the folders where your
movies and TV shows are stored. To change this to read-only, edit the following
lines in `all.yml`:
```
```yaml
emby_movies_permissions: "rw"
emby_tv_permissions: "rw"
```
@ -31,7 +30,7 @@ lines in `all.yml`:
so that they end in `ro` instead of `rw`. Note that emby will not be able to
delete files then, which might be exactly what you want. However, you will not
have the option to store cover art in the related folders. Always leave the
configuration directory read/write.
configuration directory read/write.
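For reference, the read-only variant of the two variables shown above would be:

```yaml
emby_movies_permissions: "ro"
emby_tv_permissions: "ro"
```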
## File system considerations
@ -40,7 +39,6 @@ are using a specialized filesystem such as ZFS for bulk storage, you'll want to
set the parameters accordingly. The [ZFS configuration
documentation](../zfs/zfs_configuration.md) has an example of this.
## Naming movies and TV shows
Emby is very fussy about how movies and TV shows must be named to enable
@ -48,18 +46,17 @@ automatic downloads of cover art and metadata. In short, movie files should
follow how movies are listed in the [IMDb](https://www.imdb.com/), including the
year of publication:
```
```raw
movies/Bride of Frankenstein (1935).mp4
```
Note the spaces. You should probably remove colons and other special characters.
Note the spaces. You should probably remove colons and other special characters.
TV shows require a folder structure with the name of the series - again if
possible with the year of publication - followed by sub-folders for the
individual seasons. For example, the first episode of the first season of
the original "Doctor Who" could be stored as:
```
```raw
tv/Doctor Who (1963)/Season 1/Doctor Who - s01e01.mp4
```
@ -71,4 +68,3 @@ movies and older series. See the [movie
naming](https://github.com/MediaBrowser/Wiki/wiki/Movie%20naming) and [TV
naming](https://github.com/MediaBrowser/Wiki/wiki/TV-naming) guides for further
information.

View file

@ -2,12 +2,11 @@
Homepage: [esphome.io](https://esphome.io/)
ESPHome is a system to control your ESP8266/ESP32 by simple yet powerful configuration files and control them remotely through Home Automation systems.
ESPHome is a system to control your ESP8266/ESP32 by simple yet powerful configuration files and control them remotely through Home Automation systems.
## Usage
Set `esphome_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
You can make esphome externally available, but the program has no security so this is strongly not advised.
The EspHome web interface can be found at http://ansible_nas_host_or_ip:6052.
You can make esphome externally available, but the program has no security, so this is strongly discouraged.
The EspHome web interface can be found at <http://ansible_nas_host_or_ip:6052>.

View file

@ -1,11 +1,11 @@
# Firefly III
Homepage: [https://firefly-iii.org/](https://firefly-iii.org/)
Homepage: <https://firefly-iii.org/>
Firefly III is a self-hosted financial manager. It can help you keep track of expenses, income, budgets and everything in between. It supports credit cards, shared household accounts and savings accounts. Its pretty fancy. You should use it to save and organise money.
Firefly III is a self-hosted financial manager. It can help you keep track of expenses, income, budgets and everything in between. It supports credit cards, shared household accounts and savings accounts. It's pretty fancy. You should use it to save and organize money.
## Usage
Set `firefly_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Firefly III web interface can be found at http://ansible_nas_host_or_ip:8066.
The Firefly III web interface can be found at <http://ansible_nas_host_or_ip:8066>.

View file

@ -1,7 +1,7 @@
# FreshRSS
Homepage: [https://freshrss.org/](https://freshrss.org/)
Homepage: <https://freshrss.org/>
FreshRSS is a self-hosted RSS feed aggregator like Leed or Kriss Feed.
@ -19,4 +19,4 @@ Finally, it supports extensions for further tuning.
Set `freshrss_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The FreshRSS web interface can be found at http://ansible_nas_host_or_ip:8089.
The FreshRSS web interface can be found at <http://ansible_nas_host_or_ip:8089>.

View file

@ -1,6 +1,6 @@
# get_iplayer
Homepage: [https://github.com/get-iplayer/get_iplayer](https://github.com/get-iplayer/get_iplayer)
Homepage: <https://github.com/get-iplayer/get_iplayer>
Downloads TV and radio programmes from BBC iPlayer.
@ -8,4 +8,4 @@ Downloads TV and radio programmes from BBC iPlayer.
Set `get_iplayer_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The get_iplayer web interface can be found at http://ansible_nas_host_or_ip:8182.
The get_iplayer web interface can be found at <http://ansible_nas_host_or_ip:8182>.

View file

@ -1,7 +1,7 @@
# Gitea
Homepage: [https://gitea.io/](https://gitea.io/)
Homepage: <https://gitea.io/>
Gitea is a painless self-hosted Git service.
@ -9,4 +9,4 @@ Gitea is a painless self-hosted Git service.
Set `gitea_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Gitea web interface can be found at http://ansible_nas_host_or_ip:3001.
The Gitea web interface can be found at <http://ansible_nas_host_or_ip:3001>.

View file

@ -1,6 +1,6 @@
# GitLab
Homepage: [https://docs.gitlab.com/omnibus/docker/](https://docs.gitlab.com/omnibus/docker/)
Homepage: <https://docs.gitlab.com/omnibus/docker/>
If Gitea isn't powerful enough for you then consider GitLab. It's a much more powerful (and consequently bigger) Git repository solution that includes a suite of code analytics. On the other hand it requires more RAM.
@ -11,4 +11,3 @@ Set `gitlab_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
To make GitLab available externally via Traefik set `gitlab_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The first time you run GitLab you'll be prompted for an account's password. The password is for GitLab's `root` administrator account. From there you can log in to create additional users and further configure the application.
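A minimal sketch of the two variables mentioned above:

```yaml
# inventories/<your_inventory>/nas.yml
gitlab_enabled: true
gitlab_available_externally: "true"   # only if you want Traefik to expose it
```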

View file

@ -1,6 +1,6 @@
# Gotify
Homepage: [https://gotify.net/](https://gotify.net/)
Homepage: <https://gotify.net/>
A simple server for sending and receiving messages in real-time per WebSocket. (Includes a sleek web-ui)
@ -8,9 +8,9 @@ A simple server for sending and receiving messages in real-time per WebSocket. (
Set `gotify_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Gotify web interface can be found at http://ansible_nas_host_or_ip:2346.
The Gotify web interface can be found at <http://ansible_nas_host_or_ip:2346>.
Android client: [https://play.google.com/store/apps/details?id=com.github.gotify](https://play.google.com/store/apps/details?id=com.github.gotify)
iOS client: n/a
Chrome extension: n/a
Firefox extension: [https://addons.mozilla.org/en-US/firefox/addon/gotify-for-firefox/](https://addons.mozilla.org/en-US/firefox/addon/gotify-for-firefox/)
Firefox extension: <https://addons.mozilla.org/en-US/firefox/addon/gotify-for-firefox/>

View file

@ -1,6 +1,6 @@
# Guacamole
Homepage: [hhttps://guacamole.apache.org/](https://guacamole.apache.org/)
Homepage: <https://guacamole.apache.org/>
Apache Guacamole is a clientless remote desktop gateway. It supports standard protocols like VNC, RDP, and SSH.

View file

@ -1,11 +1,11 @@
# Healthchecks.io
Homepage: [https://healthchecks.io/](https://healthchecks.io/)
Homepage: <https://healthchecks.io/>
A simple cronjob that uses `curl` to ping a given endpoint on the `healthchecks.io` servers. You can choose how often it should ping the endpoint, and what happens when it doesn't. Email/Slack/Telegram and many more services can be integrated.
## Usage
Create your own project on [https://healthchecks.io/](https://healthchecks.io/), and set both the time between pings and the grace time. Set your prefered integration such as email.
Create your own project on <https://healthchecks.io/>, and set both the time between pings and the grace time. Set your preferred integration, such as email.
Set `healthchecks_enabled: true` in your `inventories/<your_inventory>/nas.yml` file, and if your time between pings is different than the default `healthchecks_ping_minutes`, change it. Finally, set your ping url in the `healthchecks_url` variable.
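A sketch of the resulting inventory entries (the ping interval and URL are placeholders; use the values from your own healthchecks.io project):

```yaml
# inventories/<your_inventory>/nas.yml
healthchecks_enabled: true
healthchecks_ping_minutes: 5                                 # match your project's "time between pings"
healthchecks_url: "https://hc-ping.com/<your-check-uuid>"    # placeholder ping URL
```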

View file

@ -1,15 +1,15 @@
# Heimdall
Homepage: [https://heimdall.site/](https://heimdall.site/)
Homepage: <https://heimdall.site/>
Heimdall Application Dashboard is a dashboard for all your web applications. It doesn't need to be limited to applications though, you can add links to anything you like.
Heimdall Application Dashboard is a dashboard for all your web applications. It doesn't need to be limited to applications, though; you can add links to anything you like.
## Usage
Set `heimdall_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Heimdall web interface can be found at http://ansible_nas_host_or_ip:10080.
The Heimdall web interface can be found at <http://ansible_nas_host_or_ip:10080>.
## Specific Configuration

View file

@ -1,6 +1,6 @@
# Home Assistant
Homepage: [https://www.home-assistant.io/](https://www.home-assistant.io/)
Homepage: <https://www.home-assistant.io/>
Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts.
@ -10,4 +10,4 @@ Set `homeassistant_enabled: true` in your `inventories/<your_inventory>/nas.yml`
If you want to access Home Assistant externally, don't forget to set `homeassistant_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The Home Assistant web interface can be found at http://ansible_nas_host_or_ip:8123.
The Home Assistant web interface can be found at <http://ansible_nas_host_or_ip:8123>.
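Putting the two variables above together, a minimal sketch looks like this:

```yaml
# inventories/<your_inventory>/nas.yml
homeassistant_enabled: true
homeassistant_available_externally: "true"   # only if you want to reach it from outside your network
```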

View file

@ -1,6 +1,6 @@
# Homebridge
Homepage: [https://github.com/nfarina/homebridge](https://github.com/nfarina/homebridge)
Homepage: <https://github.com/nfarina/homebridge>
Homebridge is a lightweight NodeJS server you can run on your home network that emulates the iOS HomeKit API. It supports Plugins, which are community-contributed modules that provide a basic bridge from HomeKit to various 3rd-party APIs provided by manufacturers of "smart home" devices.
@ -8,4 +8,4 @@ Homebridge is a lightweight NodeJS server you can run on your home network that
Set `homebridge_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Homebridge web interface can be found at http://ansible_nas_host_or_ip:8087. The default username and password is 'admin' - change this after your first login!
The Homebridge web interface can be found at <http://ansible_nas_host_or_ip:8087>. The default username and password are both 'admin' - change this after your first login!

View file

@ -1,6 +1,6 @@
# Jackett
Homepage: [https://github.com/Jackett/Jackett](https://github.com/Jackett/Jackett)
Homepage: <https://github.com/Jackett/Jackett>
Jackett works as a proxy server: it translates queries from apps (Sonarr, Radarr, SickRage, CouchPotato, Mylar, DuckieTV, qBittorrent, Nefarious etc) into tracker-site-specific http queries, parses the html response, then sends results back to the requesting software. This allows for getting recent uploads (like RSS) and performing searches. Jackett is a single repository of maintained indexer scraping & translation logic - removing the burden from other apps.
@ -8,4 +8,4 @@ Jackett works as a proxy server: it translates queries from apps (Sonarr, Radarr
Set `jackett_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Jackett web interface can be found at http://ansible_nas_host_or_ip:9117.
The Jackett web interface can be found at <http://ansible_nas_host_or_ip:9117>.

View file

@ -1,6 +1,6 @@
# Jellyfin
Homepage: [https://jellyfin.github.io/](https://jellyfin.github.io/)
Homepage: <https://jellyfin.github.io/>
Jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. It is an alternative to the proprietary Emby and Plex, to provide media from a dedicated server to end-user devices via multiple apps. Jellyfin is descended from Emby's 3.5.2 release and ported to the .NET Core framework to enable full cross-platform support. There are no strings attached, no premium licenses or features, and no hidden agendas: just a team who want to build something better and work together to achieve it. We welcome anyone who is interested in joining us in our quest!
@ -10,7 +10,7 @@ similar functionality.
## Usage
Set `jellyfin_enabled: true` in your `inventories/<your_inventory>/nas.yml` file. There are further
parameters you can edit such as `movies_root`, `tv_root` or `music_root` lower down.
parameters you can edit such as `movies_root`, `tv_root` or `music_root` lower down.
## Specific Configuration
@ -21,7 +21,7 @@ By default, Ansible-NAS gives jellyfin read/write access to the folders where yo
movies, TV shows and music are stored. To change this to read-only, edit the following
lines in `all.yml`:
```
```yaml
jellyfin_movies_permissions: "rw"
jellyfin_tv_permissions: "rw"
jellyfin_books_permissions: "rw"
@ -32,7 +32,7 @@ lines in `all.yml`:
so that they end in `ro` instead of `rw`. Note that jellyfin will not be able to
delete files then, which might be exactly what you want. However, you will not
have the option to store cover art in the related folders. Always leave the
configuration directory read/write.
configuration directory read/write.
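For reference, the read-only variant of the variables listed above would be:

```yaml
jellyfin_movies_permissions: "ro"
jellyfin_tv_permissions: "ro"
jellyfin_books_permissions: "ro"
```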
## File system considerations
@ -41,7 +41,6 @@ are using a specialized filesystem such as ZFS for bulk storage, you'll want to
set the parameters accordingly. The [ZFS configuration
documentation](../zfs/zfs_configuration.md) has an example of this.
## Naming movies and TV shows
jellyfin is very fussy about how movies and TV shows must be named to enable
@ -49,18 +48,17 @@ automatic downloads of cover art and metadata. In short, movie files should
follow how movies are listed in the [IMDb](https://www.imdb.com/), including the
year of publication:
```
```raw
movies/Bride of Frankenstein (1935).mp4
```
Note the spaces. You should probably remove colons and other special characters.
Note the spaces. You should probably remove colons and other special characters.
TV shows require a folder structure with the name of the series - again if
possible with the year of publication - followed by sub-folders for the
individual seasons. For example, the first episode of the first season of
the original "Doctor Who" could be stored as:
```
```raw
tv/Doctor Who (1963)/Season 1/Doctor Who - s01e01.mp4
```

View file

@ -1,6 +1,6 @@
# Joomla CMS
Homepage: [https://www.joomla.org/](https://www.joomla.org/)
Homepage: <https://www.joomla.org/>
Joomla! is an award-winning content management system (CMS), which enables you to build web sites and powerful online applications.
@ -10,13 +10,13 @@ Set `joomla_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
If you want to access Joomla externally, set `joomla_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The Joomla web interface can be found at http://ansible_nas_host_or_ip:8181.
The Joomla web interface can be found at <http://ansible_nas_host_or_ip:8181>.
## Specific Configuration
- Set `joomla_database_password` in your `all.yml` before installing Joomla.
- Set `joomla_database_password` in your `all.yml` before installing Joomla.
- On first run you'll need to enter database details:
- On first run you'll need to enter database details:
- Host: `mysql`
- Database: `joomla`
- Username: `root`

View file

@ -1,7 +1,8 @@
# Komga free and open source comics/mangas media server
Homepage: [https://komga.org/](https://komga.org/)
Docker Image: [https://hub.docker.com/r/gotson/komga](https://hub.docker.com/r/gotson/komga)
Homepage: <https://komga.org/>
Docker Image: <https://hub.docker.com/r/gotson/komga>
Komga is a media server for your comics, mangas, BDs and magazines.
@ -9,4 +10,4 @@ Komga is a media server for your comics, mangas, BDs and magazines.
Set `komga_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
Access the webui at http://<server>:8088 by default.
Access the webui at <http://ansible_nas_host_or_ip:8088> by default.

View file

@ -1,7 +1,8 @@
# Krusader
Homepage: [https://krusader.org/](https://krusader.org/)
Homepage: <https://krusader.org/>
Docker Container: [Krusader](https://hub.docker.com/r/djaydev/krusader)
Krusader provides twin panel file management for your ansible-nas via browser and VNC.
@ -10,4 +11,4 @@ Krusader provides twin panel file management for your ansible-nas via browser an
Set `krusader_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Krusader web interface can be found at http://ansible_nas_host_or_ip:5800.
The Krusader web interface can be found at <http://ansible_nas_host_or_ip:5800>.

View file

@ -1,12 +1,9 @@
# Lidarr music collection manager
Homepage: [https://lidarr.audio/](https://lidarr.audio/)
Homepage: <https://lidarr.audio/>
Lidarr is a music collection manager for Usenet and BitTorrent users. It can monitor multiple RSS feeds for new tracks from your favorite artists and will grab, sort and rename them. It can also be configured to automatically upgrade the quality of files already downloaded when a better quality format becomes available.
## Usage
Set `lidarr_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.

View file

@ -1,6 +1,6 @@
# Mealie
Homepage: [https://docs.mealie.io/](https://docs.mealie.io/)
Homepage: <https://docs.mealie.io/>
A self-hosted recipe manager and meal planner with a RestAPI backend and a reactive frontend application built in Vue for a pleasant user experience for the whole family.
@ -8,4 +8,4 @@ A self-hosted recipe manager and meal planner with a RestAPI backend and a react
Set `mealie_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Mealie web interface can be found at http://ansible_nas_host_or_ip:9925.
The Mealie web interface can be found at <http://ansible_nas_host_or_ip:9925>.

View file

@ -1,6 +1,6 @@
# Minecraft Server
Homepage: [https://www.minecraft.net/](https://www.minecraft.net/)
Homepage: <https://www.minecraft.net/>
The server version of the game Minecraft, running in a container. "Prepare for an adventure of limitless possibilities as you build, mine, battle mobs, and explore the ever-changing Minecraft landscape."

View file

@ -1,6 +1,6 @@
# MiniDLNA
Homepage: [https://sourceforge.net/projects/minidlna/](https://sourceforge.net/projects/minidlna/)
Homepage: <https://sourceforge.net/projects/minidlna/>
MiniDLNA is server software with the aim of being fully compliant with DLNA/UPnP clients. The MiniDLNA daemon serves media files (music, pictures, and video) to clients on a network. Example clients include applications such as Totem and Kodi, and devices such as portable media players, Smartphones, Televisions, and gaming systems (such as PS3 and Xbox 360).
@ -8,4 +8,4 @@ MiniDLNA is server software with the aim of being fully compliant with DLNA/UPnP
Set `minidlna_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The very basic MiniDLNA web interface can be found at http://ansible_nas_host_or_ip:8201.
The very basic MiniDLNA web interface can be found at <http://ansible_nas_host_or_ip:8201>.

View file

@ -1,6 +1,6 @@
# Miniflux
Homepage: [https://miniflux.app/](https://miniflux.app/)
Homepage: <https://miniflux.app/>
Miniflux is a minimalist and opinionated feed reader.
@ -8,7 +8,7 @@ Miniflux is a minimalist and opinionated feed reader.
Set `miniflux_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Miniflux web interface can be found at http://ansible_nas_host_or_ip:8070, the default username is `admin` and password `supersecure`.
The Miniflux web interface can be found at <http://ansible_nas_host_or_ip:8070>, the default username is `admin` and password `supersecure`.
## Specific Configuration

View file

@ -1,10 +1,9 @@
# Mosquitto
Homepage: [https://mosquitto.org](https://mosquitto.org)
Homepage: <https://mosquitto.org>
Mosquitto is a lightweight open source MQTT message broker.
## Usage
Set `mosquitto_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.

View file

@ -1,9 +1,9 @@
# Mylar
Homepage: [https://github.com/evilhero/mylar](https://github.com/evilhero/mylar)
Homepage: <https://github.com/evilhero/mylar>
Docker Container: [https://hub.docker.com/r/linuxserver/mylar](https://hub.docker.com/r/linuxserver/mylar)
Docker Container: <https://hub.docker.com/r/linuxserver/mylar>
An automated Comic Book downloader (cbr/cbz) for use with SABnzbd, NZBGet and torrents
@ -13,4 +13,4 @@ Set `mylar_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
If you want to access Mylar externally, don't forget to set `mylar_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The Mylar web interface can be found at http://ansible_nas_host_or_ip:5858.
The Mylar web interface can be found at <http://ansible_nas_host_or_ip:5858>.

View file

@ -1,6 +1,6 @@
# My Media for Alexa
Homepage: [https://www.mymediaalexa.com/](https://www.mymediaalexa.com/)
Homepage: <https://www.mymediaalexa.com/>
My Media lets you stream your music collection to your Amazon Echo or Amazon Dot without having to upload all your music collection to the Cloud. This keeps your music under your control.
@ -8,4 +8,4 @@ My Media lets you stream your music collection to your Amazon Echo or Amazon Dot
Set `mymediaforalexa_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The My Media for Alexa web interface can be found at http://ansible_nas_host_or_ip:52051.
The My Media for Alexa web interface can be found at <http://ansible_nas_host_or_ip:52051>.

View file

@ -1,16 +1,18 @@
# Nodemation (n8n)
Homepage: <https://n8n.io>
Extendable workflow automation tool that enables you to connect anything to everything. More pragmatically, it helps you interconnect APIs with each other to build your own information and work flows.
Homepage: [https://n8n.io](https://n8n.io)
## Usage
Set `n8n_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
n8n is secured by default; the user and password can be set with:
* n8n_basic_auth_user: "<user name>"
* n8n_basic_auth_password: "<user password>"
```yaml
n8n_basic_auth_user: "<user name>"
n8n_basic_auth_password: "<user password>"
```
The defaults for these are `n8n_user` and `n8n_change_me` respectively; it is recommended to change them.
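Putting it together in your inventory might look like this (the credential values are examples only):

```yaml
# inventories/<your_inventory>/nas.yml
n8n_enabled: true
n8n_basic_auth_user: "alice"                         # default: n8n_user
n8n_basic_auth_password: "a-long-unique-password"    # default: n8n_change_me
```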

View file

@ -1,6 +1,6 @@
# Navidrome
Homepage: [https://www.navidrome.org/](https://www.navidrome.org/)
Homepage: <https://www.navidrome.org/>
Navidrome is an open source web-based music collection server and streamer that is compatible with Subsonic/Airsonic. It gives you freedom to listen to your music collection from any browser or mobile device. It's like your personal Spotify!
@ -8,4 +8,4 @@ Navidrome is an open source web-based music collection server and streamer that
Set `navidrome_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Navidrome web interface can be found at http://ansible_nas_host_or_ip:4533.
The Navidrome web interface can be found at <http://ansible_nas_host_or_ip:4533>.

View file

@ -1,8 +1,9 @@
# netboot.xyz
Homepage: [https://netboot.xyz/](https://netboot.xyz/)
Docker Container: [https://hub.docker.com/r/linuxserver/netbootxyz](https://hub.docker.com/r/linuxserver/netbootxyz)
Homepage: <https://netboot.xyz/>
Docker Container: <https://hub.docker.com/r/linuxserver/netbootxyz>
netboot.xyz is a way to PXE boot various operating system installers or utilities from one place within the BIOS without the need of having to go retrieve the media to run the tool. [iPXE](https://ipxe.org/) is used to provide a user friendly menu from within the BIOS that lets you easily choose the operating system you want along with any specific types of versions or bootable flags.
@ -12,4 +13,4 @@ You can remote attach the ISO to servers, set it up as a rescue option in Grub,
Set `netbootxyz_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The netbooxyz web interface can be found at http://ansible_nas_host_or_ip:3002.
The netboot.xyz web interface can be found at <http://ansible_nas_host_or_ip:3002>.

View file

@ -1,7 +1,6 @@
# Nextcloud
Homepage: [https://nextcloud.com](https://nextcloud.com)
Homepage: <https://nextcloud.com>
## Usage

View file

@ -1,6 +1,6 @@
# NZBget
Homepage: [https://nzbget.net/](https://nzbget.net/)
Homepage: <https://nzbget.net/>
The most efficient Usenet downloader. NZBGet is written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources.
@ -8,4 +8,4 @@ The most efficient Usenet downloader. NZBGet is written in C++ and designed with
Set `nzbget_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The NZBget web interface can be found at http://ansible_nas_host_or_ip:6789, the default username is `nzbget` and password `tegbzn6789`. Change this once you've logged in!
The NZBget web interface can be found at <http://ansible_nas_host_or_ip:6789>, the default username is `nzbget` and password `tegbzn6789`. Change this once you've logged in!

View file

@ -1,6 +1,6 @@
# Octoprint
Homepage: [https://octoprint.org/](https://octoprint.org/)
Homepage: <https://octoprint.org/>
Octoprint is a control and monitoring application for your 3D printer. You can start and stop print jobs, view your webcam feed, move the print head and extruder manually and check your gcode files, all from a single web interface. Octoprint doesn't require modifications to the printer firmware; just make sure your NAS is physically connected to it with a USB cable.

View file

@ -1,12 +1,9 @@
# Ombi
Homepage: [https://ombi.io/](https://ombi.io/)
Homepage: <https://ombi.io/>
Ombi is a self-hosted web application that automatically gives your shared Plex or Emby users the ability to request content by themselves! Ombi can be linked to multiple TV Show and Movie DVR tools to create a seamless end-to-end experience for your users.
## Usage
Set `ombi_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.

View file

@ -1,6 +1,6 @@
# openHAB
Homepage: [https://www.openhab.org/](https://www.openhab.org/)
Homepage: <https://www.openhab.org/>
OpenHab is a vendor and technology agnostic open source automation software for your home.
It allows you to connect many different IoT-Devices (which in this case means "Intranet of Things") using custom bindings made by the community.

View file

@ -1,13 +1,16 @@
# Organizr
Homepage: [https://organizr.app/](https://organizr.app/)
Homepage: <https://organizr.app/>
Organizr aims to be your one-stop shop for your server's frontend.
Do you have quite a few services running on your computer or server? Do you have a lot of bookmarks, or have to memorize a bunch of IPs and ports? Organizr is here to help with that.
## Usage
Set `organizr_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Organizr web interface can be found at http://ansible_nas_host_or_ip:10081.
The Organizr web interface can be found at <http://ansible_nas_host_or_ip:10081>.

View file

@ -1,12 +1,13 @@
# overseerr
Homepage: [https://docs.overseerr.dev](https://docs.overseerr.dev)
Docker Container: [https://hub.docker.com/r/sctx/overseerr](https://hub.docker.com/r/sctx/overseerr)
Homepage: <https://docs.overseerr.dev>
Docker Container: <https://hub.docker.com/r/sctx/overseerr>
Overseerr is a free and open source software application for managing requests for your media library. It integrates with your existing services, such as Sonarr, Radarr, and Plex!
## Usage
## Usage
Set `overseerr_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The overseerr web interface can be found at http://ansible_nas_host_or_ip:5055.
The overseerr web interface can be found at <http://ansible_nas_host_or_ip:5055>.

View file

@ -1,6 +1,6 @@
# Paperless-ng
Homepage: [https://github.com/jonaswinkler/paperless-ng](https://github.com/jonaswinkler/paperless-ng)
Homepage: <https://github.com/jonaswinkler/paperless-ng>
Paperless is an application by Daniel Quinn and contributors that indexes your scanned documents and allows you to easily search for documents and store metadata alongside your documents.
@ -10,7 +10,7 @@ Paperless-ng is a fork of the original project, adding a new interface and many
Set `paperless_ng_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The paperless-ng web interface can be found at http://ansible_nas_host_or_ip:16922.
The paperless-ng web interface can be found at <http://ansible_nas_host_or_ip:16922>.
### Create the superuser

View file

@ -10,20 +10,19 @@ Set `piwigo_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
If you want to access Piwigo externally, set `piwigo_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The Piwigo web interface can be found at http://ansible_nas_host_or_ip:16923.
The Piwigo web interface can be found at <http://ansible_nas_host_or_ip:16923>.
## Specific Configuration
Optional configurations:
- Set `piwigo_mysql_user` in `inventories/<your_inventory>/group_vars/nas.yml` before installing Piwigo, this defaults to "piwigo".
- Set `piwigo_mysql_password` in `inventories/<your_inventory>/group_vars/nas.yml` before installing Piwigo, this defaults to "piwigo".
- Set `piwigo_mysql_root_password` in `inventories/<your_inventory>/group_vars/nas.yml` before installing Piwigo, this defaults to "piwigo".
- Set `piwigo_mysql_user` in `inventories/<your_inventory>/group_vars/nas.yml` before installing Piwigo, this defaults to "piwigo".
- Set `piwigo_mysql_password` in `inventories/<your_inventory>/group_vars/nas.yml` before installing Piwigo, this defaults to "piwigo".
- Set `piwigo_mysql_root_password` in `inventories/<your_inventory>/group_vars/nas.yml` before installing Piwigo, this defaults to "piwigo".
- On first run you'll need to enter database details:
- On first run you'll need to enter database details:
- Host: `db:3306`
- Username: the value of `piwigo_mysql_user`, defaults to "piwigo"
- Password: the value of `piwigo_mysql_password`, defaults to "piwigo"
- Database Name: `piwigo`
- Database tables prefix: should be prefilled with `piwigo_`
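As a sketch, the optional overrides above would sit in your configuration like this (the values shown are the documented "piwigo" defaults; change them before the first run):

```yaml
# inventories/<your_inventory>/nas.yml
piwigo_enabled: true

# inventories/<your_inventory>/group_vars/nas.yml
piwigo_mysql_user: "piwigo"
piwigo_mysql_password: "piwigo"
piwigo_mysql_root_password: "piwigo"
```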

View file

@ -1,6 +1,6 @@
# Plex
Homepage: [https://www.plex.tv/](https://www.plex.tv/)
Homepage: <https://www.plex.tv/>
Plex is a personal media server that also provides access to several external movie, web show, and podcast services. Allows you to stream music too. Apps for many devices, including e.g. chromecast integration.
@ -8,7 +8,7 @@ Plex is a personal media server that also provides access to several external mo
Set `plex_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Plex web interface can be found at http://ansible_nas_host_or_ip:32400/web/index.html.
The Plex web interface can be found at <http://ansible_nas_host_or_ip:32400/web/index.html>.
## Specific Configuration

View file

@ -1,4 +1,5 @@
# Prowlarr
Homepages: [prowlarr](https://github.com/Prowlarr/Prowlarr)
**Prowlarr** is an indexer manager/proxy built on the popular arr .net/reactjs base stack to integrate with your various PVR apps. Prowlarr supports both Torrent Trackers and Usenet Indexers. It integrates seamlessly with Sonarr, Radarr, Lidarr, and Readarr, offering complete management of your indexers with no per app Indexer setup required (we do it all).
@ -9,7 +10,6 @@ Set `prowlarr_enabled: true` in your `/inventories/[my inventory]/group_vars/nas
The Prowlarr web interface can be found at `http://ansible_nas_host_or_ip:9696` by default
## Specific Configuration
For comprehensive configuration instructions see the [Prowlarr wiki](https://wiki.servarr.com/prowlarr) or [Prowlarr Github page](https://github.com/Prowlarr/Prowlarr)

View file

@ -2,16 +2,15 @@
Homepage: [https://pyload.net/](https://pyload.net/)
Free and Open Source download manager written in Python and designed to be extremely lightweight, easily extensible and fully manageable via web
.
Free and Open Source download manager written in Python and designed to be extremely lightweight, easily extensible and fully manageable via web.
## Usage
Set `pyload_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
pyLoad's web interface can be found at http://ansible_nas_host_or_ip:8000
pyLoad's web interface can be found at <http://ansible_nas_host_or_ip:8000>.
## Specific Configuration
Default username is `pyload` and default password is `pyload`.
The default username is `pyload` and the default password is `pyload`.
In order to add or remove users, you will need to access the container from an interactive shell (easily done from Portainer, if installed), enter pyLoad's home directory `/opt/pyload`, run `python pyLoadCore.py -u` and follow the on-screen prompts. More commands to configure and customize pyLoad can be found on its website.

View file

@ -2,22 +2,26 @@
# PyTivo
Project Homepage:
[https://github.com/lucasnz/pytivo](https://github.com/lucasnz/pytivo)
<https://github.com/lucasnz/pytivo>
Docker Homepage:
[https://hub.docker.com/r/pinion/docker-pytivo](https://hub.docker.com/r/pinion/docker-pytivo)
<https://hub.docker.com/r/pinion/docker-pytivo>
PyTivo is both an HMO and GoBack server. Similar to TiVo Desktop, pyTivo
loads many standard video compression codecs and outputs mpeg2 video to
the TiVo. However, pyTivo is able to load MANY more file types than TiVo
Desktop. [http://pytivo.org/](http://pytivo.org/)
Desktop. <http://pytivo.org/>
## Usage
Set `pytivo_enabled: true` in your `group_vars/all.yml` file. The PyTivo
web interface can be found at http://ansible_nas_host_or_ip:9032.
web interface can be found at <http://ansible_nas_host_or_ip:9032>.
## Specific Configuration
PyTivo needs to be configured for use. Your ansible-nas media is
available to share via:
* /movies - Where your movies are stored
* /music - Where your music is stored
* /photos - Where your photos are stored

View file

@ -1,4 +1,5 @@
# Radarr
Homepage: [radarr](https://radarr.video/)
**Radarr** is an independent fork of Sonarr reworked for automatically downloading movies via Usenet and BitTorrent.

View file

@ -1,8 +1,8 @@
# AWS Route53 Dynamic DNS Updater
ddns-route53: [https://crazymax.dev/ddns-route53/](https://crazymax.dev/ddns-route53/)
ddns-route53: <https://crazymax.dev/ddns-route53/>
AWS Route53: [https://aws.amazon.com/route53/](https://aws.amazon.com/route53/)
AWS Route53: <https://aws.amazon.com/route53/>
If you want your Ansible-NAS accessible externally then you need a domain name. You will also need to set a wildcard host `A` record to point to your static IP, or enable this container to automatically update AWS Route53 with your dynamic IP address.

View file

@ -1,7 +1,7 @@
# RSS-Bridge
Homepage: [https://rss-bridge.github.io/rss-bridge/](https://rss-bridge.github.io/rss-bridge/)
Homepage: <https://rss-bridge.github.io/rss-bridge/>
RSS-Bridge is a PHP project capable of generating RSS and Atom feeds for websites that don't have one. It can be used on webservers or as a stand-alone application in CLI mode.
@ -11,4 +11,4 @@ Important: RSS-Bridge is not a feed reader or feed aggregator, but a tool to gen
Set `rssbridge_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The RSS-Bridge web interface can be found at http://ansible_nas_host_or_ip:8091.
The RSS-Bridge web interface can be found at <http://ansible_nas_host_or_ip:8091>.

View file

@ -1,6 +1,6 @@
# Sabnzbd
Homepage: [https://sabnzbd.org/)
Homepage: <https://sabnzbd.org/>
The time tested Usenet downloader provided with FreeNAS. It just works for those migrating from FreeNAS.
@ -8,4 +8,4 @@ The time tested Usenet downloader provided with FreeNAS. It just works for those
Set `sabnzbd_enabled: true` in your `/inventories/[my inventory]/group_vars/nas.yml` file.
The Sabnzbd web interface can be found at http://ansible_nas_host_or_ip:18080. Use this interface to configure the software upon first connection.
The Sabnzbd web interface can be found at <http://ansible_nas_host_or_ip:18080>. Use this interface to configure the software upon first connection.

View file

@ -1,5 +1,6 @@
# Sonarr
Homepages: [sonarr](https://sonarr.tv/)
Homepages: <https://sonarr.tv/>
**Sonarr** is a PVR for Usenet and BitTorrent users. It can monitor multiple RSS feeds for new episodes of your favorite shows and will grab, sort and rename them. It can also be configured to automatically upgrade the quality of files already downloaded when a better quality format becomes available.
@ -9,7 +10,6 @@ Set `sonarr_enabled: true` in your `/inventories/[my inventory]/group_vars/nas.y
The Sonarr web interface can be found at `http://ansible_nas_host_or_ip:8989` by default
## Specific Configuration
**First make sure Sonarr has permissions to write and read the `/download` and `/tv` folders**. Do this by ensuring the `sonarr_movies_directory:` and `sonarr_download_directory` settings are correct.
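A sketch of those settings (the paths are hypothetical; point them at your actual TV and download locations):

```yaml
# /inventories/[my inventory]/group_vars/nas.yml
sonarr_enabled: true
sonarr_movies_directory: "/mnt/storage/tv"           # hypothetical path
sonarr_download_directory: "/mnt/storage/downloads"  # hypothetical path
```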

View file

@ -1,8 +1,9 @@
# Speedtest-Tracker
Homepage: [https://github.com/henrywhitaker3/Speedtest-Tracker](https://github.com/henrywhitaker3/Speedtest-Tracker)
Docker Container: [https://hub.docker.com/r/henrywhitaker3/speedtest-tracker](https://hub.docker.com/r/henrywhitaker3/speedtest-tracker)
Homepage: <https://github.com/henrywhitaker3/Speedtest-Tracker>
Docker Container: <https://hub.docker.com/r/henrywhitaker3/speedtest-tracker>
Continuously track your internet speed
@ -12,4 +13,4 @@ Set `speedtest_enabled: true` in your `inventories/<your_inventory>/nas.yml` fil
If you want to access Speedtest-Tracker externally, don't forget to set `speedtest_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The Speedtest-Tracker interface can be found at http://ansible_nas_host_or_ip:8765.
The Speedtest-Tracker interface can be found at <http://ansible_nas_host_or_ip:8765>.

View file

@ -1,8 +1,10 @@
# Syncthing: Open Source Continuous File Synchronisation
Homepage: [https://syncthing.net/](https://syncthing.net/)
Github: [https://github.com/syncthing/syncthing](https://github.com/syncthing/syncthing)
Docker: [https://hub.docker.com/r/syncthing/syncthing](https://hub.docker.com/r/syncthing/syncthing)
Homepage: <https://syncthing.net/>
Github: <https://github.com/syncthing/syncthing>
Docker: <https://hub.docker.com/r/syncthing/syncthing>
Syncthing is a continuous file synchronization program. It synchronizes files
between two or more computers. It strives to fulfill the goals below in summary.

View file

@ -1,7 +1,7 @@
# Tautulli
Homepage: [https://tautulli.com/](https://tautulli.com/)
Homepage: <https://tautulli.com/>
Tautulli allows you to monitor your Plex Media Server.
@ -9,4 +9,4 @@ Tautulli allows you to monitor your Plex Media Server.
Set `tautulli_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Tautulli web interface can be found at http://ansible_nas_host_or_ip:8181.
The Tautulli web interface can be found at <http://ansible_nas_host_or_ip:8181>.

View file

@ -1,6 +1,6 @@
# The Lounge
Homepage: [https://thelounge.chat/](https://thelounge.chat/)
Homepage: <https://thelounge.chat/>
The Lounge is a self-hosted web IRC client.
@ -8,7 +8,7 @@ The Lounge is a self-hosted web IRC client.
Set `thelounge_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The Lounge web interface can be found at http://ansible_nas_host_or_ip:9000.
The Lounge web interface can be found at <http://ansible_nas_host_or_ip:9000>.
## Specific Configuration

View file

@ -1,6 +1,6 @@
# TiddlyWiki
Homepage: [https://www.tiddlywiki.com/](https://www.tiddlywiki.com/)
Homepage: <https://www.tiddlywiki.com/>
TiddlyWiki is a unique non-linear notebook for capturing, organizing, and sharing complex information. Use it to keep your to-do list, to plan an essay or novel, or to organise your wedding. Record every thought that crosses your brain, or build a flexible and responsive website. Unlike conventional online services, TiddlyWiki lets you choose where to keep your data, guaranteeing that in the decades to come you will still be able to use the notes you take today.
@ -10,7 +10,7 @@ Set `tiddlywiki_enabled: true` in your `inventories/<your_inventory>/nas.yml` fi
If you want to access TiddlyWiki externally, set `tiddlywiki_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The TiddlyWiki web interface can be found at http://ansible_nas_host_or_ip:8092.
The TiddlyWiki web interface can be found at <http://ansible_nas_host_or_ip:8092>.
## Specific Configuration

View file

@ -1,7 +1,8 @@
# Time Machine
Apple docs: [https://support.apple.com/en-us/HT201250](https://support.apple.com/en-us/HT201250)
Docker image: [https://github.com/awlx/samba-timemachine](https://github.com/awlx/samba-timemachine)
Apple docs: <https://support.apple.com/en-us/HT201250>
Docker image: <https://github.com/awlx/samba-timemachine>
Time Machine is an application that allows you to back up files from your Mac.

View file

@ -1,6 +1,6 @@
# Traefik
Homepage: [https://traefik.io](https://traefik.io)
Homepage: <https://traefik.io>
Traefik is a reverse proxy used to provide external access to your Ansible-NAS box. Additionally, Traefik will automatically request and renew SSL certificates for you.
@ -13,7 +13,7 @@ See [External Access](../configuration/external_access.md) for more info.
Set `traefik_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
Traefik's web interface can be found at http://ansible_nas_host_or_ip:8083.
Traefik's web interface can be found at <http://ansible_nas_host_or_ip:8083>.
## Specific Configuration

View file

@ -1,6 +1,6 @@
# Transmission
Homepage: [https://transmissionbt.com/](https://transmissionbt.com/)
Homepage: <https://transmissionbt.com/>
Transmission is a free BitTorrent client. Two versions are provided - one that tunnels through OpenVPN and one that connects
directly.
@ -9,18 +9,17 @@ directly.
Set `transmission_enabled: true`, or `transmission_with_openvpn_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
Transmission's web interface can be found at http://ansible_nas_host_or_ip:9091 (with OpenVPN) or http://ansible_nas_host_or_ip:9092 (without OpenVPN).
Transmission's web interface can be found at <http://ansible_nas_host_or_ip:9091> (with OpenVPN) or <http://ansible_nas_host_or_ip:9092> (without OpenVPN).
## Specific Configuration
If you enable Transmission with OpenVPN, you'll need to add the following to your inventory `all.yml`:
```
```yaml
openvpn_username: super_secret_username
openvpn_password: super_secret_password
openvpn_provider: NORDVPN
openvpn_config: uk686.nordvpn.com.udp
```
See https://hub.docker.com/r/haugene/transmission-openvpn/ for supported VPN providers.
See <https://hub.docker.com/r/haugene/transmission-openvpn/> for supported VPN providers.

View file

@ -1,8 +1,10 @@
# Ubooquity Comic and Book Server
Homepage: [https://vaemendis.net/ubooquity/](https://vaemendis.net/ubooquity/)
Documentation: [https://vaemendis.github.io/ubooquity-doc/](https://vaemendis.github.io/ubooquity-doc/)
Docker Image: [https://hub.docker.com/r/linuxserver/ubooquity/](https://hub.docker.com/r/linuxserver/ubooquity/)
Homepage: <https://vaemendis.net/ubooquity/>
Documentation: <https://vaemendis.github.io/ubooquity-doc/>
Docker Image: <https://hub.docker.com/r/linuxserver/ubooquity/>
Ubooquity is a free, lightweight and easy-to-use home server for your comics and ebooks. Use it to access your files from anywhere, with a tablet, an e-reader, a phone or a computer.
@ -14,11 +16,10 @@ Access the webui at http://<server>:2202/ubooquity by default. See specific conf
## Specific Configuration
Important note: if you want to access Ubooquity externally through Traefik (at ubooquity.yourdomain.tld), you need to go to http://ansible_nas_host_or_ip:2203/ubooquity/admin and set the reverse proxy prefix to blank under "Advanced". Otherwise you will need to append "/ubooquity" to the url in order to access.
Important note: if you want to access Ubooquity externally through Traefik (at ubooquity.yourdomain.tld), you need to go to <http://ansible_nas_host_or_ip:2203/ubooquity/admin> and set the reverse proxy prefix to blank under "Advanced". Otherwise you will need to append "/ubooquity" to the url in order to access.
### Admin login
The admin portal is not exposed through Traefik. You can access the admin portal on port 2203.
Upon your first run, the address is http://ansible_nas_host_or_ip:2203/ubooquity/admin. You will be able to set the admin password here.
Upon your first run, the address is <http://ansible_nas_host_or_ip:2203/ubooquity/admin>. You will be able to set the admin password here.

View file

@ -1,8 +1,8 @@
# uTorrent
Homepage: [https://www.utorrent.com/](https://www.utorrent.com/)
Docker Container: [https://hub.docker.com/r/ekho/utorrent](https://hub.docker.com/r/ekho/utorrent)
Homepage: <https://www.utorrent.com/>
Docker Container: <https://hub.docker.com/r/ekho/utorrent>
## Usage
@ -10,7 +10,7 @@ Set `utorrent_enabled: true` in your `inventories/<your_inventory>/nas.yml` file
If you want to access uTorrent externally, don't forget to set `utorrent_available_externally: "true"` in your `inventories/<your_inventory>/nas.yml` file.
The uTorrent web interface can be found at http://ansible_nas_host_or_ip:8111/gui:
The uTorrent web interface can be found at <http://ansible_nas_host_or_ip:8111/gui>:
- Username: admin
- Password: <leave blank>
- Username: admin
- Password: <leave blank>

View file

@ -10,7 +10,7 @@ Set `virtual_desktop_enabled: true` in your `inventories/<your_inventory>/nas.ym
By default `ansible_nas_user` will be granted access with a password of `topsecret` and sudo rights. To change this, or to add additional users, override `vd_users` in your `nas.yml`:
```
```yaml
vd_users:
  - username: "{{ ansible_nas_user }}"
    password: "topsecret"

View file

@ -1,6 +1,6 @@
# wallabag
Homepage: [https://www.wallabag.org/](https://www.wallabag.org/)
Homepage: <https://www.wallabag.org/>
wallabag is a self-hostable PHP application allowing you to not miss any content anymore. Click, save and read it when you can. It extracts content so that you can read it when you have time.
@ -14,4 +14,4 @@ I recommend using the mobile app, which will sync with this installation so you
The default credentials are wallabag:wallabag
The wallabag web interface can be found at http://ansible_nas_host_or_ip:7780.
The wallabag web interface can be found at <http://ansible_nas_host_or_ip:7780>.

View file

@ -1,6 +1,6 @@
# Watchtower
Homepage: [https://github.com/v2tec/watchtower](https://github.com/v2tec/watchtower)
Homepage: <https://github.com/v2tec/watchtower>
A process for watching your Docker containers and automatically updating and restarting them whenever their base image is refreshed.

View file

@ -1,8 +1,8 @@
# YouTubeDL-Material
Homepage: [https://github.com/Tzahi12345/YoutubeDL-Material](https://github.com/Tzahi12345/YoutubeDL-Material)
Docker Container: [https://hub.docker.com/r/tzahi12345/youtubedl-material](https://hub.docker.com/r/tzahi12345/youtubedl-material)
Homepage: <https://github.com/Tzahi12345/YoutubeDL-Material>
Docker Container: <https://hub.docker.com/r/tzahi12345/youtubedl-material>
YoutubeDL-Material is a Material Design frontend for youtube-dl. It's coded using Angular 9 for the frontend, and Node.js on the backend.
@ -10,7 +10,7 @@ YoutubeDL-Material is a Material Design frontend for youtube-dl. It's coded usin
Set `youtubedlmaterial_enabled: true` in your `inventories/<your_inventory>/nas.yml` file.
The YouTubeDL-Material web interface can be found at http://ansible_nas_host_or_ip:8998.
The YouTubeDL-Material web interface can be found at <http://ansible_nas_host_or_ip:8998>.
## Specific Configuration

View file

@ -2,7 +2,7 @@
By default, applications can be found on the ports listed below.
# Application Ports
## Default application ports
By default, applications can be found on the ports listed below.
@ -102,4 +102,3 @@ By default, applications can be found on the ports listed below.
| Wallabag | 7780 | Bridge | HTTP |
| YouTubeDL-Material | 8998 | Bridge | HTTP |
| ZNC | 6677 | Bridge | |

View file

@ -4,8 +4,8 @@
Ensure that you have `portainer_enabled: true` in your `group_vars/all.yml` file, and have run the playbook so that Portainer is up and running.
Hit Portainer on http://ansible_nas_host_or_ip:9000. You can now deploy an 'App Template' or head to 'Containers' and manually enter container configuration.
Hit Portainer on <http://ansible_nas_host_or_ip:9000>. You can now deploy an 'App Template' or head to 'Containers' and manually enter container configuration.
## Using a Custom Ansible Task
Needs to be docced
TODO: Needs to be docced

View file

@ -7,7 +7,7 @@ There are a number of steps required to enable external access to the applicatio
- Router configuration
- Enable specific applications for external access
## :skull: :skull: :skull: Warning! :skull: :skull: :skull:
## 💀💀💀 Warning! 💀💀💀
Enabling access to applications externally **does not** automatically secure them. If you can access an application from within your own network without a username and password, this will also be the case externally.

View file

@ -4,8 +4,7 @@ Ansible-NAS uses the awesome [bertvv.samba](https://github.com/bertvv/ansible-ro
## Share Examples
Ansible-NAS shares are defined in the `samba_shares` section within `group_vars/all.yml`. The examples provided are
"public" shares that anyone on your LAN can read and write to.
Ansible-NAS shares are defined in the `samba_shares` section within `group_vars/all.yml`. The examples provided are "public" shares that anyone on your LAN can read and write to.
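As a rough sketch of what one of those entries looks like (the share name, comment and path below are made up, and the bertvv.samba role documentation is the authority on exactly which keys are supported):

```yaml
samba_shares:
  - name: downloads                  # exposed as \\your-nas\downloads
    comment: 'Public downloads share'
    path: "{{ downloads_root }}"     # any directory on the host works
    public: yes                      # anyone on the LAN can read...
    writable: yes                    # ...and write
```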
## File Permissions

View file

@ -1,15 +1,11 @@
# Home
![Ansible-NAS Logo](https://raw.githubusercontent.com/davestephens/ansible-nas/master/misc/ansible-nas.png "Ansible-NAS Logo")
After getting burned by broken FreeNAS updates one too many times, I figured I
could do a much better job myself using just a stock Ubuntu install, some clever
Ansible config and a bunch of Docker containers. Ansible-NAS was born!
After getting burned by broken FreeNAS updates one too many times, I figured I could do a much better job myself using just a stock Ubuntu install, some clever Ansible config and a bunch of Docker containers. Ansible-NAS was born!
## Getting Started
Head to [installation](installation.md) if you're ready to roll, or to
[testing](testing.md) if you want to spin up a test Virtual Machine first. Once
you're done, check out the [post-installation](post_installation.md) steps.
Head to [installation](installation.md) if you're ready to roll, or to [testing](testing.md) if you want to spin up a test Virtual Machine first. Once you're done, check out the [post-installation](post_installation.md) steps.
If this is all very confusing, there is also an [overview](overview.md) of the
project and what is required for complete beginners. If you're only confused
about ZFS, we'll help you [get started](zfs/zfs_overview.md) as well.
If this is all very confusing, there is also an [overview](overview.md) of the project and what is required for complete beginners. If you're only confused about ZFS, we'll help you [get started](zfs/zfs_overview.md) as well.

View file

@ -1,15 +1,16 @@
:skull: :skull: :skull: Before running anything, check out the playbook and understand what it
does. Run it against a VM and make sure you're happy. ***Do not*** blindly
download code from the internet and trust that it's going to work as you expect.
:skull: :skull: :skull:
# Installation
💀 💀 💀
Before running anything, check out the playbook and understand what it does. Run it against a VM and make sure you're happy. ***Do not*** blindly download code from the internet and trust that it's going to work as you expect.
💀 💀 💀
## Read This First...
Calling this page "installation" is a bit of a misnomer. Ansible-NAS isn't *installed* per-se, it is a bunch of automation that installs other software onto your server. Ansible-NAS relies heavily on Ansible's [variable prescedence](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable) to do its job. Ansible-NAS
defines its installable software with roles with (mostly) sane defaults, these can then be enabled and the settings overridden in your inventory `nas.yml` file.
Calling this page "installation" is a bit of a misnomer. Ansible-NAS isn't *installed* per se; it is a bunch of automation that installs other software onto your server. Ansible-NAS relies heavily on Ansible's [variable precedence](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable) to do its job. Ansible-NAS defines its installable software as roles with (mostly) sane defaults; these can then be enabled and their settings overridden in your inventory `nas.yml` file.
A basic level of understanding of Ansible is required, or you're going to have a confusing time setting up your NAS. If you're willing to learn then great, but please don't raise issues because this is the first time you've looked at Ansible and you don't understand
why it's doing what it's doing. I'd love to teach the world Ansible...but I have a day job.
A basic level of understanding of Ansible is required, or you're going to have a confusing time setting up your NAS. If you're willing to learn then great, but please don't raise issues because this is the first time you've looked at Ansible and you don't understand why it's doing what it's doing. I'd love to teach the world Ansible...but I have a day job.
## Running Ansible-NAS

View file

@ -1,89 +1,33 @@
Ansible-NAS currently assumes you know your way around a server. This page is an
overview for absolute NAS beginners so they can decide if it is right for them.
# Overview
Ansible-NAS currently assumes you know your way around a server. This page is an overview for absolute NAS beginners so they can decide if it is right for them.
## The big picture
To start off _really_ simple: A NAS ([Network Attached
Storage](https://en.wikipedia.org/wiki/Network-attached_storage)) is a server
mostly for home or other small networks that offers file storage. It's usually a
small box that sits in the corner and runs 24/7. These days, a NAS doesn't just
only handle files, but also offers other services, for instance video streaming
with [Plex](https://www.plex.tv/) or [Emby](https://emby.media/index.html). You
can buy consumer NAS boxes from [various
manufacturers](https://en.wikipedia.org/wiki/List_of_NAS_manufacturers) where
you just have to add the hard drives, or you can configure your own hardware and
use open-source software as the operating system.
To start off _really_ simple: A NAS ([Network Attached Storage](https://en.wikipedia.org/wiki/Network-attached_storage)) is a server mostly for home or other small networks that offers file storage. It's usually a small box that sits in the corner and runs 24/7. These days, a NAS doesn't just handle files, but also offers other services, for instance video streaming with [Plex](https://www.plex.tv/) or [Emby](https://emby.media/index.html). You can buy consumer NAS boxes from [various manufacturers](https://en.wikipedia.org/wiki/List_of_NAS_manufacturers) where you just have to add the hard drives, or you can configure your own hardware and use open-source software as the operating system.
One example of the second variant you'll see mentioned here is
[FreeNAS](https://freenas.org/). It is based on
[FreeBSD](https://www.freebsd.org/), which like Linux belongs to the family of
Unix-like operating systems. One strength of FreeBSD/FreeNAS is that it
includes the powerful ZFS file system
([OpenZFS](http://www.open-zfs.org/wiki/Main_Page), to be exact). However, it
does not support the [Docker](https://www.docker.com/) containers the way Linux
does. Also, the Linux ecosystem is larger. On the other hand, very few Linux
distributions include ZFS out of the box because of licensing issues.
One example of the second variant you'll see mentioned here is [FreeNAS](https://freenas.org/). It is based on [FreeBSD](https://www.freebsd.org/), which like Linux belongs to the family of Unix-like operating systems. One strength of FreeBSD/FreeNAS is that it includes the powerful ZFS file system ([OpenZFS](http://www.open-zfs.org/wiki/Main_Page), to be exact). However, it does not support the [Docker](https://www.docker.com/) containers the way Linux does. Also, the Linux ecosystem is larger. On the other hand, very few Linux distributions include ZFS out of the box because of licensing issues.
Ansible-NAS in its default form attempts to have the best of both worlds by
using Docker on Linux with ZFS. This is possible because the
[Ubuntu](https://www.ubuntu.com/server) Linux distribution supports both
technologies. As the name says, Ansible-NAS uses
[Ansible](https://www.ansible.com/) server automation which is usually deployed
on big multi-machine enterprise systems, not small home servers the size of a
breadbox.
Ansible-NAS in its default form attempts to have the best of both worlds by using Docker on Linux with ZFS. This is possible because the [Ubuntu](https://www.ubuntu.com/server) Linux distribution supports both technologies. As the name says, Ansible-NAS uses [Ansible](https://www.ansible.com/) server automation which is usually deployed on big multi-machine enterprise systems, not small home servers the size of a breadbox.
## Before you take the plunge
The commercial NAS vendors try to make setting up and running a NAS as simple
and painless as possible - for a fee, obviously. The open-source NAS software
providers have lots of resources to help you get started with your own hardware.
FreeNAS for instance comes with extensive documentation, good introductions to
ZFS and other topics, and a large community to lean on.
The commercial NAS vendors try to make setting up and running a NAS as simple and painless as possible - for a fee, obviously. The open-source NAS software providers have lots of resources to help you get started with your own hardware. FreeNAS for instance comes with extensive documentation, good introductions to ZFS and other topics, and a large community to lean on.
With Ansible-NAS, at this point at least, you're pretty much on your own. Though
there is a [Gitter](https://gitter.im/Ansible-NAS/Chat) chat room (see
[support](support.md)), you're expected to have some familiarity with the
technologies involved and be able to set up the basic stuff yourself.
With Ansible-NAS, at this point at least, you're pretty much on your own. Though there is a [Gitter](https://gitter.im/Ansible-NAS/Chat) chat room (see [support](support.md)), you're expected to have some familiarity with the technologies involved and be able to set up the basic stuff yourself.
As a to-do list, before you can even install Ansible-NAS, you'll have to:
1. Choose, buy, configure, and test your own **hardware**. If you're paranoid (a
good mindset when dealing with servers), you'll probably want an
uninterruptible power supply (UPS) of some sort as well as SMART monitoring
for your hard drives. See the [FreeNAS hardware
requirements](https://freenas.org/hardware-requirements/) as a guideline, but
remember you'll also be running Docker. If you use ZFS (see below), take into
account it [loves RAM](zfs/zfs_overview.md) and prefers to have the hard
drives all to itself.
1. Choose, buy, configure, and test your own **hardware**. If you're paranoid (a good mindset when dealing with servers), you'll probably want an uninterruptible power supply (UPS) of some sort as well as SMART monitoring for your hard drives. See the [FreeNAS hardware requirements](https://freenas.org/hardware-requirements/) as a guideline, but remember you'll also be running Docker. If you use ZFS (see below), take into account it [loves RAM](zfs/zfs_overview.md) and prefers to have the hard drives all to itself.
1. Install **Ubuntu Server**, currently 20.04 LTS, and keep it updated. You'll
probably want to perform other basic setup tasks like hardening SSH and
including email notifications. There are [various
guides](https://devanswers.co/ubuntu-20-04-initial-server-setup/) for this,
but if you're just getting started, you'll probably need a book.
1. Install **Ubuntu Server**, currently 20.04 LTS, and keep it updated. You'll probably want to perform other basic setup tasks like hardening SSH and including email notifications. There are [various guides](https://devanswers.co/ubuntu-20-04-initial-server-setup/) for this, but if you're just getting started, you'll probably need a book.
You will probably want to install a specialized filesystem for bulk storage such
as [ZFS](http://www.open-zfs.org/wiki/Main_Page) or
[Btrfs](https://btrfs.wiki.kernel.org/index.php/Main_Page). Both offer features
such as snapshots, checksumming and scrubbing to protect your data against
bitrot, ransomware and other nasties. Ansible-NAS historically prefers **ZFS**
because this lets you swap storage pools with
[FreeNAS](https://freenas.org/zfs/). A [brief introduction](zfs/zfs_overview.md)
to ZFS is included in the Ansible-NAS documentation, as well as [an
example](zfs/zfs_configuration.md) of a very simple ZFS setup.
You will probably want to install a specialized filesystem for bulk storage such as [ZFS](http://www.open-zfs.org/wiki/Main_Page) or [Btrfs](https://btrfs.wiki.kernel.org/index.php/Main_Page). Both offer features such as snapshots, checksumming and scrubbing to protect your data against bitrot, ransomware and other nasties. Ansible-NAS historically prefers **ZFS** because this lets you swap storage pools with [FreeNAS](https://freenas.org/zfs/). A [brief introduction](zfs/zfs_overview.md) to ZFS is included in the Ansible-NAS documentation, as well as [an example](zfs/zfs_configuration.md) of a very simple ZFS setup.
After that, you can continue with the actual [installation](installation.md) of
Ansible-NAS.
After that, you can continue with the actual [installation](installation.md) of Ansible-NAS.
## How to experiment
The easiest way to take Ansible-NAS for a spin is in a virtual machine, for
instance in [VirtualBox](https://www.virtualbox.org/) or
[libvirt](https://libvirt.org). You'll want to create three virtual hard drives
for testing: one of the actual NAS, and the two others to create a mirrored ZFS
pool. This will let you experiment with installing, configuring, and running a
complete system.
The easiest way to take Ansible-NAS for a spin is in a virtual machine, for instance in [VirtualBox](https://www.virtualbox.org/) or [libvirt](https://libvirt.org). You'll want to create three virtual hard drives for testing: one for the actual NAS, and two others to create a mirrored ZFS pool. This will let you experiment with installing, configuring, and running a complete system.
A [Vagrant](https://vagrantup.com) _Vagrantfile_ and launch script are also
available (`tests/test-vagrant.sh`), see the [testing](testing.md) page for more
details.
A [Vagrant](https://vagrantup.com) _Vagrantfile_ and launch script are also available (`tests/test-vagrant.sh`); see the [testing](testing.md) page for more details.

View file

@ -4,11 +4,8 @@
Look through the `roles` directory in the Ansible-NAS source code for applications to enable.
If you see something you like, read its docs to find out what variable you need to set in your inventory `nas.yml`, and set it to true.
Run the playbook again, and you're done.
If you see something you like, read its docs to find out what variable you need to set in your inventory `nas.yml`, and set it to true. Run the playbook again, and you're done.
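As a minimal sketch, using Tautulli (documented earlier in this guide) as the example application, the change to your inventory is a single line; afterwards, re-run the playbook exactly as you did for the initial install:

```yaml
# added to your inventory nas.yml
tautulli_enabled: true
```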
## Configure Heimdall
[Heimdall](https://heimdall.site/) is configured out of the box to give you a dashboard that pulls together all the applications you install with Ansible-NAS.

View file

@ -1,8 +1,8 @@
## Vagrant
# Vagrant
A [Vagrant](https://www.vagrantup.com/) Vagrantfile and launch script (`tests/test-vagrant.sh`) are provided to spin up a testing VM. The config in `tests/test.yml` is used by the script to override any existing config in `group_vars/all.yml`.
By default the VM will be available on 172.30.1.5. If everything has worked correctly after running `tests/test-vagrant.sh`, you should be able to connect to Heimdall on http://172.30.1.5:10080.
By default the VM will be available on 172.30.1.5. If everything has worked correctly after running `tests/test-vagrant.sh`, you should be able to connect to Heimdall on <http://172.30.1.5:10080>.
After making changes to the playbook, you can apply them to the running VM by running `vagrant provision`.

View file

@ -11,9 +11,7 @@ This will make updates from `master` much simpler, as there will be no requireme
Instructions to upgrade from prior to January 2020 ([this](https://github.com/davestephens/ansible-nas/commit/52c7fef3aba08e30331931747c81fb7b3bfd359a) commit or earlier):
- Move your `group_vars/all.yml` somewhere safe.
- Pull from master. There shouldn't be any merge conflicts unless you've been hacking on the project.
- Create your own inventory and config files by copying `inventories/sample` to your own directory:
`cp -rfp inventories/sample inventories/my-ansible-nas`
@ -23,6 +21,4 @@ Instructions to upgrade from prior to January 2020 ([this]([this](https://github
- Then:
- **Quick and Dirty:** Copy the contents of your `all.yml` into `inventories/my-ansible-nas/group_vars/nas.yml`.
- **Nice and Tidy:** Copy only the differences between your own `all.yml` and the distribution `group_vars/all.yml` into `inventories/my-ansible-nas/group_vars/nas.yml`. This is likely to be things like `ansible_nas_hostname`, `samba_shares`, `ansible_nas_timezone`, enabled applications, any application tweaks you've made in config etc.

View file

@ -1,298 +1,182 @@
# ZFS Configuration
This text deals with specific ZFS configuration questions for Ansible-NAS. If
you are new to ZFS and are looking for the big picture, please read the [ZFS
overview](zfs_overview.md) introduction first.
This text deals with specific ZFS configuration questions for Ansible-NAS. If you are new to ZFS and are looking for the big picture, please read the [ZFS overview](zfs_overview.md) introduction first.
## Just so there is no misunderstanding
Unlike other NAS variants, Ansible-NAS does not install, configure or manage the
disks or file systems for you. It doesn't care which file system you use - ZFS,
Btrfs, XFS or EXT4, take your pick. Nor does it provides a mechanism for
snapshots or disk monitoring. As Tony Stark said to Loki in _Avengers_: It's all
on you.
Unlike other NAS variants, Ansible-NAS does not install, configure or manage the disks or file systems for you. It doesn't care which file system you use - ZFS, Btrfs, XFS or EXT4, take your pick. Nor does it provide a mechanism for snapshots or disk monitoring. As Tony Stark said to Loki in _Avengers_: It's all on you.
However, Ansible-NAS has traditionally been used with the powerful ZFS
filesystem. Since out of the box support for [ZFS on
Linux](https://zfsonlinux.org/) with Ubuntu is comparatively new, this text
shows how to set up a simple storage configuration. To paraphrase Nick Fury from
_Winter Soldier_: We do share. We're nice like that.
However, Ansible-NAS has traditionally been used with the powerful ZFS filesystem. Since out of the box support for [ZFS on Linux](https://zfsonlinux.org/) with Ubuntu is comparatively new, this text shows how to set up a simple storage configuration. To paraphrase Nick Fury from _Winter Soldier_: We do share. We're nice like that.
> Using ZFS for Docker containers is currently not covered by this document. See
> [the official Docker ZFS
> documentation](https://docs.docker.com/storage/storagedriver/zfs-driver/)
> instead.
> Using ZFS for Docker containers is currently not covered by this document. See [the official Docker ZFS documentation](https://docs.docker.com/storage/storagedriver/zfs-driver/) instead.
## The obligatory warning
We take no responsibility for any bad thing that might happen if you follow this
guide. We strongly suggest you test these procedures in a virtual machine first.
Always, always, always backup your data.
We take no responsibility for any bad thing that might happen if you follow this guide. We strongly suggest you test these procedures in a virtual machine first. Always, always, always backup your data.
## The basic setup
For this example, we're assuming two identical spinning rust hard drives for
Ansible-NAS storage. These two drives will be **mirrored** to provide
redundancy. The actual Ubuntu system will be on a different drive and is not our
concern.
For this example, we're assuming two identical spinning rust hard drives for Ansible-NAS storage. These two drives will be **mirrored** to provide redundancy. The actual Ubuntu system will be on a different drive and is not our concern.
> [Root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2020.04%20Root%20on%20ZFS.html)
is possible, but not something that has been tested with Ansible-NAS.
> [Root on ZFS](https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2020.04%20Root%20on%20ZFS.html) is possible, but not something that has been tested with Ansible-NAS.
The Ubuntu kernel is already ready for ZFS. We only need the utility package
which we install with `sudo apt install zfsutils`.
The Ubuntu kernel already ships with ZFS support. We only need the userland utilities, which we install with `sudo apt install zfsutils-linux`.
### Creating a pool
We assume you don't mind totally destroying whatever data might be on your two
storage drives, have used a tool such as `gparted` to remove any existing
partitions, and have installed a new GPT partition table on each drive. To
create our ZFS pool, we will use a command in this form:
We assume you don't mind totally destroying whatever data might be on your two storage drives, have used a tool such as `gparted` to remove any existing partitions, and have installed a new GPT partition table on each drive. To create our ZFS pool, we will use a command in this form:
```
```bash
sudo zpool create -o ashift=<ASHIFT> <NAME> mirror <DRIVE1> <DRIVE2>
```
The options from simple to complex are:
**NAME**: ZFS pools traditionally take their names from characters in the [The
Matrix](https://www.imdb.com/title/tt0133093/fullcredits). The two most common
are `tank` and `dozer`. Whatever you use, it should be short - think `ash`, not
`xenomorph`.
**NAME**: ZFS pools traditionally take their names from characters in [The Matrix](https://www.imdb.com/title/tt0133093/fullcredits). The two most common are `tank` and `dozer`. Whatever you use, it should be short - think `ash`, not `xenomorph`.
**DRIVES**: The Linux command `lsblk` will give you a quick overview of the
hard drives in the system. However, we don't pass the drive specification in the
format `/dev/sde` because this is not persistent. Instead,
[always use](https://github.com/zfsonlinux/zfs/wiki/FAQ#selecting-dev-names-when-creating-a-pool)
the output of `ls /dev/disk/by-id/` to find the drives' IDs.
**DRIVES**: The Linux command `lsblk` will give you a quick overview of the hard drives in the system. However, we don't pass the drive specification in the format `/dev/sde` because this is not persistent. Instead, [always use](https://github.com/zfsonlinux/zfs/wiki/FAQ#selecting-dev-names-when-creating-a-pool) the output of `ls /dev/disk/by-id/` to find the drives' IDs.
**ASHIFT**: This is required to pass the [sector
size](https://github.com/zfsonlinux/zfs/wiki/FAQ#advanced-format-disks) of the
drive to ZFS for optimal performance. You might have to do this by hand because
some drives lie: Whereas modern drives have 4k sector sizes (or 8k for many
SSDs), they will report 512 bytes because Windows XP [can't handle 4k
sectors](https://support.microsoft.com/en-us/help/2510009/microsoft-support-policy-for-4k-sector-hard-drives-in-windows).
ZFS tries to [catch the
liars](https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c) and
use the correct value. However, this sometimes fails, and you have to add it by
hand.
**ASHIFT**: This is required to pass the [sector size](https://github.com/zfsonlinux/zfs/wiki/FAQ#advanced-format-disks) of the drive to ZFS for optimal performance. You might have to do this by hand because some drives lie: Whereas modern drives have 4k sector sizes (or 8k for many SSDs), they will report 512 bytes because Windows XP [can't handle 4k sectors](https://support.microsoft.com/en-us/help/2510009/microsoft-support-policy-for-4k-sector-hard-drives-in-windows). ZFS tries to [catch the liars](https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c) and use the correct value. However, this sometimes fails, and you have to add it by hand.
The `ashift` value is a power of two, so we have **9** for 512 bytes, **12** for
4k, and **13** for 8k. You can create a pool without this parameter and then use
`zdb -C | grep ashift` to see what ZFS generated automatically. If it isn't what
you think, destroy the pool again and add it manually.
The `ashift` value is a power of two, so we have **9** for 512 bytes, **12** for 4k, and **13** for 8k. You can create a pool without this parameter and then use `zdb -C | grep ashift` to see what ZFS generated automatically. If it isn't what you think, destroy the pool again and add it manually.
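A sketch of that check-and-redo workflow, using the same placeholder drive IDs as above:

```bash
# Create the pool without forcing ashift and see what ZFS detected
sudo zpool create tank mirror <DRIVE1> <DRIVE2>
zdb -C | grep ashift        # expect 12 for 4k-sector drives, 13 for 8k SSDs
# If the value is wrong, destroy the pool and recreate it with -o ashift=12
sudo zpool destroy tank
```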
In our pretend case, we use two 3 TB WD Red drives. Listing all drives by ID
gives us something like this, but with real serial numbers:
In our pretend case, we use two 3 TB WD Red drives. Listing all drives by ID gives us something like this, but with real serial numbers:
```
```raw
ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN01
ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN02
```
WD Reds have a 4k sector size. The actual command to create the pool would then be:
```
```bash
sudo zpool create -o ashift=12 tank mirror ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN01 ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN02
```
Our new pool is named `tank` and is mirrored. To see information about it, use
`zpool status tank` (no `sudo` necessary). If you screwed up (usually with
`ashift`), use `sudo zpool destroy tank` and start over _now_ before it's too
late.
Our new pool is named `tank` and is mirrored. To see information about it, use `zpool status tank` (no `sudo` necessary). If you screwed up (usually with `ashift`), use `sudo zpool destroy tank` and start over _now_ before it's too late.
### Pool and filesystem properties
Pools have properties that apply either to the pool itself or to filesystems
created in the pool. You can use the command `zpool get all tank` to see the
pool properties and `zfs get all tank` to see the filesystem properties. Most
default values are perfectly sensible, some you'll [want to
change](https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/). Setting
defaults makes life easier when we create our filesystems.
Pools have properties that apply either to the pool itself or to filesystems created in the pool. You can use the command `zpool get all tank` to see the pool properties and `zfs get all tank` to see the filesystem properties. Most default values are perfectly sensible; some you'll [want to change](https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/). Setting defaults makes life easier when we create our filesystems.
```
```bash
sudo zpool set autoexpand=on tank
sudo zfs set atime=off tank
sudo zfs set compression=lz4 tank
```
`autoexpand=on` lets the pool grow when you add larger hard drives. `atime=off`
means that your system won't update a time stamp every time a file is accessed,
something which would use a lot of resources. Usually, you don't care.
Compression is a no-brainer on modern CPUs and should be on by default (we will
discuss exceptions for compressed media files later).
`autoexpand=on` lets the pool grow when you add larger hard drives. `atime=off` means that your system won't update a time stamp every time a file is accessed, something which would use a lot of resources. Usually, you don't care. Compression is a no-brainer on modern CPUs and should be on by default (we will discuss exceptions for compressed media files later).
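To double-check that the new defaults took hold, you can query just the properties we changed:

```bash
zpool get autoexpand tank          # pool-level property
zfs get atime,compression tank     # filesystem-level properties
```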
## Creating filesystems
To actually store the data, we need filesystems (also known as "datasets"). For
our very simple default Ansible-NAS setup, we will create two: One filesystem
for movies (`movies_root` in `all.yml`) and one for downloads
(`downloads_root`).
To actually store the data, we need filesystems (also known as "datasets"). For our very simple default Ansible-NAS setup, we will create two: One filesystem for movies (`movies_root` in `all.yml`) and one for downloads (`downloads_root`).
### Movies (and other large, pre-compressed files)
We first create the basic filesystem:
```
```bash
sudo zfs create tank/movies
```
Movie files are usually rather large, already in a compressed format and for
security reasons, the files stored there shouldn't be executable. We change the
properties of the filesystem accordingly:
Movie files are usually rather large and already in a compressed format, and for security reasons the files stored there shouldn't be executable. We change the properties of the filesystem accordingly:
```
```bash
sudo zfs set recordsize=1M tank/movies
sudo zfs set compression=off tank/movies
sudo zfs set exec=off tank/movies
```
The **recordsize** here is set to the currently largest possible value [to
increase performance](https://jrs-s.net/2019/04/03/on-zfs-recordsize/) and save
storage. Recall that we used `ashift` during the creation of the pool to match
the ZFS block size with the drives' sector size. Records are created out of
these blocks. Having larger records reduces the amount of metadata that is
required, because various parts of ZFS such as caching and checksums work on
this level.
The **recordsize** here is set to the currently largest possible value [to increase performance](https://jrs-s.net/2019/04/03/on-zfs-recordsize/) and save storage. Recall that we used `ashift` during the creation of the pool to match the ZFS block size with the drives' sector size. Records are created out of these blocks. Having larger records reduces the amount of metadata that is required, because various parts of ZFS such as caching and checksums work on this level.
**Compression** is unnecessary for movie files because they are usually in a
compressed format anyway. ZFS is good about recognizing this, and so if you
happen to leave compression on as the default for the pool, it won't make much
of a difference.
**Compression** is unnecessary for movie files because they are usually in a compressed format anyway. ZFS is good about recognizing this, and so if you happen to leave compression on as the default for the pool, it won't make much of a difference.
[By default](https://zfsonlinux.org/manpages/0.7.13/man8/zfs.8.html#lbAI), ZFS
stores pools directly under the root directory. Also, the filesystems don't have
to be listed in `/etc/fstab` to be mounted. This means that our filesystem will
appear as `/tank/movies` if you don't change anything. We need to change the
line in `all.yml` accordingly:
[By default](https://zfsonlinux.org/manpages/0.7.13/man8/zfs.8.html#lbAI), ZFS stores pools directly under the root directory. Also, the filesystems don't have to be listed in `/etc/fstab` to be mounted. This means that our filesystem will appear as `/tank/movies` if you don't change anything. We need to change the line in `all.yml` accordingly:
```
```raw
movies_root: "/tank/movies"
```
You can also set a traditional mount point if you wish with the `mountpoint`
property. Setting this to `none` prevents the file system from being
automatically mounted at all.
You can also set a traditional mount point if you wish with the `mountpoint` property. Setting this to `none` prevents the file system from being automatically mounted at all.
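For example (the `/srv/movies` path is purely illustrative):

```bash
# Mount the filesystem at a traditional location instead of /tank/movies
sudo zfs set mountpoint=/srv/movies tank/movies
# Or prevent it from being mounted automatically at all
sudo zfs set mountpoint=none tank/movies
```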
The filesystems for TV shows, music files and podcasts - all large,
pre-compressed files - should probably take the exact same parameters.
The filesystems for TV shows, music files and podcasts - all large, pre-compressed files - should probably take the exact same parameters.
### Downloads
For downloads, we can leave most of the default parameters the way they are.
```
```bash
sudo zfs create tank/downloads
sudo zfs set exec=off tank/downloads
```
The recordsize stays at the 128 KB default. In `all.yml`, the new line is:
```
```raw
downloads_root: "/tank/downloads"
```
### Other data
Depending on the use case, you might want to create and tune more filesystems.
For example, [Bit
Torrent](http://open-zfs.org/wiki/Performance_tuning#Bit_Torrent),
[MySQL](http://open-zfs.org/wiki/Performance_tuning#MySQL) and [Virtual
Machines](http://open-zfs.org/wiki/Performance_tuning#Virtual_machines) all have
known best configurations.
Depending on the use case, you might want to create and tune more filesystems. For example, [BitTorrent](http://open-zfs.org/wiki/Performance_tuning#Bit_Torrent), [MySQL](http://open-zfs.org/wiki/Performance_tuning#MySQL) and [Virtual Machines](http://open-zfs.org/wiki/Performance_tuning#Virtual_machines) all have known best configurations.
## Setting up scrubs
On Ubuntu, scrubs are configured out of the box to run on the second Sunday of
every month. See `/etc/cron.d/zfsutils-linux` to change this.
On Ubuntu, scrubs are configured out of the box to run on the second Sunday of every month. See `/etc/cron.d/zfsutils-linux` to change this.
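For reference, the stock entry looks roughly like this (paths and timing vary between Ubuntu releases, so treat it as an approximation and check the file on your own system):

```raw
# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
```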
## Email notifications
To have the [ZFS
demon](http://manpages.ubuntu.com/manpages/bionic/man8/zed.8.html) `zed` send
you emails when there is trouble, you first have to [install an email
agent](https://www.reddit.com/r/zfs/comments/90prt4/zed_config_on_ubuntu_1804/)
such as postfix. In the file `/etc/zfs/zed.d/zed.rc`, change the three entries:
To have the [ZFS demon](http://manpages.ubuntu.com/manpages/bionic/man8/zed.8.html) `zed` send you emails when there is trouble, you first have to [install an email agent](https://www.reddit.com/r/zfs/comments/90prt4/zed_config_on_ubuntu_1804/) such as postfix. In the file `/etc/zfs/zed.d/zed.rc`, change the three entries:
```
```bash
ZED_EMAIL_ADDR=<YOUR_EMAIL_ADDRESS_HERE>
ZED_NOTIFY_INTERVAL_SECS=3600
ZED_NOTIFY_VERBOSE=1
```
If `zed` is not enabled, you might have to run `systemctl enable zed`. You can
test the setup by manually starting a scrub with `sudo zpool scrub tank`.
If `zed` is not enabled, you might have to run `systemctl enable zed`. You can test the setup by manually starting a scrub with `sudo zpool scrub tank`.
## Snapshots
Snapshots create a "frozen" version of a filesystem, providing a safe copy of
the contents. Correctly configured, they provide good protection against
accidental deletion and certain types of attacks such as ransomware. On
copy-on-write (COW) filesystems such as ZFS, they are cheap and fast to create.
It is very rare that you _won't_ want snapshots.
> Snapshots do not replace the need for backups. Nothing replaces the need for
> backups except more backups.
Snapshots create a "frozen" version of a filesystem, providing a safe copy of the contents. Correctly configured, they provide good protection against accidental deletion and certain types of attacks such as ransomware. On copy-on-write (COW) filesystems such as ZFS, they are cheap and fast to create. It is very rare that you _won't_ want snapshots.
> Snapshots do not replace the need for backups. Nothing replaces the need for backups except more backups.
### Managing snapshots by hand
If you have data in a filesystem that never or very rarely changes, it might be
easiest to just take a snapshot by hand after every major change. Use the `zfs
snapshot` command with the name of the filesystem combined with an identifier
separated by the `@` sign. Traditionally, this somehow includes the date of the
snapshot, usually in some variant of the [ISO
8601](https://en.wikipedia.org/wiki/ISO_8601) format.
If you have data in a filesystem that never or very rarely changes, it might be easiest to just take a snapshot by hand after every major change. Use the `zfs snapshot` command with the name of the filesystem combined with an identifier separated by the `@` sign. Traditionally, this somehow includes the date of the snapshot, usually in some variant of the [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format.
```
```bash
zfs snapshot tank/movies@2019-04-24
```
To see the list of snapshots in the system, run
```
```bash
zfs list -t snapshot
```
To revert ("roll back") to the previous snapshot, use the `zfs rollback`
command.
To revert ("roll back") to the previous snapshot, use the `zfs rollback` command.
```
```bash
zfs rollback tank/movies@2019-04-24
```
By default, you can only roll back to the most recent snapshot. Anything before
then requires trickery outside the scope of this document. Finally, to get rid
of a snapshot, use the `zfs destroy` command.
By default, you can only roll back to the most recent snapshot. Anything before then requires trickery outside the scope of this document. Finally, to get rid of a snapshot, use the `zfs destroy` command.
```
```bash
zfs destroy tank/movies@2019-04-24
```
> Be **very** careful with `destroy`. If you leave out the snapshot identifier
> and only list the filesystem - in our example, `tank/movies` - the filesystem
> itself will immediately be destroyed. There will be no confirmation prompt,
> because ZFS doesn't believe in that sort of thing.
> Be **very** careful with `destroy`. If you leave out the snapshot identifier and only list the filesystem - in our example, `tank/movies` - the filesystem itself will immediately be destroyed. There will be no confirmation prompt, because ZFS doesn't believe in that sort of thing.
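If in doubt, `zfs destroy` accepts `-n` (dry run) and `-v` (verbose), which together show what would be removed without actually removing anything:

```bash
# Prints what would be destroyed and how much space would be reclaimed
zfs destroy -nv tank/movies@2019-04-24
```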
### Managing snapshots with Sanoid
Usually, you'll want the process of creating new and deleting old snapshots to
be automatic, especially on filesystems that change frequently. One tool for
this is [sanoid](https://github.com/jimsalterjrs/sanoid/). There are various
instructions for setting it up, the following is based on notes from
[SvennD](https://www.svennd.be/zfs-snapshots-of-proxmox-using-sanoid/). For this
example, we'll assume we have a single dataset `tank/movies` that holds, ah,
movies.
Usually, you'll want the process of creating new and deleting old snapshots to be automatic, especially on filesystems that change frequently. One tool for this is [sanoid](https://github.com/jimsalterjrs/sanoid/). There are various instructions for setting it up, the following is based on notes from [SvennD](https://www.svennd.be/zfs-snapshots-of-proxmox-using-sanoid/). For this example, we'll assume we have a single dataset `tank/movies` that holds, ah, movies.
First, we install sanoid to the `/opt` directory. This assumes that Perl itself
is already installed.
First, we install sanoid to the `/opt` directory. This assumes that Perl itself is already installed.
```
```bash
sudo apt install libconfig-inifiles-perl libcapture-tiny-perl
cd /opt
sudo git clone https://github.com/jimsalterjrs/sanoid
@ -300,34 +184,27 @@ is already installed.
It is probably easiest to link sanoid to `/usr/sbin`:
```
```bash
sudo ln /opt/sanoid/sanoid /usr/sbin/
```
Then we need to setup the configuration files.
Then we need to set up the configuration files:
```
```bash
sudo mkdir /etc/sanoid
sudo cp /opt/sanoid/sanoid.conf /etc/sanoid/sanoid.conf
sudo cp /opt/sanoid/sanoid.defaults.conf /etc/sanoid/sanoid.defaults.conf
```
We don't change the defaults file, but it has to be copied to the folder anyway.
Next, we edit the `/etc/sanoid/sanoid.conf` configuration file in two steps: We
design the "templates" and then tell sanoid which filesystems to use it on.
We don't change the defaults file, but it has to be copied to the folder anyway. Next, we edit the `/etc/sanoid/sanoid.conf` configuration file in two steps: We design the "templates" and then tell sanoid which filesystems to use it on.
The configuration file included with sanoid contains a "production" template for
filesystems that change frequently. For media files, we assume that there is not
going to be that much change from day-to-day, and especially there will be very
few deletions. We use snapshots because this provides protection against
cryptolocker attacks and against accidental deletions.
The configuration file included with sanoid contains a "production" template for filesystems that change frequently. For media files, we assume that there is not going to be that much change from day-to-day, and especially there will be very few deletions. We use snapshots because this provides protection against cryptolocker attacks and against accidental deletions.
> Again, snapshots, even lots of snapshots, do not replace backups.
For our example, we configure for two hourly snapshots (against "oh crap"
deletions), 31 daily, one monthly and one yearly snapshot.
For our example, we configure for two hourly snapshots (against "oh crap" deletions), 31 daily, one monthly and one yearly snapshot.
```
```raw
[template_media]
frequently = 0
hourly = 2
@ -338,26 +215,24 @@ deletions), 31 daily, one monthly and one yearly snapshot.
autoprune = yes
```
That might seem like a bunch of daily snapshots, but remember, if nothing has
changed, a ZFS snapshot is basically free.
That might seem like a bunch of daily snapshots, but remember, if nothing has changed, a ZFS snapshot is basically free.
Once we have an entry for the template, we assign it to the filesystem.
```
```raw
[tank/movies]
use_template = media
```
Finally, we edit `/etc/crontab` to run sanoid every five minutes:
```
```raw
*/5 * * * * root /usr/sbin/sanoid --cron
```
After five minutes, you should see the first snapshots (use `zfs list -t
snapshot` again). The list will look something like this mock example:
After five minutes, you should see the first snapshots (use `zfs list -t snapshot` again). The list will look something like this mock example:
```
```raw
NAME USED AVAIL REFER MOUNTPOINT
tank/movies@autosnap_2019-05-17_13:55:01_yearly 0B - 1,53G -
tank/movies@autosnap_2019-05-17_13:55:01_monthly 0B - 1,53G -
@ -366,6 +241,4 @@ tank/movies@autosnap_2019-05-17_13:55:01_daily 0B - 1,53G -
Note that the snapshots use no storage, because we haven't changed anything.
This is a very simple use of sanoid. Other functions include running scripts
before and after snapshots, and setups to help with backups. See the included
configuration files for examples.
This is a very simple use of sanoid. Other functions include running scripts before and after snapshots, and setups to help with backups. See the included configuration files for examples.

View file

@ -1,232 +1,110 @@
# ZFS Overview
This is a general overview of the ZFS file system for people who are new to it.
If you have some experience and are actually looking for specific information
about how to configure ZFS for Ansible-NAS, check out the [ZFS example
configuration](zfs_configuration.md).
This is a general overview of the ZFS file system for people who are new to it. If you have some experience and are actually looking for specific information about how to configure ZFS for Ansible-NAS, check out the [ZFS example configuration](zfs_configuration.md).
## What is ZFS and why would I want it?
[ZFS](https://en.wikipedia.org/wiki/ZFS) is an advanced filesystem and volume
manager originally created by Sun Microsystems starting in 2001. First released
in 2005 for OpenSolaris, Oracle later bought Sun and switched to developing ZFS
as closed source software. An open source fork took the name
[OpenZFS](http://www.open-zfs.org/wiki/Main_Page), but is still called "ZFS" for
short. It runs on Linux, FreeBSD, illumos and other platforms.
[ZFS](https://en.wikipedia.org/wiki/ZFS) is an advanced filesystem and volume manager originally created by Sun Microsystems starting in 2001. First released in 2005 for OpenSolaris, Oracle later bought Sun and switched to developing ZFS as closed source software. An open source fork took the name [OpenZFS](http://www.open-zfs.org/wiki/Main_Page), but is still called "ZFS" for short. It runs on Linux, FreeBSD, illumos and other platforms.
ZFS aims to be the ["last word in
filesystems"](https://blogs.oracle.com/bonwick/zfs:-the-last-word-in-filesystems),
a technology so future-proof that Michael W. Lucas and Allan Jude famously
stated that the _Enterprise's_ computer on _Star Trek_ probably runs it. The
design was based on [four
principles](https://www.youtube.com/watch?v=MsY-BafQgj4):
filesystems"](https://blogs.oracle.com/bonwick/zfs:-the-last-word-in-filesystems), a technology so future-proof that Michael W. Lucas and Allan Jude famously stated that the _Enterprise's_ computer on _Star Trek_ probably runs it. The design was based on [four principles](https://www.youtube.com/watch?v=MsY-BafQgj4):
1. "Pooled" storage to eliminate the notion of volumes. You can add more storage
the same way you just add a RAM stick to memory.
1. "Pooled" storage to eliminate the notion of volumes. You can add more storage the same way you just add a RAM stick to memory.
1. Make sure data is always consistent on the disks. There is no `fsck` command
for ZFS and none is needed.
1. Make sure data is always consistent on the disks. There is no `fsck` command for ZFS and none is needed.
1. Detect and correct data corruption ("bitrot"). ZFS is one of the few storage
systems that checksums everything, including the data itself, and is
"self-healing".
1. Detect and correct data corruption ("bitrot"). ZFS is one of the few storage systems that checksums everything, including the data itself, and is "self-healing".
1. Make it easy to use. Try to "end the suffering" for the admins involved in
managing storage.
1. Make it easy to use. Try to "end the suffering" for the admins involved in managing storage.
ZFS includes a host of other features such as snapshots, transparent compression and encryption. During the early years of ZFS, this all came with hardware requirements only enterprise users could afford. By now, however, computers have become so powerful that ZFS can run (with some effort) on a [Raspberry Pi](https://gist.github.com/mohakshah/b203d33a235307c40065bdc43e287547).
FreeBSD and FreeNAS make extensive use of ZFS. What is holding ZFS back on Linux are [licensing issues](https://en.wikipedia.org/wiki/OpenZFS#History) beyond the scope of this document.
Ansible-NAS doesn't actually specify a filesystem - you can use EXT4, XFS or Btrfs as well. However, ZFS not only provides the benefits listed above, but also lets you use your hard drives with different operating systems. Some people now using Ansible-NAS came from FreeNAS, and were able to `export` their ZFS storage drives there and `import` them to Ubuntu. On the other hand, if you ever decide to switch back to FreeNAS or maybe want to use FreeBSD instead of Linux, you should be able to use the same ZFS pools.
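
The move itself is just an export on one system and an import on the other; a minimal sketch, using a pool named `tank` like the example further down:

```bash
# On the old system: cleanly detach the pool from the OS
sudo zpool export tank

# On the new system: see which pools are visible, then import by name
sudo zpool import
sudo zpool import tank
```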
## An overview and some actual commands
Storage in ZFS is organized in **pools**. Inside these pools, you create **filesystems** (also known as "datasets") which are like partitions on steroids. For instance, you can keep each user's `/home` directory in a separate filesystem. ZFS systems tend to use lots and lots of specialized filesystems with tailored parameters such as record size and compression. All filesystems share the available storage in their pool.
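
As a rough sketch of what that looks like (the pool `tank` is created in the next example, and the dataset names here are made up):

```bash
# A dataset for large media files: big records and cheap compression
sudo zfs create -o recordsize=1M -o compression=lz4 tank/movies

# A dataset for documents: defaults are fine, just turn compression on
sudo zfs create -o compression=lz4 tank/documents

# Both draw from the same pool of free space
zfs list -o name,used,avail,mountpoint
```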
Pools do not directly consist of hard disks or SSDs. Instead, drives are organized as **virtual devices** (VDEVs). This is where the physical redundancy in ZFS is located. Drives in a VDEV can be "mirrored" or combined as "RaidZ", roughly the equivalent of RAID5. These VDEVs are then combined into a pool by the administrator. The command might look something like this:
```bash
sudo zpool create tank mirror /dev/sda /dev/sdb
```
This combines `/dev/sda` and `/dev/sdb` into a mirrored VDEV, and then defines a new pool named `tank` consisting of this single VDEV. (Actually, you'd want to use a different ID for the drives, but you get the idea.) You can now create a filesystem in this pool for, say, all of your _Mass Effect_ fan fiction:
```bash
sudo zfs create tank/mefanfic
```
You can then enable automatic compression on this filesystem with `sudo zfs set compression=lz4 tank/mefanfic`. To take a **snapshot**, use
```bash
sudo zfs snapshot tank/mefanfic@21540411
```
Now, if evil people were somehow able to encrypt your precious fan fiction files with ransomware, you can simply laugh maniacally and revert to the old version:
```bash
sudo zfs rollback tank/mefanfic@21540411
```
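
Before you actually roll back, you might want to see what would be thrown away; `zfs diff` compares a snapshot against the live filesystem:

```bash
# Show files added (+), modified (M), renamed (R) or removed (-) since the snapshot
sudo zfs diff tank/mefanfic@21540411
```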
Of course, you would lose any texts you might have added to the filesystem between that snapshot and now. Usually, you'll have some form of **automatic snapshot administration** configured.
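
Tools such as sanoid (used elsewhere in these docs) handle the scheduling and pruning for you; the underlying idea is nothing more than a timestamped snapshot taken from cron or a systemd timer. A hand-rolled sketch:

```bash
# Take a recursive, timestamped snapshot; run this nightly from cron or a timer
sudo zfs snapshot -r tank/mefanfic@auto-$(date +%Y-%m-%d)
```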
To detect bitrot and other data defects, ZFS periodically runs **scrubs**: The system compares the available copies of each data record with their checksums. If there is a mismatch, the data is repaired.
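
Scrubs are started per pool, typically once a month from cron or a systemd timer. Checking up on the example pool `tank` looks like this:

```bash
# Start a scrub of the pool
sudo zpool scrub tank

# Watch progress and see whether any errors were found and repaired
zpool status tank
```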
## Known issues
> At time of writing (April 2019), ZFS on Linux does not offer native encryption, TRIM support or device removal, which are all scheduled to be included in the upcoming [0.8 release](https://www.phoronix.com/scan.php?page=news_item&px=ZFS-On-Linux-0.8-RC1-Released) any day now.
ZFS' original design for enterprise systems and redundancy requirements can make some things difficult. You can't just add individual drives to a pool and tell the system to reconfigure automatically. Instead, you have to either add a new VDEV, or replace each of the existing drives with one of higher capacity. In an enterprise environment, of course, you would just _buy_ a bunch of new drives and move the data from the old pool to the new pool. Shrinking a pool is even harder - put simply, ZFS is not built for this, though it is [being worked on](https://www.delphix.com/blog/delphix-engineering/openzfs-device-removal).
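
To make that concrete, the two supported ways of growing the example pool `tank` look roughly like this; the device names are placeholders:

```bash
# Option 1: add a whole new mirrored VDEV to the pool
sudo zpool add tank mirror /dev/sdc /dev/sdd

# Option 2: swap each existing drive for a bigger one, waiting for the
# resilver to finish in between; with autoexpand on, the extra space
# becomes available once the last drive has been replaced
sudo zpool set autoexpand=on tank
sudo zpool replace tank /dev/sda /dev/sde
```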
If you absolutely must be able to add or remove single drives, ZFS might not be the filesystem for you.
## Myths and misunderstandings
Information on the internet about ZFS can be outdated, conflicting or flat-out wrong. Partly this is because it has been in use for almost 15 years now and things change, and partly it is the result of being used on different operating systems which have minor differences under the hood. Also, Google searches tend to first return the Oracle documentation for their closed source ZFS variant, which is increasingly diverging from the open source OpenZFS standard.
To clear up some of the most common misunderstandings:
### No, ZFS does not need at least 8 GB of RAM
This myth is especially common [in FreeNAS circles](https://www.ixsystems.com/community/threads/does-freenas-really-need-8gb-of-ram.38685/). Curiously, FreeBSD, the basis of FreeNAS, will run with [1 GB](https://wiki.freebsd.org/ZFSTuningGuide). The [ZFS on Linux FAQ](https://github.com/zfsonlinux/zfs/wiki/FAQ#hardware-requirements), which is more relevant for Ansible-NAS, states under "suggested hardware":
> 8GB+ of memory for the best performance. It's perfectly possible to run with
> 2GB or less (and people do), but you'll need more if using deduplication.
(Deduplication is only useful in [special cases](http://open-zfs.org/wiki/Performance_tuning#Deduplication). If you are reading this, you probably don't need it.)
Experience shows that 8 GB of RAM is in fact a sensible minimum for continuous use. But it's not a requirement. What everybody agrees on is that ZFS _loves_ RAM and works better the more it has, so you should have as much of it as you possibly can. When in doubt, add more RAM, and even more, and then some, until your motherboard's capacity is reached.
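
To see how much of your RAM the ARC (ZFS' read cache) is actually using, or to cap it if it ever gets greedy, something along these lines works on ZFS on Linux; the 8 GiB cap is just an example value:

```bash
# Current ARC size in bytes
awk '/^size / {print $3}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 8 GiB (creates/overwrites /etc/modprobe.d/zfs.conf; reboot to apply)
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
```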
### No, ECC RAM is not required for ZFS
This is another case where a recommendation has been taken as a requirement. To quote the [ZFS on Linux FAQ](https://github.com/zfsonlinux/zfs/wiki/FAQ#do-i-have-to-use-ecc-memory-for-zfs) again:
> Using ECC memory for OpenZFS is strongly recommended for enterprise environments where the strongest data integrity guarantees are required. Without ECC memory rare random bit flips caused by cosmic rays or by faulty memory can go undetected. If this were to occur OpenZFS (or any other filesystem) will write the damaged data to disk and be unable to automatically detect the corruption.
ECC corrects [single bit errors](https://en.wikipedia.org/wiki/ECC_memory) in memory. It is _always_ better to have it on _any_ computer if you can afford it, and ZFS is no exception. However, there is absolutely no requirement for ZFS to have ECC RAM. If you just don't care about the danger of random bit flips because, hey, you can always just download [Night of the Living Dead](https://archive.org/details/night_of_the_living_dead) all over again, you're perfectly free to use normal RAM. If you do use ECC RAM, make sure your processor and motherboard support it.
### No, the SLOG is not really a write cache
You'll read the suggestion to add a fast SSD or NVMe as a "SLOG drive" (mistakenly also called "ZIL") for write caching. This isn't what happens, because ZFS already includes [a write cache](https://linuxhint.com/configuring-zfs-cache/) in RAM. Since RAM is always faster, adding a disk as a write cache doesn't even make sense.
What the **ZFS Intent Log (ZIL)** does, with or without a dedicated drive, is handle synchronous writes. These occur when the system refuses to signal a successful write until the data is actually stored on a physical disk somewhere. This keeps the data safe, but is slower.
By default, the ZIL initially shoves a copy of the data onto a normal VDEV somewhere and then gives the thumbs up. The actual write to the pool is performed later from the write cache in RAM, _not_ the temporary copy. The data there is only ever read if the power fails before the last step. The ZIL is all about protecting data, not making transfers faster.
A **Separate Intent Log (SLOG)** is an additional fast drive for these temporary synchronous writes. It simply allows the ZIL to give the thumbs up quicker. This means that a SLOG is never read unless the power has failed before the final write to the pool.
Asynchronous writes just go through the normal write cache, by the way. If the power fails, the data is gone.
In summary, the ZIL prevents data loss during synchronous writes, or at least ensures that the data in storage is consistent. You always have a ZIL. A SLOG will make the ZIL faster. You'll probably need to [do some research](https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/) and some testing to figure out if your system would benefit from a SLOG. NFS for instance uses synchronous writes, SMB usually doesn't. When in doubt, add more RAM instead.
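
If your testing says a SLOG is worth it, adding one is a single command, and the `sync` property lets you check how a dataset handles synchronous writes; the NVMe device name below is a placeholder:

```bash
# Attach a fast device as a SLOG (mirror two of them if the data is precious)
sudo zpool add tank log /dev/nvme0n1

# See whether a dataset does synchronous writes (standard, always or disabled)
zfs get sync tank/mefanfic
```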
## Further reading and viewing
- In 2012, Aaron Toponce wrote a now slightly dated, but still very good [introduction](https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/) to ZFS on Linux. If you only read one part, make it the [explanation of the ARC](https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/), ZFS' read cache.
- One of the best books on ZFS around is _FreeBSD Mastery: ZFS_ by Michael W. Lucas and Allan Jude. Though it is written for FreeBSD, the general guidelines apply for all variants. There is a second volume for advanced use.
- Jeff Bonwick, one of the original creators of ZFS, tells the story of how ZFS came to be [on YouTube](https://www.youtube.com/watch?v=dcV2PaMTAJ4).

View file

@@ -16,16 +16,6 @@
    create_home: no
    group: gitlab
- name: Create Gitlab user account
  user:
    name: gitlab
    uid: 998
    state: present
    system: yes
    update_password: on_create
    create_home: no
    group: gitlab
- name: Create Gitlab Directories
  file:
    path: "{{ item }}"