Merge pull request #79 from scotws/zfs

Add general and specific ZFS documentation
David Stephens 2019-04-25 23:51:04 +01:00 committed by GitHub
commit d4d9cd9f6d
5 changed files with 493 additions and 42 deletions


@ -27,4 +27,5 @@ Head to [installation](installation.md) if you're ready to roll, or to
you're done, check out the [post-installation](post_installation.md) steps.
If this is all very confusing, there is also an [overview](overview.md) of the
project and what is required for complete beginners. If you're only confused
about ZFS, we'll help you [get started](zfs_overview.md) as well.


@ -8,13 +8,17 @@ You can run Ansible-NAS from the computer you plan to use for your NAS, or from
1. Copy `group_vars/all.yml.dist` to `group_vars/all.yml`.
1. Open up `group_vars/all.yml` and follow the instructions there for
   configuring your Ansible NAS.
1. If you plan to use Transmission with OpenVPN, also copy
   `group_vars/vpn_credentials.yml.dist` to `group_vars/vpn_credentials.yml` and
   fill in your settings.
1. Copy `inventory.dist` to `inventory` and update it.
1. Install the dependent roles: `ansible-galaxy install -r requirements.yml`
   (you might need sudo to install Ansible roles)
1. Run the playbook - something like `ansible-playbook -i inventory nas.yml -b
   -K` should do you nicely.


@ -47,41 +47,30 @@ technologies involved and be able to set up the basic stuff yourself.
As a to-do list, before you can even install Ansible-NAS, you'll have to:
1. Choose, buy, configure, and test your own **hardware**. If you're paranoid (a
good mindset when dealing with servers), you'll probably want an
uninterruptible power supply (UPS) of some sort as well as SMART monitoring
for your hard drives. See the [FreeNAS hardware
requirements](https://freenas.org/hardware-requirements/) as a guideline, but
remember you'll also be running Docker. If you use ZFS (see below), take into
account it [loves RAM](zfs/zfs_overview.md) and prefers to have the hard
drives all to itself.
1. Install **Ubuntu Server**, currently 18.04 LTS, and keep it updated. You'll
probably want to perform other basic setup tasks like hardening SSH and
including email notifications. There are [various
guides](https://devanswers.co/ubuntu-18-04-initial-server-setup/) for this,
but if you're just getting started, you'll probably need a book.
You will probably want to install a specialized filesystem for bulk storage such
as [ZFS](http://www.open-zfs.org/wiki/Main_Page) or
[Btrfs](https://btrfs.wiki.kernel.org/index.php/Main_Page). Both offer features
such as snapshots, checksumming and scrubbing to protect your data against
bitrot, ransomware and other nasties. Ansible-NAS historically prefers **ZFS**
because this lets you swap storage pools with
[FreeNAS](https://freenas.org/zfs/). A [brief introduction](zfs/zfs_overview.md)
to ZFS is included in the Ansible-NAS documentation, as well as [an
example](zfs_configuration.md) of a very simple ZFS setup.
After that, you can continue with the actual [installation](installation.md) of
Ansible-NAS.
@ -91,6 +80,5 @@ Ansible-NAS.
The easiest way to take Ansible-NAS for a spin is in a virtual machine, for
instance in [VirtualBox](https://www.virtualbox.org/). You'll want to create
three virtual hard drives for testing: One for the actual NAS, and the two others
to create a mirrored ZFS pool. This will let you experiment with installing,
configuring, and running a complete system.


@ -0,0 +1,228 @@
This text deals with specific ZFS configuration questions for Ansible-NAS. If
you are new to ZFS and are looking for the big picture, please read the [ZFS
overview](zfs_overview.md) introduction first.
## Just so there is no misunderstanding
Unlike other NAS variants, Ansible-NAS does not install, configure or manage the
disks or file systems for you. It doesn't care which file system you use - ZFS,
Btrfs, XFS or EXT4, take your pick. Nor does it provide a mechanism for
snapshots or disk monitoring. As Tony Stark said to Loki in _Avengers_: It's all
on you.
However, Ansible-NAS has traditionally been used with the powerful ZFS
filesystem. Since out of the box support for [ZFS on
Linux](https://zfsonlinux.org/) with Ubuntu is comparatively new, this text
shows how to set up a simple storage configuration. To paraphrase Nick Fury from
_Winter Soldier_: We do share. We're nice like that.
> Using ZFS for Docker containers is currently not covered by this document. See
> [the official Docker ZFS
> documentation](https://docs.docker.com/storage/storagedriver/zfs-driver/)
> instead.
## The obligatory warning
We take no responsibility for any bad thing that might happen if you follow this
guide. We strongly suggest you test these procedures in a virtual machine first.
Always, always, always back up your data.
## The basic setup
For this example, we're assuming two identical spinning rust hard drives for
Ansible-NAS storage. These two drives will be **mirrored** to provide
redundancy. The actual Ubuntu system will be on a different drive and is not our
concern.
> [Root on ZFS](https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS)
> is still a hassle for Ubuntu. If that changes, this document might be updated
> accordingly. Until then, don't ask us about it.
The Ubuntu kernel is already ready for ZFS. We only need the userland tools,
which we install with `sudo apt install zfsutils-linux`.
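As a quick sanity check, here is a minimal sketch (assuming Ubuntu 18.04 and
the `zfsutils-linux` package name) that installs the tools and confirms the
kernel module is available:

```
# Install the ZFS userland tools (the kernel module ships with Ubuntu).
sudo apt update
sudo apt install zfsutils-linux

# Confirm the module and the tools are present.
modinfo zfs | head -n 3
zpool status
```

With no pools created yet, `zpool status` should simply report "no pools
available".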
### Creating a pool
We assume you don't mind totally destroying whatever data might be on your two
storage drives, have used a tool such as `gparted` to remove any existing
partitions, and have installed a new GPT partition table on each drive. To
create our ZFS pool, we will use a command in this form:
```
sudo zpool create -o ashift=<ASHIFT> <NAME> mirror <DRIVE1> <DRIVE2>
```
The options from simple to complex are:
**NAME**: ZFS pools traditionally take their names from characters in [The
Matrix](https://www.imdb.com/title/tt0133093/fullcredits). The two most common
are `tank` and `dozer`. Whatever you use, it should be short - think `ash`, not
`xenomorph`.
**DRIVES**: The Linux command `lsblk` will give you a quick overview of the
hard drives in the system. However, we don't pass the drive specification in the
format `/dev/sde` because this is not persistent. Instead,
[always use](https://github.com/zfsonlinux/zfs/wiki/FAQ#selecting-dev-names-when-creating-a-pool)
the output of `ls /dev/disk/by-id/` to find the drives' IDs.
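As a quick check, this maps the persistent IDs to the `/dev/sdX` names that
`lsblk` reports, filtering out the per-partition links:

```
# Map persistent IDs to the current /dev/sdX device names.
ls -l /dev/disk/by-id/ | grep -v -- '-part'
```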
**ASHIFT**: This is required to pass the [sector
size](https://github.com/zfsonlinux/zfs/wiki/FAQ#advanced-format-disks) of the
drive to ZFS for optimal performance. You might have to do this by hand because
some drives lie: Whereas modern drives have 4k sector sizes (or 8k for many
SSDs), they will report 512 bytes because Windows XP [can't handle 4k
sectors](https://support.microsoft.com/en-us/help/2510009/microsoft-support-policy-for-4k-sector-hard-drives-in-windows).
ZFS tries to [catch the
liars](https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c) and
use the correct value. However, this sometimes fails, and you have to add it by
hand.
The `ashift` value is a power of two, so we have **9** for 512 bytes, **12** for
4k, and **13** for 8k. You can create a pool without this parameter and then use
`zdb -C | grep ashift` to see what ZFS generated automatically. If it isn't what
you expected, destroy the pool and recreate it with `ashift` set by hand.
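Before creating the pool, you can also ask the drives what they report
themselves; just remember that the logical value may be the lie discussed
above. A small sketch:

```
# Show the logical and physical sector size of every block device.
lsblk -o NAME,SIZE,PHY-SEC,LOG-SEC

# After the pool exists, verify which ashift ZFS actually used.
sudo zdb -C | grep ashift
```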
In our pretend case, we use two 3 TB WD Red drives. Listing all drives by ID
gives us something like this, but with real serial numbers:
```
ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN01
ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN02
```
WD Reds have a 4k sector size. The actual command to create the pool would then be:
```
sudo zpool create -o ashift=12 tank mirror ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN01 ata-WDC_WD30EFRX-68EUZN0_WD-WCCFAKESN02
```
Our new pool is named `tank` and is mirrored. To see information about it, use
`zpool status tank` (no `sudo` necessary). If you screwed up (usually with
`ashift`), use `sudo zpool destroy tank` and start over _now_ before it's too
late.
### Pool default parameters
Setting pool-wide default parameters makes life easier when we create our
filesystems. To see them all, you can use the command `zfs get all tank`. Most
are perfectly sensible, some you'll [want to
change](https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/):
```
sudo zfs set atime=off tank
sudo zfs set compression=lz4 tank
sudo zpool set autoexpand=on tank
```
The `atime` parameter means that your system updates a time stamp every time a
file is accessed, which uses a lot of resources. Usually, you don't care.
Compression is a no-brainer on modern CPUs and should be on by default (we will
discuss exceptions for compressed media files later). The `autoexpand` property
is set at the pool level and lets the pool grow when you replace its drives
with larger ones.
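To double-check what you have just set (note that `autoexpand` lives at the
pool level), something like this works:

```
# Dataset-level properties.
zfs get atime,compression tank

# Pool-level property.
zpool get autoexpand tank
```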
## Creating filesystems
To actually store the data, we need filesystems (also known as "datasets"). For
our very simple default Ansible-NAS setup, we will create two: One filesystem
for movies (`movies_root` in `all.yml`) and one for downloads
(`downloads_root`).
### Movies (and other large, pre-compressed files)
We first create the basic filesystem:
```
sudo zfs create tank/movies
```
Movie files are usually rather large and already in a compressed format, and
for security reasons the files stored there shouldn't be executable. We change
the properties of the filesystem accordingly:
```
sudo zfs set recordsize=1M tank/movies
sudo zfs set compression=off tank/movies
sudo zfs set exec=off tank/movies
```
The **recordsize** here is set to the currently largest possible value [to
increase performance](https://jrs-s.net/2019/04/03/on-zfs-recordsize/) and save
storage. Recall that we used `ashift` during the creation of the pool to match
the ZFS block size with the drives' sector size. Records are created out of
these blocks. Having larger records reduces the amount of metadata that is
required, because various parts of ZFS such as caching and checksums work on
this level.
**Compression** is unnecessary for movie files because they are usually in a
compressed format anyway. ZFS is good about recognizing this, and so if you
happen to leave compression on as the default for the pool, it won't make much
of a difference.
[By default](https://zfsonlinux.org/manpages/0.7.13/man8/zfs.8.html#lbAI), ZFS
mounts pools directly under the root directory. Also, the filesystems don't have
to be listed in `/etc/fstab` to be mounted. This means that our filesystem will
appear as `/tank/movies` if you don't change anything. We need to change the
line in `all.yml` accordingly:
```
movies_root: "/tank/movies"
```
You can also set a traditional mount point if you wish with the `mountpoint`
property. Setting this to `none` prevents the file system from being
automatically mounted at all.
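For illustration only (Ansible-NAS does not expect these paths), changing or
disabling the mount point looks like this:

```
# Mount the movies filesystem somewhere other than /tank/movies.
sudo zfs set mountpoint=/srv/movies tank/movies

# Or prevent it from being mounted automatically at all.
sudo zfs set mountpoint=none tank/movies
```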
The filesystems for TV shows, music files and podcasts - all large,
pre-compressed files - should probably take the exact same parameters.
### Downloads
For downloads, we can leave most of the default parameters the way they are.
```
sudo zfs create tank/downloads
sudo zfs set exec=off tank/downloads
```
The recordsize stays at the 128 KB default. In `all.yml`, the new line is
```
downloads_root: "/tank/downloads"
```
### Other data
Depending on the use case, you might want to create and tune more filesystems.
For example, [Bit
Torrent](http://open-zfs.org/wiki/Performance_tuning#Bit_Torrent),
[MySQL](http://open-zfs.org/wiki/Performance_tuning#MySQL) and [Virtual
Machines](http://open-zfs.org/wiki/Performance_tuning#Virtual_machines) all have
known best configurations.
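As one hedged example along the lines of those guides, a dataset for
in-progress torrent downloads is often given a small recordsize (the dataset
name and value here are only illustrations, see the Bit Torrent link above):

```
# Hypothetical dataset for active torrent downloads.
sudo zfs create tank/torrents
sudo zfs set recordsize=16K tank/torrents
sudo zfs set exec=off tank/torrents
```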
## Setting up scrubs
On Ubuntu, scrubs are configured out of the box to run on the second Sunday of
every month. See `/etc/cron.d/zfsutils-linux` to change this.
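If you prefer a different schedule without editing the packaged cron job, one
possibility (the file name and pool name are assumptions) is a small cron entry
of your own:

```
# /etc/cron.d/zfs-scrub-tank (hypothetical file)
# Scrub "tank" at 02:00 on the first Sunday of every month.
0 2 1-7 * * root [ "$(date +\%u)" -eq 7 ] && /sbin/zpool scrub tank
```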
## Email notifications
To have the [ZFS
daemon](http://manpages.ubuntu.com/manpages/bionic/man8/zed.8.html) `zed` send
you emails when there is trouble, you first have to [install an email
agent](https://www.reddit.com/r/zfs/comments/90prt4/zed_config_on_ubuntu_1804/)
such as postfix. In the file `/etc/zfs/zed.d/zed.rc`, change the three entries:
```
ZED_EMAIL_ADDR=<YOUR_EMAIL_ADDRESS_HERE>
ZED_NOTIFY_INTERVAL_SECS=3600
ZED_NOTIFY_VERBOSE=1
```
If `zed` is not enabled, you might have to run `systemctl enable zed`. You can
test the setup by manually starting a scrub with `sudo zpool scrub tank`.
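Put together, a minimal sequence to enable the daemon and trigger an event
worth an email might look like this:

```
# Make sure zed starts now and on every boot.
sudo systemctl enable --now zed
systemctl status zed --no-pager

# A scrub generates a zed event you should get mail about.
sudo zpool scrub tank
```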
## Setting up automatic snapshots
See [sanoid](https://github.com/jimsalterjrs/sanoid/) as a tool for snapshot
management.
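As a rough sketch of what sanoid's configuration could look like for the pool
in this example (dataset names and retention numbers are assumptions, check the
sanoid documentation for details), `/etc/sanoid/sanoid.conf` might contain:

```
[tank/movies]
        use_template = production

[tank/downloads]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
```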

docs/zfs/zfs_overview.md Normal file

@ -0,0 +1,230 @@
This is a general overview of the ZFS file system for people who are new to it.
If you have some experience and are actually looking for specific information
about how to configure ZFS for Ansible-NAS, check out the [ZFS example
configuration](zfs_configuration.md).
## What is ZFS and why would I want it?
[ZFS](https://en.wikipedia.org/wiki/ZFS) is an advanced filesystem and volume
manager originally created by Sun Microsystems starting in 2001 and first
released in 2005 for OpenSolaris. Oracle later bought Sun and switched to
developing ZFS as closed source software. An open source fork took the name
[OpenZFS](http://www.open-zfs.org/wiki/Main_Page), but is still called "ZFS" for
short. It runs on Linux, FreeBSD, illumos and other platforms.
ZFS aims to be the ["last word in
filesystems"](https://blogs.oracle.com/bonwick/zfs:-the-last-word-in-filesystems),
a technology so future-proof that Michael W. Lucas and Allan Jude famously
stated that the _Enterprise's_ computer on _Star Trek_ probably runs it. The
design was based on [four
principles](https://www.youtube.com/watch?v=MsY-BafQgj4):
1. "Pooled" storage to eliminate the notion of volumes. You can add more storage
the same way you add a RAM stick to memory.
1. Make sure data is always consistent on the disks. There is no `fsck` command
for ZFS and none is needed.
1. Detect and correct data corruption ("bitrot"). ZFS is one of the few storage
systems that checksums everything, including the data itself, and is
"self-healing".
1. Make it easy to use. Try to "end the suffering" for the admins involved in
managing storage.
ZFS includes a host of other features such as snapshots, transparent compression
and encryption. During the early years of ZFS, this all came with hardware
requirements only enterprise users could afford. By now, however, computers have
become so powerful that ZFS can run (with some effort) on a [Raspberry
Pi](https://gist.github.com/mohakshah/b203d33a235307c40065bdc43e287547).
FreeBSD and FreeNAS make extensive use of ZFS. What is holding ZFS back on Linux
are [licensing issues](https://en.wikipedia.org/wiki/OpenZFS#History) beyond the
scope of this document.
Ansible-NAS doesn't actually specify a filesystem - you can use EXT4, XFS or
Btrfs as well. However, ZFS not only provides the benefits listed above, but
also lets you use your hard drives with different operating systems. Some people
now using Ansible-NAS came from FreeNAS, and were able to `export` their ZFS
storage drives there and `import` them to Ubuntu. On the other hand, if you ever
decide to switch back to FreeNAS or maybe want to use FreeBSD instead of Linux,
you should be able to use the same ZFS pools.
## An overview and some actual commands
Storage in ZFS is organized in **pools**. Inside these pools, you create
**filesystems** (also known as "datasets") which are like partitions on
steroids. For instance, you can keep each user's `/home` directory in a separate
filesystem. ZFS systems tend to use lots and lots of specialized filesystems
with tailored parameters such as record size and compression. All filesystems
share the available storage in their pool.
Pools do not directly consist of hard disks or SSDs. Instead, drives are
organized as **virtual devices** (VDEVs). This is where the physical redundancy
in ZFS is located. Drives in a VDEV can be "mirrored" or combined as "RaidZ",
roughly the equivalent of RAID5. These VDEVs are then combined into a pool by the
administrator. The command might look something like this:
```
sudo zpool create tank mirror /dev/sda /dev/sdb
```
This combines `/dev/sda` and `/dev/sdb` into a mirrored VDEV, and then defines a
new pool named `tank` consisting of this single VDEV. (Actually, you'd want to
use a different ID for the drives, but you get the idea.) You can now create a
filesystem in this pool for, say, all of your _Mass Effect_ fan fiction:
```
sudo zfs create tank/mefanfic
```
You can then enable automatic compression on this filesystem with `sudo zfs set
compression=lz4 tank/mefanfic`. To take a **snapshot**, use
```
sudo zfs snapshot tank/mefanfic@21540411
```
Now, if evil people were somehow able to encrypt your precious fan fiction files
with ransomware, you can simply laugh maniacally and revert to the old version:
```
sudo zfs rollback tank/mefanfic@21540411
```
Of course, you would lose any texts you might have added to the filesystem
between that snapshot and now. Usually, you'll have some form of **automatic
snapshot administration** configured.
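Listing and deleting snapshots is equally simple (names follow the example
above):

```
# List all snapshots of the example filesystem.
zfs list -t snapshot -r tank/mefanfic

# Delete a snapshot you no longer need.
sudo zfs destroy tank/mefanfic@21540411
```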
To detect bitrot and other data defects, ZFS periodically runs **scrubs**: The
system compares the available copies of each data record with their checksums.
If there is a mismatch, the data is repaired.
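A scrub can also be started by hand, and the status command shows progress and
any errors found:

```
sudo zpool scrub tank
zpool status tank
```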
## Known issues
> At time of writing (April 2019), ZFS on Linux does not offer native
> encryption, TRIM support or device removal, which are all scheduled to be
> included in the upcoming [0.8
> release](https://www.phoronix.com/scan.php?page=news_item&px=ZFS-On-Linux-0.8-RC1-Released)
> any day now.
ZFS' original design for enterprise systems and redundancy requirements can make
some things difficult. You can't just add individual drives to a pool and tell
the system to reconfigure automatically. Instead, you have to either add a new
VDEV, or replace each of the existing drives with one of higher capacity. In an
enterprise environment, of course, you would just _buy_ a bunch of new drives
and move the data from the old pool to the new pool. Shrinking a pool is even
harder - put simply, ZFS is not built for this, though it is [being worked
on](https://www.delphix.com/blog/delphix-engineering/openzfs-device-removal).
If you absolutely must be able to add or remove single drives, ZFS might not be
the filesystem for you.
## Myths and misunderstandings
Information on the internet about ZFS can be outdated, conflicting or flat-out
wrong. Partially this is because it has been in use for almost 15 years now and
things change, partially it is the result of being used on different operating
systems which have minor differences under the hood. Also, Google searches tend
to first return the Oracle documentation for their closed source ZFS variant,
which is increasingly diverging from the open source OpenZFS standard.
To clear up some of the most common misunderstandings:
### No, ZFS does not need at least 8 GB of RAM
This myth is especially common [in FreeNAS
circles](https://www.ixsystems.com/community/threads/does-freenas-really-need-8gb-of-ram.38685/).
Curiously, FreeBSD, the basis of FreeNAS, will run with [1
GB](https://wiki.freebsd.org/ZFSTuningGuide). The [ZFS on Linux
FAQ](https://github.com/zfsonlinux/zfs/wiki/FAQ#hardware-requirements), which is
more relevant for Ansible-NAS, states under "suggested hardware":
> 8GB+ of memory for the best performance. It's perfectly possible to run with
> 2GB or less (and people do), but you'll need more if using deduplication.
(Deduplication is only useful in [special
cases](http://open-zfs.org/wiki/Performance_tuning#Deduplication). If you are
reading this, you probably don't need it.)
Experience shows that 8 GB of RAM is in fact a sensible minimal amount for
continuous use. But it's not a requirement. What everybody agrees on is that ZFS
_loves_ RAM and works better the more it has, so you should have as much of it
as you possibly can. When in doubt, add more RAM, and even more, and then some,
until your motherboard's capacity is reached.
### No, ECC RAM is not required for ZFS
This is another case where a recommendation has been taken as a requirement. To
quote the [ZFS on Linux
FAQ](https://github.com/zfsonlinux/zfs/wiki/FAQ#do-i-have-to-use-ecc-memory-for-zfs)
again:
> Using ECC memory for OpenZFS is strongly recommended for enterprise
> environments where the strongest data integrity guarantees are required.
> Without ECC memory rare random bit flips caused by cosmic rays or by faulty
> memory can go undetected. If this were to occur OpenZFS (or any other
> filesystem) will write the damaged data to disk and be unable to automatically
> detect the corruption.
ECC corrects [single bit errors](https://en.wikipedia.org/wiki/ECC_memory) in
memory. It is _always_ better to have it on _any_ computer if you can afford it,
and ZFS is no exception. However, there is absolutely no requirement for ZFS to
have ECC RAM. If you just don't care about the danger of random bit flips
because, hey, you can always just download [Night of the Living
Dead](https://archive.org/details/night_of_the_living_dead) all over again,
you're perfectly free to use normal RAM. If you do use ECC RAM, make sure your
processor and motherboard support it.
### No, the SLOG is not really a write cache
You'll read the suggestion to add a fast SSD or NVMe as a "SLOG drive"
(mistakenly also called "ZIL") for write caching. This isn't what happens,
because ZFS already includes [a write
cache](https://linuxhint.com/configuring-zfs-cache/) in RAM. Since RAM is always
faster, adding a disk as a write cache doesn't even make sense.
What the **ZFS Intent Log (ZIL)** does, with or without a dedicated drive, is handle
synchronous writes. These occur when the system refuses to signal a successful
write until the data is actually stored on a physical disk somewhere. This keeps
the data safe, but is slower.
By default, the ZIL initially shoves a copy of the data onto a normal VDEV
somewhere and then gives the thumbs up. The actual write to the pool is
performed later from the write cache in RAM, _not_ the temporary copy. The data
there is only ever read if the power fails before the last step. The ZIL is all
about protecting data, not making transfers faster.
A **Separate Intent Log (SLOG)** is an additional fast drive for these temporary
synchronous writes. It simply allows the ZIL to give the thumbs up quicker. This
means that a SLOG is never read unless the power has failed before the final
write to the pool.
Asynchronous writes just go through the normal write cache, by the way. If the
power fails, the data is gone.
In summary, the ZIL prevents data loss during synchronous writes, or at least
ensures that the data in storage is consistent. You always have a ZIL. A SLOG
will make the ZIL faster. You'll probably need to [do some
research](https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/)
and some testing to figure out if your system would benefit from a SLOG. NFS for
instance uses synchronous writes, SMB usually doesn't. When in doubt, add more
RAM instead.
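For completeness, this is roughly how a SLOG would be attached (a mirrored pair
here, the device IDs are placeholders), but only do this after the research
mentioned above:

```
# Attach a mirrored SLOG to the pool.
sudo zpool add tank log mirror \
  ata-FAKE_SSD_SERIAL_01 ata-FAKE_SSD_SERIAL_02

# Show the sync setting for every dataset in the pool.
zfs get -r sync tank
```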
## Further reading and viewing
- In 2012, Aaron Toponce wrote a now slightly dated, but still very good
[introduction](https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/)
to ZFS on Linux. If you only read one part, make it the [explanation of the
ARC](https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/),
ZFS' read cache.
- One of the best books on ZFS around is _FreeBSD Mastery: ZFS_ by Michael W.
Lucas and Allan Jude. Though it is written for FreeBSD, the general guidelines
apply for all variants. There is a second volume for advanced use.
- Jeff Bonwick, one of the original creators of ZFS, tells the story of how ZFS
came to be [on YouTube](https://www.youtube.com/watch?v=dcV2PaMTAJ4).