From 485d17854264097739df45450f749da9635d345e Mon Sep 17 00:00:00 2001
From: Gaelan Steele
Date: Thu, 27 Jul 2023 20:35:45 -0700
Subject: [PATCH] Add a bit more in the way of docs.

---
 README.md         |  5 +++++
 docs/INTERNALS.md | 21 +++++++++++++++++++++
 2 files changed, 26 insertions(+)
 create mode 100644 docs/INTERNALS.md

diff --git a/README.md b/README.md
index 6021080..98c966d 100644
--- a/README.md
+++ b/README.md
@@ -38,6 +38,7 @@ Minifedi's goal is to "just work" on every machine. If the instructions below fa
 4. `./minifedi start`
 5. Wait for stuff to build then start up; this should take 20-30 minutes.
 6. Your instances should be running and accessible at INSTANCENAME.lvh.me (e.g. https://mastodon.lvh.me).
+   - You'll have to click through an HTTPS warning; if you'd like, you can run `./minifedi install-cert` to add Minifedi's root to your system certificate store, avoiding this. (We don't do this by default, as our policy is not to touch your system configuration.)
 
 Each instance is created by default with five users:
 
@@ -78,3 +79,7 @@ rm -r data/
 This'll create a directory in `versions/mastodon`, which you can then refer to from your `config.nix`.
 
 Custom versions for Akkoma and GoToSocial aren't supported yet.
+
+### Use Minifedi to test some fedi software I'm hacking on locally?
+
+There isn't a good solution for this yet, but the plan is that you'll run your software locally however you usually do, with Minifedi's nginx running in front to serve it from a domain accessible to the other instances.
diff --git a/docs/INTERNALS.md b/docs/INTERNALS.md
new file mode 100644
index 0000000..d2a2844
--- /dev/null
+++ b/docs/INTERNALS.md
@@ -0,0 +1,21 @@
+# Minifedi internals
+
+Quick brain dump of how this all works:
+
+We run everything as the user, with no containers, VMs, etc. This means we can be lightweight, work natively on both macOS and Linux, and lose many points of failure.
+
+We use Nix to manage all dependencies and build copies of all the fedi software.
+We use [s6](https://skarnet.org/software/s6/index.html) to orchestrate all the processes.
+
+We run one central copy of Postgres and Redis, which all instances use.
+
+We also run one copy of Nginx, which serves as a proxy in front of every instance. This nginx instance listens on ports 80/443, as some (all?) fedi software won't federate with servers running anywhere else. We use the domain `lvh.me`, which resolves all subdomains to `127.0.0.1`, to give each instance a distinct hostname.
+
+To minimize moving parts (what happens if a port is taken?), whenever possible we prefer Unix sockets over TCP for everything but the user-facing (and other-instance-facing) Nginx server. In cases where this isn't supported (GoToSocial's HTTP interface), we choose a pseudorandom port derived deterministically from the instance name and hope it's open.
+
+We use [mkcert](https://github.com/FiloSottile/mkcert) to generate a root CA, which is then used to sign a wildcard certificate used by Nginx to serve each instance. Each instance is then configured to trust our root, or, if that's not possible (Akkoma), to disable certificate checking altogether. This cert can optionally be added to the system trust store so the user can access instances through browsers, or they can just click through the HTTPS warning if they'd rather not mess with their settings.
+
+## Custom version support
+
+We allow the user to use any commit of any git repo as the source for a Mastodon instance. How this works is a little tricky, as Nix (especially with language-specific package managers in the mix) requires several hashes and other bits of metadata to successfully build a project. To abstract this away from the user, we provide a `mk-mastodon` script that fetches all this metadata and creates a `versions/mastodon` subdirectory with everything needed to build that version.
+
+The plan is eventually to support something similar for GoToSocial and Akkoma.
+For GoToSocial, it should be straightforward enough; it just needs to get done. Akkoma is going to be a little trickier, as the Nixpkgs build script for Akkoma (which we use) hardcodes various details about its precise dependencies; the easy option would be to only support forks of the latest Akkoma version with no substantial dependency changes, but it'd be nice to do better.
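
The per-instance proxying that `docs/INTERNALS.md` describes (wildcard TLS cert from the mkcert root, Unix-socket upstreams) might look roughly like this hypothetical nginx fragment. All paths and the socket name here are invented for illustration; the config Minifedi actually generates will differ:

```nginx
server {
    listen 443 ssl;
    server_name mastodon.lvh.me;

    # One wildcard cert, signed by the mkcert-generated root CA,
    # covers every *.lvh.me instance. (Paths are placeholders.)
    ssl_certificate     certs/_wildcard.lvh.me.pem;
    ssl_certificate_key certs/_wildcard.lvh.me-key.pem;

    location / {
        # Prefer a Unix socket over TCP so no listening port can collide.
        proxy_pass http://unix:data/mastodon/web.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```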
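
The "pseudorandom port derived from the instance name" idea from `docs/INTERNALS.md` could be sketched as follows. The function name and the port range are assumptions for illustration, not Minifedi's actual implementation:

```python
import hashlib


def instance_port(name: str, base: int = 20000, span: int = 20000) -> int:
    """Derive a stable port from an instance name by hashing it.

    The result is deterministic: the same name always maps to the same
    port, so restarts reuse the same configuration. (Range and function
    name are illustrative assumptions, not Minifedi's real scheme.)
    """
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    # Take the first 4 bytes of the hash as an integer and map it into
    # the [base, base + span) range of high, unprivileged ports.
    value = int.from_bytes(digest[:4], "big")
    return base + (value % span)
```

There is no collision handling here; like the scheme the doc describes, it just hopes the chosen port is open.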