Setup Mastodon with caddy and automatic TLS

Woss · Kelp.Digital · Dec 11, 2022

Hi, you’ve made the right decision to move away from decaying tech like Twitter, congrats!

In this post, I will explain how to set up your Mastodon instance, how to enable ElasticSearch, and set the storage to be on self-hosted Minio or AWS S3. Extra goodies packed in this post include the automatic TLS certificate creation and renewal, gzipping the content, and reverse proxying two containers to provide the best experience with the Mastodon dashboard streaming WebSocket.

Let’s get started!

The default setup consists of these images:

  • elasticsearch
  • postgres:14
  • redis
  • sidekiq
  • web -> the Mastodon web server and main API
  • streaming -> the WebSocket server for the dashboard
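For orientation, here is a trimmed sketch of how those services typically hang together in a Mastodon docker-compose.yml; the image tags are illustrative, and the cloned file below is the source of truth:

services:
  db:
    image: postgres:14-alpine
  redis:
    image: redis:7-alpine
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
  web:
    image: tootsuite/mastodon
    command: bundle exec rails s -p 3000
  streaming:
    image: tootsuite/mastodon
    command: node ./streaming
  sidekiq:
    image: tootsuite/mastodon
    command: bundle exec sidekiq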

Start with cloning the repo:

git clone https://github.com/kelp-hq/mastodon-caddy-docker.git 
cd mastodon-caddy-docker

This is your starting point for the server setup.

Server setup

Before you start with anything, create the environment file the compose setup expects. An empty file is enough for now; you will update it later on, and docker will be happy.
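Assuming the compose file reads its variables from .env.production (the file referenced throughout this post), that is a single command:

# create the (for now empty) env file the compose setup expects
touch .env.production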

Next, run the Mastodon setup, which will ask you some questions about your instance; this will also create the payload you will need to put into the .env.production file.

# run this in the root and answer the questions 
docker-compose run --rm web bundle exec rake mastodon:setup

It will also offer to create the admin account, which is not strictly needed, since you can always use the CLI later to give your user the Owner role.
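For reference, a sketch of that CLI route using tootctl, run from inside the web container (alice is a placeholder username; the Owner role name follows the Mastodon 4.x convention):

# promote an existing account to the Owner role
docker-compose exec web bin/tootctl accounts modify alice --role Owner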

TLS and reverse_proxy setup

Now it’s time to open the docker-compose file and check the caddy labels. This setup works with caddy-docker-proxy (CDP), and it expects that you already have the caddy external network created and CDP running in another container. If you do not have it, please follow this tutorial on how to set it up, then come back here and continue.

Steps:

  1. change the two occurrences of caddy: kelp.community to your domain (an example of these labels is sketched below)
  2. make sure that the DNS @ record of your domain points to the server where you will run the instance. If it doesn't, Caddy will not be able to issue the certificate.
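For orientation, the CDP labels have roughly this shape; example.com stands in for your domain, ports 3000 and 4000 are the Mastodon web and streaming defaults, and the exact labels in the repo may differ slightly:

web:
  labels:
    caddy: example.com
    caddy.reverse_proxy: "{{upstreams 3000}}"
streaming:
  labels:
    caddy: example.com
    caddy.reverse_proxy: "/api/v1/streaming/* {{upstreams 4000}}"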

Now you can start all the containers:

# start all containers
docker-compose up -d

# tail the logs to verify that nothing is broken
docker-compose logs -f --tail=100

At this stage, you should have the following containers up and running:

Correctly running containers
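If you prefer the terminal to a screenshot, the same check is one command:

# list the containers and their state
docker-compose ps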

And your caddy-docker-proxy should report that the certificate has been acquired and created. Now go to your domain and verify that you see the new Mastodon instance; if you don't, check your logs, they are usually very helpful.

Enabling ElasticSearch

If you are using the default docker-compose, the ES container will already be started; now we need to tell the Mastodon server to use it by adding this to the env file:

ES_ENABLED=true 
ES_HOST=es # docker-service name
ES_PORT=9200 # default port

After you have added this, you need to recreate the containers; the simplest way is to run docker-compose up -d, which will recreate all the containers affected by the env change.

To create the indexes from your DB to ElasticSearch, do the following:

# after es is up, enter the web container
docker-compose exec web bash

# then execute the chewy task
RAILS_ENV=production rails chewy:deploy
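If you want to sanity-check the result, you can query ES from inside the docker network (assuming the es service name and port 9200 from the env above; the official elasticsearch image ships with curl):

# list the indexes chewy created
docker-compose exec es curl -s "localhost:9200/_cat/indices?v"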

Enabling Minio or AWS S3

Decoupling the file storage from the running instance is a very smart idea since you can move the instance much more easily and quickly with this setup. If you don't use this, the media files pile up on the server's local disk, and migrating the instance means moving all of them along with it.

We are running our own Minio instances, and kelp.community is connected to them. The setup is the same for AWS S3.


S3_ENABLED=true
S3_BUCKET=my-bucket-name # set your bucket here
AWS_ACCESS_KEY_ID=my-awesome-access-key # Minio access key or IAM key from AWS
AWS_SECRET_ACCESS_KEY=secret-key # secret key
S3_ENDPOINT=https://the-s3-endpoint # you can see this in the AWS console or the Minio api server (9000)
S3_REGION=eu-west-1 # region
S3_HOSTNAME=s3.aws.hostname # just the s3 hostname

As before, recreate the affected containers by running docker-compose up -d.

Backups

Well, this is probably the MOST needed part when it comes to availability and crash recovery. We will back up only the DB, since the ElasticSearch and Redis data can always be regenerated.

Before doing any backups, please open backup-and-upload.sh and edit it according to your needs: the bucket name, the path where you want your files to be stored, and the API endpoint to connect to. The script automatically loads the .env.production file and exposes its vars to the script's internal state. If you want to change that, feel free to do so.

The backup script requires the following env variables to be exported:

  • BACKUP_AWS_ACCESS_KEY_ID -- Minio or AWS access key
  • BACKUP_AWS_SECRET_ACCESS_KEY -- Minio or AWS secret key
  • DB_PASS -- PostgreSQL password
  • DB_USER -- PostgreSQL username
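To make the moving parts concrete, here is a simplified sketch of what such a script can look like under those variables; backup-and-upload.sh in the repo is the source of truth, and the db service name, database name, bucket, and paths below are placeholders:

#!/usr/bin/env bash
set -euo pipefail

# load DB_USER, DB_PASS, etc., like the real script does
source .env.production

echo "Backing up the DB ..."
FILE="$(date +%F_%H:%M)_db_dump.sql"
docker-compose exec -T db \
  env PGPASSWORD="$DB_PASS" pg_dump -U "$DB_USER" mastodon_production > "$FILE"
echo "Backup successful, file name is $FILE"

echo "Uploading to the-s3-endpoint ..."
AWS_ACCESS_KEY_ID="$BACKUP_AWS_ACCESS_KEY_ID" \
AWS_SECRET_ACCESS_KEY="$BACKUP_AWS_SECRET_ACCESS_KEY" \
  aws --endpoint-url "https://the-s3-endpoint" s3 cp "$FILE" "s3://my-bucket-name/backups/$FILE"
echo "Upload done. removing the file"
rm "$FILE"
echo "🎉"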

Cronjob is set to run this once a day at midnight:

sudo crontab -e 

# add this line, ofc modified
0 0 * * * /path/of/backup-and-upload.sh > /path/of/logs/cron-executed.log
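Note that > captures only stdout; if you also want errors from the script in the same log file, append 2>&1:

0 0 * * * /path/of/backup-and-upload.sh > /path/of/logs/cron-executed.log 2>&1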

The simplest way to see whether the cron job ran, and whether there were any errors, is to send the output to a file like this. Of course, this can be improved a lot, for example by notifying a channel on Matrix or Discord when the cronjob has executed.

Example of a successfully executed cronjob:

❯ cat cron-executed.log 
Backing up the DB ...
Backup successful, file name is 2022-12-10_12:53_db_dump.sql
Uploading to the-s3-endpoint ...
Upload done. removing the file
🎉

Extra goodies

If you haven’t changed the caddy.encode label in the docker-compose.yml file, the web and streaming containers will have gzip and zstd encoding enabled, which means that all your static assets like js and css files will be served compressed, saving your users' bandwidth. The most obvious extra goody is the TLS, but for that, all thanks go to the CDP repo and the creators of the Caddy server.
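That label is the caddy-docker-proxy shorthand for Caddy's encode directive; on each of the two containers it looks like this:

labels:
  caddy.encode: gzip zstd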

I would like to know if you have had any issues with the setup or if you have any feedback on this post.

My federated handle is @woss@kelp.community; do send a message from your federated Mastodon instance, let’s connect and test it. ✌️✌️
