Deploying Docker Compose Applications With Ansible and GitHub Actions

(You might not need Kubernetes just yet)

Jay Hardee
The Startup
9 min read · Aug 31, 2020


Many developers reach for Kubernetes and other container orchestration solutions for deploying containerized applications. Yet there is still a case for using plain Docker Compose. Orchestration systems entail extra maintenance costs and increase onboarding time for new hires. A small team's application with a handful of containers won't reap many benefits from Kubernetes. This is even more true on-premises, outside of managed services like AWS EKS.

This article shows a sane strategy for deploying Docker Compose services. We will use two external services: GitHub for source control and CI (using Actions), and Docker Hub for hosting our Docker images. We will also use Ansible to configure the remote host, but you can substitute your own services and tools; the concepts should transfer.

Prerequisites:

  • GitHub account
  • Docker Hub account
  • Access to a remote machine
  • Python 3 on your development machine

Strategy

Using GitHub Actions, we build each repository's Docker image, run its tests, and push the image to Docker Hub. Then GitHub executes the appropriate deploy script on the remote host over SSH, which restarts the app with the new image(s).

To make this happen, you'll set up some repositories in Docker Hub and GitHub, configure your remote machine with a (relatively) unprivileged user to perform the deployments, and add some keys to GitHub.

Create repositories (GitHub)

First, create two repositories in GitHub (it’ll be easier to fork mine: https://github.com/jayhardee9/hello-web-app and https://github.com/jayhardee9/reverse-proxy). One will be a Flask app, and the other will be a reverse proxy (using NGINX). If you’re creating your own repositories, call them hello-web-app and reverse-proxy. That way you change fewer names, strings, etc. in the upcoming example code.

Create repository (Docker Hub)

Docker Hub will host the Docker image for the Flask app (you don’t need a paid account). Create a repository called hello-web-app. The reverse-proxy uses a publicly available NGINX image with a custom configuration file, so we don’t need another repository for it.

Ansible Overview and Setup

We will use Ansible to set up the host. From a maintenance standpoint, the big benefits we get from Ansible are:

  • File templates to instantiate multiple files that share the same structure
  • Encrypting secrets, like environment variables for each app and credentials
  • Retrieving public and private keys we need to set up CI
  • Easily adding more Docker Compose applications down the road

Now, I’m using a local VM running Ubuntu Server to dog-food everything here. You may have to tweak the playbooks some to get them working with your machine. Clone the docker-compose-infra repository to your local machine, and let’s go over the files therein.

The first file is hosts.yml — what is known as the inventory in Ansible terminology:

hosts.yml
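
A minimal sketch of such an inventory (the variable names, like github_account, are illustrative placeholders; the hosts.yml in the repository is authoritative, and its line numbers are what the TODOs below refer to):

    all:
      vars:
        github_account: your-github-username   # TODO: your GitHub account name
        # Docker Hub credentials (vault-encrypted) get added here in the next step
        apps:
          - hello-web-app
          - reverse-proxy
      hosts:
        my-host:
          ansible_host: your.remote.host       # TODO: IP address or domain of the remote machine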

It defines a single host called my-host, some variables with credentials, and specifies an apps array, which contains the app names. Instead of duplicating setup actions and files for each app, we will be able to iterate over the apps array and use templates, as we shall see later.

Following the TODOs, make the following changes:

  1. Add your GitHub account name on line 5
  2. Put the IP address or domain name of your remote host on line 12

Now to add your Docker Hub credentials, we’re going to use Ansible Vault to encrypt them first — then add them to the inventory so that they remain secret. Run these commands to install Vault (and Ansible itself):
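
For example, on your development machine (ansible-vault is bundled with Ansible, so a single pip install covers both):

    python3 -m pip install --user ansible
    ansible --version   # sanity check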

Now, one more thing before you encrypt your credentials: pick a vault password. You’ll need it whenever you encrypt or decrypt anything.

With your password handy, run ansible-vault encrypt_string twice: once for your username and once for your password. An example session with ansible-vault looks like this:
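
A session along these lines (output trimmed, and the variable name is whatever your hosts.yml expects):

    $ ansible-vault encrypt_string --ask-vault-pass --name 'docker_hub_username' 'my-docker-id'
    New Vault password:
    Confirm New Vault password:
    docker_hub_username: !vault |
              $ANSIBLE_VAULT;1.1;AES256
              36633862386436...   (long block of hex, trimmed)
    Encryption successful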

You then copy the string beginning with !vault | and paste it into hosts.yml like so:

hosts.yml — with a secret
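
The encrypted values end up nested under the host's (or group's) variables, something like this (variable names illustrative, ciphertext trimmed):

    vars:
      docker_hub_username: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        36633862386436...
      docker_hub_password: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        62313365396662...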

Moving on to setup.yml, you'll see a list of Ansible tasks that perform the initial setup of your machine. This is an Ansible playbook: a logical grouping of tasks. It assumes you have a sudoer user on the remote server. Briefly, it performs these steps (two of the tasks are sketched after the list):

  • Installing Docker and Docker Compose
  • Adding a special user for performing deploys
  • Generating key pairs for GitHub -> machine, and machine -> GitHub SSH access
  • Uploading deploy scripts
  • Copying the SSH keys back to the operator's machine so they can be added to GitHub and CI
  • Adding GitHub’s servers to the known-hosts file for the deploy user
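
To give a feel for the playbook, here is a sketch (not the actual setup.yml) of two of those tasks:

    - name: Create the deploy user
      ansible.builtin.user:
        name: deploy
        groups: docker
        append: yes
        shell: /bin/bash

    - name: Generate a key pair for each app
      ansible.builtin.command: >
        ssh-keygen -t ed25519 -N ''
        -f /home/deploy/.ssh/{{ item }}.id_ed25519
      args:
        creates: /home/deploy/.ssh/{{ item }}.id_ed25519
      become: yes
      become_user: deploy
      loop: "{{ apps }}"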

A few of these points deserve a deeper explanation, beginning with the SSH setup. For each app, we’re going to generate a separate key pair. Why?

  1. GitHub deploy keys (which allow machine users, like deploy, to access repositories) have to be unique across all repositories. We have two apps. Ergo, different keys for each app.
  2. We lock down our machine user's SSH access so that it can execute only one command: the deploy script. We want one-to-one relationships between applications, deploy scripts, and SSH keys. So there is a separate script for each app, each app has its own key pair, and each public key is locked to the corresponding deploy script/app (see the example authorized_keys entry after this list). This seems a little complicated, but Ansible loops and templates help out a lot here. See update-authorized-keys.sh in the Ansible repository.
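
Concretely, OpenSSH's forced-command mechanism does the locking: the generated ~/.ssh/authorized_keys entry for each app looks something like this (the script path and key comment are illustrative; update-authorized-keys.sh produces the real entries):

    command="/home/deploy/deploy-hello-web-app.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAAC3Nza... ci-hello-web-app

Whichever client authenticates with that key can only ever trigger that one deploy script, no matter what command it asks to run.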

Another SSH tidbit that’s interesting is the deploy user’s ~/.ssh/config:

Template for ~/.ssh/config
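
A sketch of what that template plausibly contains (the version in the repository is authoritative):

    {% for app in apps %}
    Host {{ app }}.github.com
        HostName github.com
        User git
        IdentityFile ~/.ssh/{{ app }}.id_ed25519
        IdentitiesOnly yes
    {% endfor %}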

It's a Jinja template that creates a config entry for each app in the deploy user's SSH configuration. It tells SSH that when we connect to, for example, hello-web-app.github.com, it should actually connect to github.com but authenticate with hello-web-app's private key.

Why? To set up Git access, we need to use our deploy keys, which differ from app to app. When we use the git CLI, we need a way of telling Git, "Hey, use this key for repository A, and this other one for repository B." So in the deploy user's SSH config, we bind the key for hello-web-app to the host alias hello-web-app.github.com. When we run git clone git@hello-web-app.github.com:<your account>/hello-web-app.git, SSH uses the correct key pair. If we just used github.com, Git wouldn't know which key to use, and we would get an authorization error.

There are two other playbooks in the docker-compose-infra repository: clone-projects.yml and deploy.yml. The first clones each application's GitHub repository and sets up its environment variable file:
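
A sketch of the kind of tasks clone-projects.yml contains (the github_account variable and paths carry over from the inventory sketch above and are assumptions; the playbook in the repository is authoritative):

    - name: Clone each app's repository as the deploy user
      ansible.builtin.git:
        repo: "git@{{ item }}.github.com:{{ github_account }}/{{ item }}.git"
        dest: "/home/deploy/{{ item }}"
      become: yes
      become_user: deploy
      loop: "{{ apps }}"

    - name: Install each app's (vault-encrypted) environment file as .envrc
      ansible.builtin.copy:
        src: "{{ item }}-envrc"       # decrypted automatically by the copy module
        dest: "/home/deploy/{{ item }}/.envrc"
      become: yes
      become_user: deploy
      loop: "{{ apps }}"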

The other deploys the latest commit for each app:
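
And deploy.yml plausibly boils down to something like this (again, the script naming is an assumption):

    - name: Run the deploy script for each app
      ansible.builtin.command: "/home/deploy/deploy-{{ item }}.sh"
      become: yes
      become_user: deploy
      loop: "{{ apps }}"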

The deploy scripts that the latter runs deserve a few comments. Each app has its own, instantiated from the deploy-app.sh.j2 template:
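
A sketch of what the template might render to for one app (the real deploy-app.sh.j2 is in the repository; the {{ app }} variable name is illustrative):

    #!/bin/bash
    set -euo pipefail

    cd "/home/deploy/{{ app }}"

    git pull --ff-only origin master     # pull the latest commit from master
    source .envrc                        # set the app's environment variables

    # start the app with the production Compose configuration,
    # pinned to the image built for this commit
    COMMIT="$(git rev-parse HEAD)" docker-compose \
        -f docker-compose.yml -f docker-compose-prod.yml up -d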

Each script simply pulls the latest commit from master, sets the environment variables, and spins up the Docker Compose application with the production Docker Compose config.

Set up remote machine

Let’s run the first Ansible playbook, setup.yml. Depending on your SSH setup, you may need to use different flags. Refer to ansible-playbook -h for more options.
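
For example (adjust the remote user and connection flags to match your SSH setup; --ask-vault-pass prompts for the Vault password you picked earlier):

    ansible-playbook -i hosts.yml setup.yml -u <your sudoer user> --ask-become-pass --ask-vault-pass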

After it completes, you should have a directory called keys; it contains a key pair for each app. The public key will be the GitHub deploy key, and the private key will allow GitHub Actions to run a deploy script. Before we run the other playbooks, we must get the keys into GitHub.

Beginning with the deploy key, copy the contents of /keys/my-host/home/deploy/.ssh/hello-web-app.id_ed25519.pub. Add the key as a deploy key for your hello-web-app repository. Do the same for reverse-proxy.

Settings > Deploy keys

Moving on to the private keys, go to Settings > Secrets of the hello-web-app repository to add a secret called SSH_KEY. The value should be the contents of /keys/my-host/home/deploy/.ssh/hello-web-app.id_ed25519. Again, do the same for reverse-proxy.

Before running the next playbook, create hello-web-app-envrc. The playbook expects, for each app, a script called <app name>-envrc that sets environment variables. Although hello-web-app doesn’t expect anything top-secret as an environment variable, let’s pretend otherwise and create an encrypted hello-web-app-envrc file using the following command (using the same Vault password as before):
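
ansible-vault create prompts for your Vault password and then opens your editor on a new, encrypted file:

    ansible-vault create hello-web-app-envrc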

Just add a line that exports the NAME variable (something like export NAME=World) in the text editor that just opened, then save and exit. Now if you open hello-web-app-envrc, you should see a bunch of gibberish, as desired.

Do the same process for reverse-proxy-envrc, but leave it blank, since that app doesn’t have any environment variables.

Now you should be able to run the clone-projects.yml playbook (using a similar command as the setup.yml playbook). Afterwards, you should see the project repositories cloned to deploy’s home directory, each containing a .envrc file. The next steps will be getting set up with the two apps, and triggering our first builds!

Set up hello-web-app

The first app we will deploy will be hello-web-app. All it will do is render “Hello, <name>”, inserting the NAME environment variable. As you see in main.py below, it is a basic Flask app that renders a greeting when visiting localhost:5000/. We’re going to get the project working on your local machine before triggering the first deployment.
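
A minimal equivalent of that app (assuming the greeting is served from the root route) looks like this:

    import os

    from flask import Flask

    app = Flask(__name__)


    @app.route("/")
    def hello():
        # Greet whoever the NAME environment variable names
        return f"Hello, {os.environ.get('NAME', 'world')}!"


    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)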

Next, let’s check out the Dockerfile:
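
Roughly, it's along these lines (a requirements.txt containing Flask is assumed):

    FROM python:3.8-slim

    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .

    EXPOSE 5000
    CMD ["python", "main.py"]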

Again, simple (and likely not optimal), but it gets our web app running. Also nothing to tweak here.

Now docker-compose.yml is more interesting:
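
A sketch of the two Compose files discussed below (the container_name and published port are assumptions, added here to make the cross-container DNS and local testing concrete):

    # docker-compose.yml (sketch) -- note: no image or build key on the web service
    version: "3.7"
    services:
      web:
        container_name: hello-web-app    # a stable DNS name on the shared network
        environment:
          - NAME
        networks:
          - my_services
    networks:
      my_services:
        external: true

    # docker-compose.override.yml (sketch) -- picked up automatically during local development
    version: "3.7"
    services:
      web:
        build: .
        ports:
          - "5000:5000"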

The big stand-out is the network configuration. For our Docker Compose applications to talk to each other, we need a shared Docker network that provides DNS resolution between them. In other words, we would like to resolve container and service names to the IP addresses of containers belonging to the other Compose application. You'll want to run docker network create my_services now to create that network.

Another interesting thing from docker-compose.yml is that we don’t specify an image or Dockerfile location for our web service. This is because during development, we’d like to build images, and on our “production” server, we want to pull the image from Docker Hub. If we look at docker-compose.override.yml, we see build: . . So running docker-compose up --build will build the Flask image and start the app. On the remote server, the deploy scripts run the following to use the “production” configuration:
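
In the deploy script, that boils down to something like:

    COMMIT="$(git rev-parse HEAD)" docker-compose \
        -f docker-compose.yml -f docker-compose-prod.yml up -d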

A bit more complicated… but it just:

  1. specifies which image to pull using the COMMIT variable, and
  2. sets the Docker Compose configuration by merging docker-compose.yml and docker-compose-prod.yml (see Docker's rules for merging multiple Compose files)

Now checking out docker-compose-prod.yml, we see that it pulls the image tagged with the current commit hash like I said — also be sure to put your Docker Hub account name on line 8!
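
For reference, a sketch of that file (the account placeholder marks where your Docker Hub account name goes in the real file):

    # docker-compose-prod.yml (sketch)
    version: "3.7"
    services:
      web:
        image: "<your Docker Hub account>/hello-web-app:${COMMIT}"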

Why not use the latest tag? Because if we only ever pushed latest, we couldn't revert in case of issues on prod. Keeping each deployed commit available to pull means we can roll things back if needed.

That wraps up hello-web-app. Let’s fire it up locally:
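
Assuming the my_services network from earlier exists, something like:

    docker network create my_services   # only if you skipped this step before
    export NAME=World                   # any value works; the app just echoes it
    docker-compose up --build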

Navigating to http://localhost:5000, you should be greeted!

Set up reverse proxy

Let's dig into the reverse proxy some. It's just an NGINX instance that listens on port 80 and forwards requests to /hello/ to our Flask app's port 5000. It should be relatively clear how to add other routes for more apps in the future.
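
A minimal nginx.conf in that spirit might look like this (the hello-web-app hostname assumes the Flask container is reachable under that name on the my_services network):

    events {}

    http {
      server {
        listen 80;

        location /hello/ {
          # resolved via Docker's DNS on the shared my_services network;
          # the trailing slash strips the /hello/ prefix before proxying
          proxy_pass http://hello-web-app:5000/;
        }
      }
    }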

We actually don't need to push any images for this app because we're only customizing NGINX by mounting our desired nginx.conf into the app container (refer to its docker-compose.yml). There are no environment variables either, so just run docker-compose up --build, and you should be able to navigate to http://localhost/hello/ and see your greeting again.

Trigger initial builds

Our remote server is ready to run our apps, but GitHub isn’t quite ready to deploy them. Again, we’re using GitHub Actions to do our CI, so let’s check out the configuration for hello-web-app (at .github/workflows/ci.yml):
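
A sketch of such a workflow, matching the steps listed below (the test command and local image name are assumptions; the ci.yml in the repository is the real reference):

    name: CI

    on: [push]

    jobs:
      build-test-deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2

          - name: Build and start the app
            run: |
              docker network create my_services
              docker-compose up --build -d

          - name: Show logs for debugging
            run: docker-compose logs

          - name: Run tests
            run: docker-compose exec -T web python -m pytest   # test runner is an assumption

          - name: Tag the image with the commit hash and push it to Docker Hub
            run: |
              echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
              # hello-web-app_web is Compose's default <project>_<service> image name
              docker tag hello-web-app_web "${{ secrets.DOCKER_USERNAME }}/hello-web-app:${GITHUB_SHA}"
              docker push "${{ secrets.DOCKER_USERNAME }}/hello-web-app:${GITHUB_SHA}"

          - name: Deploy over SSH (master only)
            if: github.ref == 'refs/heads/master'
            run: |
              mkdir -p ~/.ssh
              echo "${{ secrets.SSH_KEY }}" > ~/.ssh/id_ed25519
              chmod 600 ~/.ssh/id_ed25519
              # the forced command in authorized_keys runs the deploy script,
              # regardless of the command we pass here
              ssh -o StrictHostKeyChecking=no -p "${{ secrets.SSH_PORT }}" deploy@"${{ secrets.SSH_HOST }}" true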

The steps are:

  1. Check out the code
  2. Build and run the app
  3. Get logs for debugging
  4. Run the tests
  5. Tag the image with the current commit hash, and push it to Docker Hub
  6. If on the master branch, execute the deploy script on the remote host

Notice secrets.DOCKER_USERNAME, secrets.DOCKER_PASSWORD, and so on? We need to add those under the repository’s Settings > Secrets — the same process we used to add the SSH_KEY secret. Add these to both GitHub repositories:

  1. DOCKER_USERNAME — your Docker Hub username
  2. DOCKER_PASSWORD — your Docker Hub password
  3. SSH_HOST — your remote machine hosting the apps
  4. SSH_PORT — your remote machine’s SSH port

Next, go to hello-web-app repository’s Actions tab and click the I understand my workflows, go ahead and run them button (if you do, in fact, understand them). Once that Action is finished, head over to the reverse-proxy repository to kick off the workflow (make sure to add the above secrets first).

Once reverse-proxy finishes deploying, navigate to http://your-remote-host/hello/, and you should see a greeting!

Conclusion

That’s it! Now you have multiple Docker Compose applications being continuously deployed to your remote host, securely and with run-of-the-mill server tools like Ansible and SSH. It’s a simple setup (no automated rollbacks, no load balancing across horizontally-scaled services, and so on), but anyone with some Linux sysadmin experience can understand in an hour (I hope) how everything works together. Often, that’s more important.
