Running Elixir in development with Docker and docker-compose

Poff Poffenberger
6 min read · Jan 8, 2023


My first experience with Elixir was on a research project for my job, where I was tasked with identifying a potential replacement for our Python Flask APIs. Our Python APIs are all dockerized, and we use docker-compose in development to make it easy to quickly stand up an environment with all the core services and a Postgres db. Using docker-compose in development has benefits beyond just standing up a dev stack quickly (for example, it can closely replicate production environments), but this feature was especially important for us: we weren’t going to replace our Flask APIs with Phoenix APIs overnight, and we wanted to minimize friction as we considered switching over.

For this project, I was building a simple Phoenix API that used Ecto (the big Elixir lib for talking with databases) to persist data in Postgres. Since this is a dev environment, I wanted automatic code reloading/compiling on changes to avoid having to restart/rebuild each time I changed something. Additionally, Elixir/Phoenix/Ecto have a bunch of generators (mix ecto.gen.migration to easily create migrations, for example) and I wanted to be able to use them in this project.

So, let’s get started by taking a look at the Dockerfile for development:

FROM elixir:1.14

ARG USER_ID
ARG GROUP_ID

EXPOSE 4000

RUN apt-get update && \
apt-get install -y postgresql-client && \
apt-get install -y inotify-tools

# Add an api_user so files created are not owned by root
RUN groupadd -g ${GROUP_ID} api_user
RUN useradd -l -m -u ${USER_ID} -g api_user -s /bin/bash api_user && su api_user -c 'mkdir -p /home/api_user/app'
USER api_user

WORKDIR /home/api_user/app

RUN mix local.hex --force && mix local.rebar --force

# Install dependencies so the image is ready to go
COPY --chown=api_user:api_user mix.exs mix.lock ./
RUN mix deps.get
RUN mix deps.compile

This isn’t terribly complicated, but let’s break it down. After we declare which Docker image we want to build from, we specify a few build args for our container:

ARG USER_ID
ARG GROUP_ID

More on this in a bit.

Next, we expose the port our API will run on, then we install some system dependencies.

RUN apt-get update && \
apt-get install -y postgresql-client && \
apt-get install -y inotify-tools

postgresql-client will be used in a bit when we need to wait for the database to come online, and inotify-tools lets us automatically reload when our code changes. Phoenix will actually set this up for us out of the box, but inotify-tools needs to be installed first.
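For reference, the reloading behavior is driven by the live_reload patterns Phoenix generates in config/dev.exs. A trimmed sketch of what that generated config typically looks like (MyAppWeb and :my_app are placeholder names, not from this project):

```elixir
# config/dev.exs (generated by Phoenix; :my_app / MyAppWeb are placeholders)
config :my_app, MyAppWeb.Endpoint,
  live_reload: [
    patterns: [
      ~r"priv/static/.*(js|css|png|jpeg|jpg|gif|svg)$",
      ~r"lib/my_app_web/(controllers|live|components)/.*(ex|heex)$"
    ]
  ]
```

When a file matching one of these patterns changes, the watcher (backed by inotify on Linux) triggers a recompile and a browser/API reload.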

Now, things are about to get interesting. As I said before, I want to make use of the generators that come with Elixir/Phoenix/Ecto. In earlier revisions of my Dockerfile, I would attempt to generate a migration with something like mix ecto.gen.migration hello, and it would create the migration fine, except that the new files were owned by root, since processes in a container run as root by default. That meant that in my editor, if I tried to modify the new migration, I’d get a permission denied error. Of course, I could just sudo chown the file, but that’s a pain and disrupts the coding flow if I have to do it for every generated migration.

RUN groupadd -g ${GROUP_ID} api_user
RUN useradd -l -m -u ${USER_ID} -g api_user -s /bin/bash api_user && su api_user -c 'mkdir -p /home/api_user/app'
USER api_user

What we’re doing here instead is creating a new user in the container whose UID and GID are set to whatever the USER_ID/GROUP_ID build args are. We want those to match the IDs of the user who will be doing the editing on the host. More on how we pass that in later.
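If you're wondering where those values come from, they're just the host user's numeric IDs. You can check them with `id`; these are the values the build args should receive:

```shell
# Print the current host user's numeric user and group IDs.
# These are what USER_ID and GROUP_ID should be set to so that files
# created inside the container are editable on the host.
id -u
id -g
```

On most single-user Linux setups both of these are 1000, which is why the compose file shown later defaults to 1000 when the variables aren't set.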

After that whole deal, we set our work directory to the app/ directory we just created for our new user, install Hex and Rebar for mix, and then there’s more fun:

COPY --chown=api_user:api_user mix.exs mix.lock ./
RUN mix deps.get
RUN mix deps.compile

The COPY command copies our mix.exs and mix.lock files to the correct place. By calling it with --chown we make sure that we (the editing user) can change these files from both inside and outside the container. Then we fetch our deps and compile them. It’s not strictly necessary to compile the dependencies in the Dockerfile, but I found that doing so made the app’s startup time much quicker, since deps don’t have to be re-compiled each time.

Great! Dockerfile done. Let’s take a look at how we’re going to start up the Phoenix server. This is a file I added called entrypoint.sh:

#!/bin/bash
set -e

# Ensure the app's dependencies are installed
mix deps.get

# Wait until Postgres is ready
while ! pg_isready -q -h $PGHOST -p $PGPORT -U $PGUSER
do
  echo "$(date) - waiting for database to start"
  sleep 2
done

# Create, migrate, and seed database if it doesn't exist.
if [[ -z `psql -Atqc "\\list $PGDATABASE"` ]]; then
  echo "Database $PGDATABASE does not exist. Creating..."
  createdb -E UTF8 $PGDATABASE -l en_US.UTF-8 -T template0

  # This migrates the db and then seeds the db
  mix ecto.setup

  echo "Database $PGDATABASE created."
fi

mix phx.server

This was modified from a really helpful post on the Elixir Forum.

Each time we run this, we attempt to fetch our dependencies again (in case a new dep was added in our mix.exs) before waiting for the Postgres db to come online (this is thanks to the postgresql-client we installed in our Docker image). Once Postgres is running, we look to see if our database (defined in our environment vars as PGDATABASE) exists. If it does not, we create it and run mix ecto.setup which runs our ecto migrations for us and then seeds the database (if there’s anything defined in priv/repo/seeds.exs). Finally, we start up the Phoenix server. Again, Phoenix by default will attempt to reload on code changes.
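The wait loop above generalizes beyond Postgres: any service the app depends on can be gated the same way. As a sketch (the wait_for helper below is hypothetical, not part of this project's scripts), the same retry-until-ready pattern can be pulled into a reusable function:

```shell
#!/bin/bash

# Hypothetical helper generalizing the entrypoint's wait loop:
# retry a readiness check until it succeeds or we run out of attempts.
wait_for() {
  local max_attempts=$1
  shift
  local attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "$(date) - gave up waiting for: $*" >&2
      return 1
    fi
    echo "$(date) - waiting for: $*"
    sleep 2
  done
}

# The entrypoint's loop is then roughly equivalent to:
# wait_for 30 pg_isready -q -h "$PGHOST" -p "$PGPORT" -U "$PGUSER"
```

Adding a bounded attempt count (unlike the infinite loop in the entrypoint) means a misconfigured PGHOST fails loudly instead of hanging forever.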

Now, let’s look at wiring all this together with a docker-compose.yml:

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        USER_ID: ${USER_ID:-1000}
        GROUP_ID: ${GROUP_ID:-1000}
    environment:
      PGUSER: postgres
      PGPASSWORD: postgres
      PGDATABASE: db
      PGHOST: db
      PGPORT: 5432
    ports:
      - "4000:4000"
    depends_on:
      - db
    volumes:
      - api_deps:/home/api_user/app/deps
      - api_build:/home/api_user/app/_build
      # Explicitly excludes _build and deps directories from bind mounts
      - ./api/config:/home/api_user/app/config
      - ./api/lib:/home/api_user/app/lib
      - ./api/priv:/home/api_user/app/priv
      - ./api/test:/home/api_user/app/test
      - ./api/entrypoint.sh:/home/api_user/app/entrypoint.sh
      - ./api/mix.exs:/home/api_user/app/mix.exs
      - ./api/mix.lock:/home/api_user/app/mix.lock
      - ./api/.formatter.exs:/home/api_user/app/.formatter.exs
    command:
      - "./entrypoint.sh"
  db:
    image: postgres:15.1
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
volumes:
  api_deps:
  api_build:

This is all fairly straightforward docker-compose stuff, but there are two points I want to call out.

First, under the build configuration for our api, we’re passing in our USER_ID and GROUP_ID from environment variables so that our Docker image for the API builds with the correct permissions like we talked about.

Second, you’ll notice a bunch of files and directories being mounted into our container. I’m intentionally using named volumes for the deps and _build directories because we want our deps and built files to persist across container restarts, but we don’t want them existing outside the container on the host machine. After those two definitions, we manually mount all the other files our app needs to run. This lets the host filesystem share the files with the running container, so we can edit the code on the host machine and Phoenix will automatically pick up the changes in the container and reload our API. Neat!
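One practical consequence of the named volumes: deps and _build survive a plain docker-compose down. If you ever want a truly clean slate (say, after changing the Elixir version in the image), the volumes have to be removed explicitly. A minimal sketch:

```shell
# Stop the stack and also remove its named volumes (-v), so deps and
# _build are re-fetched and re-compiled on the next startup.
docker-compose down -v
```

Without -v, compose leaves the api_deps and api_build volumes in place, which is exactly the caching behavior we want day to day.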

The last piece of connecting glue is a convenience script I made (I called it dkc_dev and made it executable) that sets the USER_ID and GROUP_ID variables for docker-compose to use when building the image.

#!/usr/bin/env bash

# Set environment variables for building the container
# to ensure the correct user permissions are set
export USER_ID=$(id -u ${USER})
export GROUP_ID=$(id -g ${USER})

# Quote "$@" so arguments containing spaces are forwarded intact
docker-compose -f docker-compose.yml "$@"

We grab and set the variables so that if/when docker-compose needs to build our API image, they’re set automatically. With this script, you can run ./dkc_dev anywhere you’d normally run docker-compose. So, for example, if I want to run my Elixir tests, I can call ./dkc_dev run api mix test. Or, if I want to stand everything up: ./dkc_dev up.

So, that’s that. It looks like a lot (and it kinda is), but it should allow any developer to clone your repo, run ./dkc_dev up, and have an immediately usable, seeded development environment with reloading on code changes.

One major downside of this approach is that if you’re going to be running ElixirLS for Elixir language support and debugging (in VS Code for example), you’re going to need Elixir/Erlang installed on the host machine anyway.

If I were setting up a fresh, clean, new project, I would likely skip this whole setup and just run Elixir on my host machine (especially considering how easy it is to install and manage with asdf). But if you need your Elixir app running in docker-compose, here you go!

Hope this helps!

Written by Poff Poffenberger

Senior Software Engineer. I have experience in Typescript and Python but now I'm having fun exploring the wild world of Elixir. https://poff.bingo
