Elixir + Kubernetes = 💜 (Part 2)
How to Set Up an Auto-Scaling Elixir Cluster with Elixir 1.9
This is Part 2 of a three-part series on how to create and deploy an Elixir application that will automatically scale on Kubernetes. Part 1 and Part 3 are also available. If you just want to see the source code, it’s available here: https://github.com/groksrc/el_kube, and if you just want a summary of the commands, the slide deck from the original talk is available here.
In Part 1 of this series, we created an Elixir application that is ready to be deployed in an Erlang cluster configuration. The next step (Step 12) is to create the Docker container for the app and make sure everything spins up. In this post I’m using a default install of the latest Docker for Mac, but this should work on Linux and Windows as well.
12: Create the Dockerfile and .dockerignore
Let’s start by creating a Dockerfile and .dockerignore in the root of the project:
$ touch Dockerfile && touch .dockerignore
13: Edit the Dockerfile and .dockerignore
Now paste in the content for the Dockerfile and we’ll look at what it does:
FROM elixir:1.9 AS builder

ENV MIX_ENV=prod

WORKDIR /usr/local/el_kube

# This step installs all the build tools we'll need
RUN curl -sL https://deb.nodesource.com/setup_13.x | bash - && \
  apt-get install -y nodejs && \
  mix local.rebar --force && \
  mix local.hex --force

# Copies our app source code into the build container
COPY . .

# Compile Elixir
RUN mix do deps.get, deps.compile, compile

# Compile Javascript
RUN cd assets \
  && npm install \
  && ./node_modules/webpack/bin/webpack.js --mode production \
  && cd .. \
  && mix phx.digest

# Build Release
RUN mkdir -p /opt/release \
  && mix release \
  && mv _build/${MIX_ENV}/rel/el_kube /opt/release

# Create the runtime container
FROM erlang:22 AS runtime

WORKDIR /usr/local/el_kube

COPY --from=builder /opt/release/el_kube .

CMD [ "bin/el_kube", "start" ]

HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=2 \
  CMD nc -vz -w 2 localhost 4000 || exit 1
At the top of the file we start with an Elixir base container image, set our MIX_ENV environment variable to prod, and configure our working directory to /usr/local/el_kube.
For those not familiar with Docker, you can think of it like this: the Docker runtime reads the Dockerfile and executes each command from top to bottom. So first it starts from the base container image, then it sets the environment variable. The WORKDIR command creates a directory at that path and changes into it.
We then execute the bash script that follows the RUN command to install the dependencies necessary to compile the app.
Next we COPY the code in our working directory (where we ran the command) into the container and compile it. Then we compile the JavaScript and call the mix release command to generate the release.
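Incidentally, if you want to see what mix release produces before involving Docker, you can build and run the release directly on your host. A quick sketch, assuming the runtime environment variables from Part 1 (DB_URL, SECRET_KEY_BASE, and friends) are exported in your shell:
$ # Build the release on the host (optional; the Docker build does this for you)
$ MIX_ENV=prod mix release
$ # Start the release in the foreground from its output directory
$ _build/prod/rel/el_kube/bin/el_kube start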
ProTip: Don’t forget the .dockerignore file. You don’t want the _build directory from your development host winding up in the container. We’ll add a .dockerignore next.
Next, you find another FROM command. What is this, you ask? Great question! It’s a two-phase (multi-stage) container build. Huh? Right. Each line of the Dockerfile creates a “layer” in Docker parlance that is used to compose the container. In our release container we don’t actually want or need all of the dependencies necessary to compile the app. We just want the output of all of that.
So in the first phase of the build we compile, and in the second phase we copy over the results from the first phase, including only the dependencies we need to actually execute the application. The second phase of the container build process starts with the second FROM command, which pulls the erlang:22 base image (the same Erlang foundation the elixir:1.9 builder image is built on) and copies in the release output from the first phase of the build.
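A handy side effect of the multi-stage build: you can tell Docker to stop after the first stage, which is useful when debugging compile problems. This is purely illustrative and not part of the deployment flow; --target is a standard docker build flag:
$ # Build only the builder stage and tag it separately
$ docker build --target builder -t el_kube:builder .
$ # Open a shell inside the builder image to poke around at the compiled release
$ docker run -it --rm el_kube:builder bash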
Back in the Dockerfile, we then set the CMD to execute at startup and end with a HEALTHCHECK. The HEALTHCHECK here is didactic; you don’t really need it, but it’s nice to show. The container runtime runs the health-check command at the configured interval, and if it fails the specified number of times the container is marked unhealthy so it can be killed and replaced. In practical terms, what that means for this project is that a failed container would be replaced by a new one that automatically rejoins the cluster. Pretty cool!
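Once the container is running (Step 15 below), you can watch the health check in action with docker inspect. A small sketch; substitute whatever container name docker container ls reports for yours:
$ # Show the current health status and the last few probe results
$ docker inspect --format '{{json .State.Health}}' <container-name>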
Now that we’ve looked at the Dockerfile let’s throw in a few lines to the .dockerignore:
_build/
.elixir_ls/
.git/
.vscode/
deps/
priv/static/
k8s/
test/
.dockerignore
.env
.formatter.exs
.gitignore
.travis.yml
Dockerfile
README.md
This is my .dockerignore, so customize it as you see fit. Just don’t remove that top line! Now we’re ready to build the container.
14: Build the container
Issue the following command to build the container. You should see the success messages that follow.
$ docker build -t el_kube:latest .
...
Successfully built 1a5ed90cc8c0
Successfully tagged el_kube:latest
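Before moving on, you can confirm the image exists and sanity-check the release binary inside it. A quick sketch; bin/el_kube version is one of the standard commands generated for Elixir 1.9 releases and should print the release name and version without booting the app:
$ # Confirm the image was built and tagged
$ docker image ls el_kube
$ # Print the release name and version from inside the container
$ docker run --rm el_kube:latest bin/el_kube version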
15: Smoke test the container
Now let’s see if it works. We’re going to use Docker to create a private network, a Postgres container, and our app container, and make sure they’re all communicating. The benefit of creating the private Docker network is that all containers on the network can reach each other on any port, and you won’t conflict with ports on your host. You could, of course, use the default bridge network, but then you’d have to use the deprecated --link argument to get the containers talking. More details are available in the Docker network docs.
$ docker network create el-kube-net
80a3e59c2cc82c7d2e4aff2d2a8a2daf14f3731f0e7604777be5f925b99b0006
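If you want to confirm the network exists (and, later, see which containers have joined it), docker network inspect will show you:
$ docker network inspect el-kube-net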
Now that the network is available, let’s start a Postgres container and join it to the el-kube-net network.
$ docker run --rm -d \
-h db \
-e POSTGRES_DB=el_kube_prod \
-p 5432 \
--name db \
--network el-kube-net \
postgres:9.6
116714d3b9b03f8730bd2c942d092ea3c2ad0d97fc955e656e8e73c96ca40ca0
Here we’re telling Docker to run the postgres:9.6 image. The --rm flag will remove the container files when the container stops. The -d flag sets it to run as a daemon. The -h flag sets the DNS name of the container inside the el-kube-net network, which allows other containers to refer to it as db. The -e flag passes the POSTGRES_DB environment variable; the Postgres container looks for this variable and, if it’s set, automatically creates a database with the supplied name once the database engine starts. No need to issue a CREATE DATABASE command.
The --name flag tells Docker what this container will be named. That’s different from the -h flag, which is how the container is referred to inside the network; the name is what you use from outside the network, at the terminal on your Docker host. And finally, the --network flag joins the container to the user-defined bridge network we created above.
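To verify the database came up and is reachable by its db hostname, you can run a throwaway psql client on the same network. A quick sketch; this assumes the trust authentication that older postgres:9.6 images default to when no password is configured:
$ # Connect to the db container over the private network and list its databases
$ docker run -it --rm --network el-kube-net postgres:9.6 \
    psql -h db -U postgres -c '\l'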
Finally! Let’s start the app. We pass all of the environment variables described in Part 1 of the series, in addition to the --network and --publish flags. I didn’t cover the --publish flag previously: it maps port 4000 on the host (outside the Docker network) to port 4000 inside the container, allowing traffic to pass through.
$ docker run -it -d --rm \
-e DB_URL=ecto://postgres:postgres@db/el_kube_prod \
-e RELEASE_COOKIE=secret-cookie \
-e SECRET_KEY_BASE=your-secret-key \
-e SERVICE_NAME=el-kube \
-e APP_HOST=localhost \
-e PORT=4000 \
--network el-kube-net --publish 4000:4000 el_kube:latest
f6cba46a29c079284b7625554105640e4a5beaff1bdf417d6fa623c26948d26b
If you got that final hash back it’s a good sign. Let’s make sure our containers are up:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6cba46a29c0 el_kube:latest "bin/el_kube start" 9 seconds ago Up 8 seconds (health: starting) 0.0.0.0:4000->4000/tcp reverent_hugle
116714d3b9b0 postgres:9.6 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:32768->5432/tcp db
This is a bit ugly, but if you see two containers you’re in pretty good shape. Make sure you can again browse to http://localhost:4000. If so, you’re in business. The container is working and now we’re ready to spin up on Kubernetes.
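If you want to go beyond the browser check, you can tail the app’s logs or attach a remote IEx shell to the running release. A sketch; substitute the container name docker container ls gave you (reverent_hugle in the output above). The bin/el_kube remote command is one of the standard Elixir 1.9 release commands:
$ # Follow the application logs
$ docker logs -f reverent_hugle
$ # Attach an IEx shell to the running node; Node.self() should report its name
$ docker exec -it reverent_hugle bin/el_kube remote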
EDIT 01/2020: I recently updated the source code to make it easier to smoke test the application. Now all you need to do is:
$ docker-compose up
See the source code repository for the docker-compose.yaml and docker.env files that make this work.
In Part 3 of the series, we’ll put these first two pieces together and get our Erlang cluster running on our minikube cluster. It’s a cluster of clusters. You might even say, “it’s a real cluster.”
-g