Elixir Releases & Docker — The Basics Explained

You’ve heard about Docker, you’ve heard about Releases — but what exactly are they? Should you use them? Can they work together?

Deploying an application to a production server can be a tedious task. You always have to prepare and package your code for execution on the remote system first. This is, of course, no different for Elixir and Erlang applications. In this article, I want to explore two ways of packaging an application for deployment and look at the potential benefits of combining both approaches.

Packaging an Application with Erlang’s Releases

Like so many things we enjoy about Elixir today, Releases existed in the Erlang world long before Elixir ever saw the light of day. Releases have always been the go-to approach for deploying Erlang applications. But what exactly are they? How can you use them in an Elixir project?

What is a Release?

The Erlang documentation describes Releases as “complete systems” of one or more applications:

When you have written one or more applications, you might want to create a complete system with these applications and a subset of the Erlang/OTP applications. This is called a release.
- Chapter 10, OTP Design Principles

What does that mean? Releases are packages of compiled Erlang/Elixir code (i.e. BEAM bytecode). On top of that, they contain metadata and utility scripts for launching and managing the application as a whole.

Releases may also contain the Erlang runtime (ERTS, short for Erlang Runtime System Application). Releases that include ERTS are almost completely self-contained. They have no external dependencies except for true essentials such as libc or openssl (if you’re using the :crypto application).

The Anatomy of a Release

Most of the time you’ll be dealing with Releases in the form of .tar.gz archives. But let’s still take a quick look at the internal file structure of a Release:

/bin: This folder contains a script by the name of your application. This script is the main entry point for starting and managing your application.

/lib: This folder contains the compiled bytecode of your application and all its dependencies. If you have any additional assets in your project’s /priv folder, they will also be copied here.

/rel: This directory contains metadata about the Release. Among other things, this includes:

  • The Release Resource File (.rel file): A tuple with the version numbers of your application and its dependencies. It is similar in function and appearance to mix.lock.
  • The boot script (.script file): An Erlang script which launches the Erlang runtime and your application. It is compiled to a binary format that has the .boot extension.
  • The hooks directory: This folder includes shell scripts that run at certain points of the application lifecycle. Like good old init scripts, they are grouped into subfolders with self-explanatory names. A script in the pre_start.d folder, for example, will run before the application starts.

/erts-$version (optional): This folder includes the Erlang runtime if you choose to include it in your Release. ERTS contains all the files necessary to run the compiled version of your application. This means you don’t need to have Erlang and Elixir installed on the target machine.

Hot Patching Releases

Erlang and Elixir support dynamic software updating — better known as hot code (re)loading in the OTP world. This means you can update an application without ever stopping it. The BEAM virtual machine makes this possible because it can hold more than one version of a module in memory at the same time.

When you hot-load an updated version of a module, all processes still using the old version will keep using it until restarted or until they explicitly request the new version. Newly spawned processes, on the other hand, will automatically use the updated module.

Releases support hot updates out-of-the-box. When you create a Release that is an update to a previous Release, you can include an .appup file in the /rel folder. This file is used in the upgrade process to determine which modules should be updated and how.
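To give you a feeling for what such a file looks like, here is a hand-written .appup for a hypothetical application myapp, upgrading from version 0.1.0 to 0.2.0 by reloading a single (equally hypothetical) module. The file consists of plain Erlang terms: the new version, a list of upgrade instructions, and a list of downgrade instructions.

```erlang
%% myapp.appup (hypothetical): upgrade/downgrade between 0.1.0 and 0.2.0
{"0.2.0",
 [{"0.1.0", [{load_module, 'Elixir.MyApp.Worker'}]}],   %% upgrade instructions
 [{"0.1.0", [{load_module, 'Elixir.MyApp.Worker'}]}]}.  %% downgrade instructions
```

Note that Elixir modules appear under their full atom names (`'Elixir.MyApp.Worker'`), since .appup files are plain Erlang.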

By the way: If you have used iex (the interactive Elixir shell) before, you’ve probably done hot code loading already: The iex helper functions r/1, c/1, and recompile/0 all do exactly that.
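For instance, a typical iex session might look like this (Greeter is a hypothetical module living in lib/greeter.ex):

```elixir
# Inside an iex -S mix session; module and file names are illustrative.
iex> c("lib/greeter.ex")   # compile the file and load the module
iex> Greeter.hello()
# ... now edit lib/greeter.ex in your editor, then:
iex> r(Greeter)            # recompile and hot-load the new version
iex> Greeter.hello()       # calls now run the updated code
```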

Creating Releases

The best way to create Releases for Elixir applications is arguably the Mix extension Distillery, which is available on Hex. With Distillery, there is little need to worry about the internal structure of Releases or manually creating .rel and .appup files.
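As a rough sketch of the workflow (assuming a Distillery version where the Mix tasks are named release.init and release; check the Distillery docs for your version), building a Release boils down to a few commands:

```shell
# After adding {:distillery, "~> 1.5"} to the deps in mix.exs:
mix deps.get
mix release.init            # generates rel/config.exs with sensible defaults
MIX_ENV=prod mix release    # builds the Release tarball under _build/prod/rel/
```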

In an upcoming article for this series, I’ll explain how to create a Release from your project step-by-step.

Put it in a Container!

And now for something completely different.

Once you want to deploy your Elixir application, you’re faced with a choice: Where should your application run? There are many PaaS (platform as a service) and IaaS (infrastructure as a service) providers from which you could pick: AWS Elastic Beanstalk, Google App Engine, Heroku, or one of the various Cloudfoundry vendors, to name just a few. Alternatively, you could opt for a simple VPS or even a dedicated server. No matter what you choose, you’ll be faced with many different deployment mechanisms required to get your code running. Almost every platform offers its own set of command-line tools, shell script collections and configuration files. Wouldn’t it be nice to have a way of deploying applications that works across different platforms and vendors?

In the physical world, there is a way to package things regardless of their properties, the method of transportation, or any specific shipping company. You already know what I’m talking about: Standardized Containers! Since the advent of intermodal containers — which can be transported by sea and by land — delivering wares from A to B has become much easier and hence much more affordable.

While metal containers have been around since the 1930s, it took them the better part of a century to become as omnipresent as they are today. Digital containers — more precisely, Docker containers — on the other hand, were first presented to the public at PyCon in 2013 and have quickly gained massive popularity since.

Isn’t Docker Just Virtual Machines for Fancy People?

Docker is not yet another virtualization mechanism joining the ranks of VirtualBox, QEMU or Xen. Simply put, when you’re using traditional virtualization, a whole other computer is simulated — complete with processor, graphics card, networking interface and input devices. In order to run even the simplest application on such a virtual machine, you also need to run a complete operating system with its own kernel and drivers that interact with the simulated hardware.

(Figure: All Docker containers in this example share the same underlying kernel; two of the containers share the same image.)

Docker does no such thing: It is but a thin layer on top of an operating system such as Linux or Windows that manages the execution of containers. All containers on a machine share the kernel with the host OS but not its system libraries. This means that a container doesn’t need to include hardware drivers or even a full operating system. Instead, all you need is the executable you want to run and its dependencies. In a best-case scenario, this means a single binary combined with its libc would be enough to make a working Docker container.

Containers and Immutable Images

At this point, we need to add a small but important distinction and introduce a new term: Images. Images are immutable bundles of assets (code, configuration, etc.) that can be shared by multiple containers. Containers, on the other hand, are really instances of an image with an additional read/write layer. This means that when we assemble code to be run using Docker, we are creating an immutable image, which can then be used to spin up one or several containers.
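On the command line, this distinction maps directly onto the two core Docker commands (the image name myapp is, of course, hypothetical):

```shell
docker build -t myapp .           # assemble an immutable image from a Dockerfile
docker run -d --name web1 myapp   # container #1: an instance of the image
docker run -d --name web2 myapp   # container #2: same image, separate read/write layer
```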

Docker images are created automatically from so-called Dockerfiles. These files contain a series of simple commands describing how an image should be set up. Here is a minimal example:

FROM alpine:latest
RUN apk --no-cache add elixir
ENTRYPOINT ["iex"]

First, we specify that this image should be based on the latest version of Alpine Linux. Then we run the package manager apk in order to install Elixir. Finally, we specify the iex executable to be the ENTRYPOINT of the image so that it will run automatically when we start a container from this image. And that’s all there is to it. Now we have a fully self-contained Elixir environment in a Docker image. It takes up less than 35 MB of disk space and can run on any Linux machine.
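Building and trying out this image takes two commands (the tag elixir-alpine is arbitrary):

```shell
docker build -t elixir-alpine .    # build the image from the Dockerfile above
docker run -it --rm elixir-alpine  # drops you into iex, thanks to the ENTRYPOINT
```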

There is, of course, quite a bit more to be said about creating Docker images — especially when preparing them for deployment in production — but I will write more about that in a future post.

Releases and Docker Images — A Good Fit?

Releases and Docker images — two concepts that are — technically — completely unrelated. But the true power of Releases and Docker images comes from combining them.

Repeatable and atomic deployments have become a cornerstone of modern software development. Releases are an excellent foundation for repeatable deployments since they provide an easy way to bundle up a full Elixir project. Wrapping Releases into Docker images ensures compatibility between target systems and will spare you many of the headaches caused by incompatible system libraries.

Now, how do you best put your Release in a Docker image? I will explore this question in depth in an upcoming article. If you can’t wait until then, feel free to check out mix_docker for yourself.
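As a teaser, one common pattern is a multi-stage build: compile the Release (with ERTS bundled in) inside a full Elixir image, then copy only the finished Release into a minimal runtime image. All names, paths and package choices below are illustrative, not a definitive recipe:

```dockerfile
# Stage 1: build the Release (app name "myapp" is hypothetical)
FROM elixir:alpine AS build
WORKDIR /app
COPY . .
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force && \
    mix deps.get && mix release

# Stage 2: minimal runtime image — only the Release and its few system deps
FROM alpine:latest
RUN apk --no-cache add bash openssl
COPY --from=build /app/_build/prod/rel/myapp /opt/myapp
ENTRYPOINT ["/opt/myapp/bin/myapp"]
CMD ["foreground"]
```

Because the Release ships its own ERTS, the runtime stage needs neither Erlang nor Elixir installed — exactly the self-containment discussed earlier.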

This article is part of an ongoing series about developing and deploying Elixir applications to production. The next articles will cover the practical aspects of creating Releases and Docker images for Elixir applications.

In this series I am sharing my experience from creating DBLSQD, a software release and update server written in Elixir. Check it out: there is a 60-day, no-strings-attached free demo: