Building Elixir Applications: mix with nix — part 1

arvindpunk · Gamezop Tech · May 8, 2024

A lot of our services are written in Elixir, and one of the major issues we had with deploying them was CI build times. The CI setup was complicated, and we were highly restricted by the existing GitHub Actions available. This is where we saw potential in nix, a pure, functional package manager that solves a lot of these issues.

Some of the major issues we noticed were,

  • caching is not fully supported yet, so we were losing not only build minutes but also precious developer time
  • ARM builds aren’t currently supported either, even though ARM instances (at least on AWS) cost almost 50% less than their x64 counterparts

One option to tackle this was to write and maintain a Jenkins job on an ARM instance, along with a Dockerfile and some custom caching implementation. But that would be one more piece of the puzzle to maintain, and a cross-team dependency on DevOps to manage and update it.

The other potential solution was using nix to package and build the docker image itself. With a nix flake, we were able to solve both of our pain points at once. All elixir dependencies were individually cached on the GitHub Actions cache layer (thanks to Determinate Systems’ Magic Nix Cache Action), reducing CI times by 50–60%. It also enabled us to build for the ARM architecture (granted, we did have to use a self-hosted ARM runner, but we already had one set up for building Go/Node.js services on ARM).

The flake.nix file

If you’ve ever joined an existing project, you know the pain of setting up its development environment. This is especially true for teams working on a single project: some members might have an older version of Elixir/OTP installed on their system, causing all sorts of issues when the project targets a newer version.

We decided to embrace nix flakes and pinned our entire development environment to the project repository itself.

devShells."aarch64-linux".default = pkgs.mkShell {
packages = with pkgs; [ elixir_1_15 mix2nix ];
};

The only requirement is to have nix installed; you can then get a shell with all the required tooling by running nix develop. As of now, flakes are experimental, but a large part of the nix ecosystem is already using them, and we love to play around with tooling that helps our organization.
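For instance, entering the dev shell gives you the pinned toolchain regardless of what’s on the host (a quick sanity check; output is illustrative):

$ nix develop
$ elixir --version   # the flake’s Elixir 1.15, not the host install
$ which mix2nix      # resolves to a /nix/store/... path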

mix2nix

mix2nix is a utility that generates nix expressions from a mix.lock file, making the dependencies extremely easy for mixRelease to consume, as shown below. There are some drawbacks to it currently: we have to manually override and add any git dependencies (and their mix dependencies). We maintain a deps.nix file in the foo/foobar elixir repository, which makes it easy for parent modules to consume the library via nix.

There are some caveats to using mix2nix, but it does give us dependency-level caching, which significantly improves our build speed.

Running mix2nix > deps.nix generates nix expressions from the current state of the mix.lock file:

packages = with beamPackages; with self; {
  amqp = buildMix rec {
    name = "amqp";
    version = "3.2.0";

    src = fetchHex {
      pkg = "amqp";
      version = "${version}";
      sha256 = "1439570336df6e79000239938fb055a0944dc9a768b4dec0af1375404508a014";
    };

    beamDeps = [ amqp_client ];
  };

  amqp_client = buildRebar3 rec {
    name = "amqp_client";
    version = "3.9.29";

    src = fetchHex {
      pkg = "amqp_client";
      version = "${version}";
      sha256 = "75b4f3c26d794fcafc82ceb9e245b3dca958a3a5fa60ff9ce26c879397fe77a6";
    };

    beamDeps = [ rabbit_common ];
  };

  ...
}

For git dependencies, it’s a bit more complicated, as generating nix expressions for private/git dependencies is not supported yet. While the mix.lock file contains the commit hash of each git dependency, it does not list their transitive dependencies. This can be handled by adding a deps.nix to the upstream dependency itself (in our case, we had a private git dependency with common elixir modules shared across the organization). Once added, it can be referenced using fetchFromGitHub/fetchTree.
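For context, a git dependency entry in mix.lock looks something like this (hypothetical entry reusing the bar/foobar names from later in this post); notice that, unlike hex entries, it carries no list of its own dependencies:

"foobar": {:git, "https://github.com/bar/foobar.git", "4e63e01ffcdfe5f4ca135fe84886c795e96259ae3", []},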

Building with nix

All the required puzzle pieces are now in place for us to build our application with nix!

nixpkgs conveniently ships with a set of beamPackages, which contains various tooling for building BEAM/Elixir-based applications. We’re specifically interested in mixRelease.

packages."aarch64-linux".default = beamPackages.mixRelease {
pname = name;
inherit version;
src = ./.;
removeCookie = false;
mixNixDeps = with pkgs; import ./deps.nix {
inherit lib beamPackages;
overrides = (self: super: {
foo = beamPackages.buildMix {
name = "foobar";
version = "1.0.0";
src = fetchTree {
type = "github";
owner = "bar";
repo = "foobar";
rev = "4e63e01ffcdfe5f4ca135fe84886c795e96259ae3";
};

beamDeps = [ ... ];
};
});
};
};

A lot of it is self-explanatory (name, version, src). removeCookie deserves a mention: this is the Erlang release cookie, and when building with nix, the default behavior is to remove the generated cookie to guarantee deterministic output.
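Since we set removeCookie = false, the cookie generated at build time stays in the release. Alternatively, mix releases let you supply one at runtime through the standard RELEASE_COOKIE environment variable (illustrative value; the release is built in the next section):

RELEASE_COOKIE=my-secret-cookie ./result/bin/foobar start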

Notice that we provide the generated deps.nix to mixNixDeps, essentially giving nix the context of all the individual dependencies. The snippet above also demonstrates using overrides to add extra dependencies manually.

With everything set up, all we had to do was,

nix build

This produces a result symlink (as all nix builds do), and we can start our application via ./result/bin/foobar start, just like we would if we had built it using mix release.
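Putting it together on any machine with nix installed (illustrative; the binary name follows pname):

$ nix build
$ ./result/bin/foobar start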

Dockerizing with nix

While this was enough to build the application, we wanted to go a little further and use nix’s dockerTools to build OCI images as well.

let
  name = "foobar";
  entrypointSh = pkgs.writeShellScript "entrypoint" ''
    /bin/${name} eval "FooBarService.Release.migrate"
    /bin/${name} start
  '';
in {

  packages."aarch64-linux" = {

    svc = ...;

    docker = pkgs.dockerTools.streamLayeredImage {
      inherit name;
      maxLayers = 10;
      contents = [ self.packages."aarch64-linux".svc ];
      tag = "local";
      config.Cmd = [ "${entrypointSh}" ];
    };
  };

  apps."aarch64-linux" = {
    docker = {
      type = "app";
      program = "${self.packages."aarch64-linux".docker}";
    };
  };
}

There’s no separate Dockerfile to maintain; everything is consolidated neatly into the flake. More importantly, the generated images do not contain anything extra, since nix packages the closure of derivations required to run the service and nothing else!
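For local testing, the output of streamLayeredImage is a script that streams the image tarball to stdout, so it can be piped straight into the Docker daemon (a quick sketch):

$ nix run .#docker | docker load
$ docker run --rm foobar:local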

There are some drawbacks we noticed, though. The final docker image is over 200 MB in size, where our previous Dockerfile-based image was under 30 MB. While 200 MB is not a lot in today’s world, there is definitely room for improvement here.

CI

We use GitHub Actions and self-hosted runners to build our final docker images.

- run: |
    nix develop .#ci
- run: |
    nix run --access-tokens github.com=${{ secrets.GITHUB_TOKEN }} .#docker \
      | gzip --fast \
      | skopeo --insecure-policy copy docker-archive:/dev/stdin docker://organization/foobar:${{ github.run_id }}

To break it down,

  1. nix develop .#ci drops us into a shell containing the tools required to push to Docker Hub (or any other container registry); this shell is also described in the flake file under devShells (see the sketch after this list)
  2. nix run .#docker builds and runs a small program that streams the binary data (OCI image layers) constituting the overall OCI image
  3. skopeo copies the (gzip-compressed) image data to Docker Hub (in our case)
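The ci shell referenced above might look something like this in the flake (a sketch; we’re assuming skopeo and gzip are the only extra tools needed):

devShells."aarch64-linux".ci = pkgs.mkShell {
  packages = with pkgs; [ skopeo gzip ];
};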

Caching

The way we built our elixir application separates each of our mix dependencies into its own nix store path, allowing the built dependencies to be cached individually.
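You can see this separation by listing the closure of the release derivation; every mix dependency shows up as its own store path (illustrative output; hashes and exact derivation names will differ):

$ nix path-info --derivation --recursive .#default | grep amqp
/nix/store/<hash>-amqp-3.2.0.drv
/nix/store/<hash>-amqp_client-3.9.29.drv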

The Magic Nix Cache Action makes this effortless: it auto-magically figures out the closure of dependencies and caches them on the GitHub Actions cache layer, removing the need to set up any other caching platform!
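Hooking it into a workflow only takes a couple of steps before any nix commands run; a minimal sketch using Determinate Systems’ published actions (pin versions as appropriate, and note that self-hosted runners may already have nix installed):

- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- run: nix build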

Benchmarks: building using mix release (top) and using nix build (bottom)

The difference was clear: roughly a 75% decrease in build times. Builds used to take 8–10 minutes, and that was with some basic caching via actions/cache. With nix build, it’s down to just 2 minutes, with still room to improve!

We’ve since worked on improving the re-usability of this setup across multiple projects. The current state of our flake file looks quite different, but the underlying principle remains the same and is captured by this blog post.

A special thanks to Norbert Melzer (aka NobbZ) for actively (and passively, via tons of forum replies on both the Elixir Forum and the NixOS Discourse) helping out with all my nix and elixir queries!

Stay tuned for part 2 where we attempt to improve the caching efficiency even further.
