At Memgraph, we were always building packages for the x86_64 architecture, but when Apple’s M1 chip became prevalent in new laptops, we had to adapt. An article on how we adapted our Debian package builds to ARM64, the architecture of the M1 chips, would probably not be a hit, so let’s discuss instead how we accomplished supporting Docker images on different architectures.
It is important to mention that we currently build two separate DEB packages: one for x86_64 and the other for ARM64.
How to build multi-architecture Docker images?
Before we start with the heavy lifting, please stay calm and trust me: it’s not as hard as all the fancy words make it out to be!
I’ve already mentioned Docker, right? Well, you definitely need to have it installed to be able to build Docker images. Everything I’ll be describing was done on Ubuntu 20.04, so if you are using some other distribution you might have to adapt a bit. Make sure your Docker version is at least 19.03.
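You can check which version you have installed with:
docker --version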
First, enable the Docker CLI extension buildx, which will allow you to build images for multiple architectures. You can do that by following these steps:
Download the latest binary from their GitHub repository.
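For example, assuming the v0.8.2 release used throughout this article (check the releases page for the current version and asset name), the download could look like this:
curl -LO https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64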
Move the downloaded binary (mine is buildx-v0.8.2.linux-amd64) into the path ~/.docker/cli-plugins like this:
mkdir -p ~/.docker/cli-plugins
mv buildx-v0.8.2.linux-amd64 ~/.docker/cli-plugins/
Note: Make sure the binary name is docker-buildx. You can rename it, or link the one you downloaded to docker-buildx in the same directory by running:
ln -s ~/.docker/cli-plugins/buildx-v0.8.2.linux-amd64 ~/.docker/cli-plugins/docker-buildx
Change the file’s permissions so it is executable, by running:
chmod 755 ~/.docker/cli-plugins/docker-buildx
Now install the plugin by running:
~/.docker/cli-plugins/docker-buildx install
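To double-check that Docker picks up the plugin, you can run:
docker buildx version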
Voila, the buildx plugin is successfully installed. To check which architectures are now supported, run:
docker buildx inspect
And the output should look something like this:
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Platforms: linux/amd64, linux/386
But wait, these are only two architectures! Well, I did promise multi-architecture, just not a bunch of them. Just kidding. What you need to do now is install QEMU emulation support, which buildx will use to produce images for other architectures.
The emulators I’ve used come from the tonistiigi/binfmt repository, and to install them, just run the command mentioned in the README docs:
docker run --privileged --rm tonistiigi/binfmt --install all
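If you are curious, you can also check that the QEMU handlers were registered with the kernel’s binfmt_misc mechanism (the exact list of entries depends on what was installed):
ls /proc/sys/fs/binfmt_misc/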
And if you run the command docker buildx inspect now, the output should look like this:
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Platforms: linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/arm/v7, linux/arm/v6
And here they are, the two architectures we are interested in: linux/amd64 and linux/arm64. Buildx automatically recognizes the newly installed emulators and can use them to build Docker images. We also need to create and select a builder instance with the following command:
docker buildx create --name super_builder --use
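You can verify that the new builder exists and is currently selected (marked with an asterisk) by running:
docker buildx ls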
And to test out our new builder, let’s build Memgraph for both architectures using this Dockerfile (you can see the same Dockerfile in our repository):
FROM debian:bullseye
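# BINARY_NAME and EXTENSION are passed in via --build-arg;
# TARGETARCH is set automatically by buildx (amd64 or arm64).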
ARG BINARY_NAME
ARG EXTENSION
ARG TARGETARCH
RUN apt-get update && apt-get install -y \
openssl libcurl4 libssl1.1 libseccomp2 python3 libpython3.9 python3-pip \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN pip3 install networkx==2.4 numpy==1.21.4 scipy==1.7.3
COPY "${BINARY_NAME}${TARGETARCH}.${EXTENSION}" /
# Install memgraph package
RUN dpkg -i "${BINARY_NAME}${TARGETARCH}.${EXTENSION}"
# Memgraph listens for Bolt Protocol on this port by default.
EXPOSE 7687
# Snapshots and logging volumes
VOLUME /var/log/memgraph
VOLUME /var/lib/memgraph
# Configuration volume
VOLUME /etc/memgraph
USER memgraph
WORKDIR /usr/lib/memgraph
ENTRYPOINT ["/usr/lib/memgraph/memgraph"]
CMD [""]
The file looks like a standard Dockerfile, but the power of buildx is that it recognizes which architecture it is building for, and we can use that knowledge to our advantage. For every platform we specify in the build command, buildx will generate a different value for the TARGETARCH argument, which we use in the Dockerfile to copy the Debian package with the right architecture into our image and install it. You can check out the other buildx variables here.
The only things we are missing are the two Debian packages for AMD64 and ARM64, and we can get those easily by running:
curl -L https://download.memgraph.com/memgraph/v2.3.0/debian-11-aarch64/memgraph_2.3.0-1_arm64.deb > memgraph-arm64.deb
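That command fetches the ARM64 package. The AMD64 package should live at the analogous URL (the exact path below is assumed by symmetry, so double-check it against the download page); the important part is that the output file name matches what the Dockerfile expects, BINARY_NAME followed by TARGETARCH:
curl -L https://download.memgraph.com/memgraph/v2.3.0/debian-11/memgraph_2.3.0-1_amd64.deb > memgraph-amd64.deb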
And now for the grand finale, let’s build those images:
docker buildx build --build-arg BINARY_NAME="memgraph-" --build-arg EXTENSION="deb" --platform linux/amd64,linux/arm64 --tag test/memgraph:2.3.0 --output type=local,dest=$PWD/images .
In due time, you should see the images for both AMD64 and ARM64 in the images directory. Well, that wasn’t so difficult, right?
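As a side note, with the local output type buildx writes one subdirectory per target platform, so the directory should look roughly like this (names may vary across buildx versions):
ls ./images
# linux_amd64  linux_arm64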
Do it in GitHub Actions!
You know what they say: why spend 10 minutes doing something when you can spend 8 hours automating it? So let’s do just that and automate the build process of the Docker images. Let’s take a look at Memgraph’s release_docker.yaml workflow to see how it’s done. The setup is pretty standard, so let’s go through the steps to see what’s happening:
- Check out the repository.
- Use the already existing GitHub Action for setting up the QEMU emulator.
- Use another GitHub Action for setting up the Docker buildx CLI extension.
- Log in to Docker Hub using credentials that need to be set up in GitHub secrets, to enable the upload of the images directly to Docker Hub.
- Download the Memgraph AMD64 and ARM64 Debian packages using curl.
- Finally, run the slightly modified docker buildx command, which now pushes straight to the registry by replacing the --output flag with --push (see the command sketch below).

Wow, this was actually faster than what we did locally.
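For reference, the final build-and-push command looks roughly like this (the image tag is illustrative; the real workflow derives it from the release version):
docker buildx build --build-arg BINARY_NAME="memgraph-" --build-arg EXTENSION="deb" --platform linux/amd64,linux/arm64 --tag memgraph/memgraph:2.3.0 --push .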
Which architecture naming convention to use?
And finally, to answer the most troubling question of them all — the one that has caused so many Slack threads, disrupted so many casual coffee breaks, and destroyed friendships — what naming convention to use? Well, as you can see, it’s complicated.
So our Debian packages all have the amd64 architecture, which some also call x86_64; our RPM packages of the same architecture, for example, carry the x86_64 suffix. And for the ARM64 architecture, we were conflicted about whether to use the arm64 or the aarch64 naming for our Debian packages. After reading through some documentation, and then even more documentation on the naming conventions of Debian and RPM packages, we found that Debian packages usually have an arm64 suffix while RPM packages have an aarch64 suffix. And since our Docker image uses a Debian 11 base image, we went with the arm64 suffix.
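Both names for the same silicon show up on a single Debian-based machine, which is where the confusion comes from:
uname -m                   # the kernel calls it x86_64
dpkg --print-architecture  # Debian calls it amd64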
Therefore, we are using the x86_64 and aarch64 suffixes for our RPM packages, and amd64 and arm64 for our Debian packages. The good thing is that Docker follows the same convention for Docker images that we use for Debian packages, and since we only have Docker images based on Debian 11, it all fits so nicely. Friendships restored!
Conclusion
After successfully going through these instructions, you are probably left wondering why anyone would do this locally when it’s so much easier to do it with GitHub Actions. I would say the biggest benefit of going through it on your own is getting more familiar with the technology and the options you can use in the CI. But the go-to approach when releasing a product for multiple architectures should definitely be a CI solution, in this case GitHub Actions.
And now we can relax and be proud of the fact that our users, when pulling images from the Docker registry, will automatically get the right image for whichever architecture they are currently using.