Guide to multi-architecture Docker images

Siim Talts
Scoro Product Engineering
7 min read · Oct 13, 2022

Introduction

Apple’s ARM-based M1 chip was released in the autumn of 2020. There was a lot of buzz around its performance and improved battery life, as well as around developers adopting ARM CPUs for their everyday tasks. Now, two years later, we can safely say that ARM is a viable option for mobile and desktop devices, and it’s here to stay.

We at Scoro have been running M1 machines and experimenting with them pretty much from day one, and for us, the transition to a new CPU architecture has been quite smooth.

Nevertheless, it has created a bit of a challenge when we need to work with Docker images because now both x86 and ARM-based machines need to be supported. In this article, I will go over ways to achieve multi-architecture Docker builds and provide step-by-step instructions on how to conquer all types of machines, regardless of their choice of CPU vendors.

The new challenge

While it’s been interesting to follow this change in terms of new and updated technology, it has also given us developers some extra things to worry about, as if there was a shortage of those in the first place… In addition to keeping up with changes in front-end frameworks and best practices for achieving zero-downtime or perfectly scalable deployments, one now also needs to consider which CPU architecture to rely on. In the good old days (read: about two years ago), the x86 instruction set ruled as king. As a result, many of the dependencies used for running web services were built around it, neglecting the fact that there are actually quite a few alternatives to x86, such as ARM. One of the resulting challenges is related to Docker images and, more specifically, Dockerized development environments. With developers using both classic x86 CPUs and Apple M1 chips, the environment needs to support both.

Luckily, Apple did think about the fact that x86 is still very much in use and created a wonderful application called Rosetta. In short, it is an emulator that allows you to run good old x86 applications on Apple’s ARM chips. It’s not enabled by default, but you can install it from the terminal with:

$ softwareupdate --install-rosetta

After that, emulating x86 applications (including Docker images) is enabled.
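
As a quick sanity check (the image and command here are just examples), you can explicitly request the x86 variant of an image; uname -m inside the container should then report an x86 CPU even on an M1 machine:

$ docker run --rm --platform=linux/amd64 alpine uname -m
x86_64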

But this is only a clever workaround, not a cure-all. Keep reading to discover a smarter way to run Docker images on ARM-based Macs.

Multi-architecture builds with Buildx

To make proper multi-architecture Docker images, we need to use a Docker plugin called Buildx. It comes pre-installed with newer versions of Docker, but you can check whether it’s available with the following command:

$ docker buildx version

If this returns a version number, you’re all set and ready to start. If it returns an error, follow the steps described in the Docker manual to install Buildx for your machine.
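
For reference, a successful check prints the plugin version and a build commit along these lines (the exact values will differ on your machine):

$ docker buildx version
github.com/docker/buildx v0.9.1 ed00243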

To see all the currently available builders, run the following command:

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
default * docker
default default running linux/amd64, linux/386

This output shows that the default builder supports the linux/amd64 and linux/386 platforms, meaning the good old x86. Let’s assume we have the following Dockerfile:

FROM alpine
RUN echo "Building a new image!"

Now, if we want to build for both x86 and ARM, we need to define the target platforms. We’ll use linux/amd64 for x86 and linux/arm64 for ARM. Refer to the Docker documentation for an overview of all the supported platforms.
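
If you are unsure what your local Docker daemon reports as its native platform, you can ask it directly; a quick check using Docker’s built-in Go templating:

$ docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
linux/amd64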

There are multiple ways to use Buildx to build multi-platform images, but the easiest one is as follows:

$ docker buildx create --use
$ docker buildx build --platform=linux/arm64,linux/amd64 .

The first command adds a new builder and switches to it; the second builds our image for each of the listed platforms. The --platform flag accepts a comma-separated list of all the target platforms you want to build for. If we now list all the available builders, the output should look something like this:

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
exciting_banach * docker-container
exciting_banach0 unix:///var/run/docker.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386, linux/arm64, linux/arm/v7, linux/arm/v6
default docker
default default running linux/amd64, linux/386

The builder called exciting_banach now has both x86 and ARM platforms available. The multi-platform build is also reflected in the build output:

$ docker buildx build --platform=linux/arm64,linux/amd64 .
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 7.9s (9/9) FINISHED
=> [internal] booting buildkit 3.5s
=> => pulling image moby/buildkit:buildx-stable-1 1.8s
=> => creating container buildx_buildkit_exciting_banach0 1.7s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 82B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [linux/amd64 internal] load metadata for docker.io/library/alpine:latest 3.0s
=> [linux/arm64 internal] load metadata for docker.io/library/alpine:latest 3.0s
=> [linux/amd64 1/2] FROM docker.io/library/alpine@sha256:0x0 0.8s
=> => resolve docker.io/library/alpine@sha256:0x0 0.0s
=> => sha256:0x0 2.80MB / 2.80MB 0.5s
=> => extracting sha256:0x0 0.2s
=> [linux/arm64 1/2] FROM docker.io/library/alpine@sha256:0x0 0.6s
=> => resolve docker.io/library/alpine@sha256:0x0 0.0s
=> => sha256:0x0 2.69MB / 2.69MB 0.4s
=> => extracting sha256:0x0 0.1s
=> [linux/arm64 2/2] RUN echo "Building a new image!" 0.5s
=> [linux/amd64 2/2] RUN echo "Building a new image!"

Notice how all the steps are doubled.
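
One caveat: the docker-container driver cannot load a multi-platform result into the local image store, which is what the warning at the top of the output is about. If you want to test one platform’s image locally, build just that platform and load it (the tag here is only an example):

$ docker buildx build --platform=linux/arm64 --load -t alpine-test:arm64 .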

To improve things further, we can define the “build platform” in our Dockerfile. This lets us use assets designed specifically for either x86 or ARM, or cross-compile if the language supports it. For this, we have the BUILDPLATFORM and TARGETPLATFORM arguments and the --platform flag available:

FROM --platform=$BUILDPLATFORM alpine
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM"

If we build now, we can see the following output:

$ docker buildx build --platform=linux/arm64,linux/amd64 .
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 2.6s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 186B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [linux/amd64 internal] load metadata for docker.io/library/alpine:latest 1.9s
=> CACHED [linux/amd64 1/2] FROM docker.io/library/alpine@sha256:0x0 0.0s
=> => resolve docker.io/library/alpine@sha256:0x0 0.0s
=> [linux/amd64 2/2] RUN echo "I am running on linux/amd64, building for linux/amd64" 0.6s
=> [linux/amd64->arm64 2/2] RUN echo "I am running on linux/amd64, building for linux/arm64"
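
This pattern really pays off with languages that can cross-compile. Below is a sketch for a hypothetical Go service: the compile stage runs natively on the build platform and produces a binary for the target platform, avoiding slow emulation. TARGETOS and TARGETARCH are further automatic build arguments provided by BuildKit:

# Compile stage runs natively on whatever machine is doing the build
FROM --platform=$BUILDPLATFORM golang:1.19-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
# Go cross-compiles without emulation; just point it at the target platform
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

# Final stage is pulled for the target platform
FROM alpine
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]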

Now we can push the image to our Docker image registry, and Docker will take care of serving the correct variant to each platform. You can actually combine the build and push commands into one:

$ docker buildx build \
    --tag foo/alpine:bar \
    --push \
    --platform=linux/arm64,linux/amd64 .

Production-grade workflow

If we combine multi-architecture Docker builds with CI/CD pipelines, we get a truly production-grade workflow. We use GitLab at Scoro, so the following example is for GitLab pipelines, but the same principles apply to any CI/CD platform.

Let’s use the same Dockerfile as in the previous example. We could have a GitLab CI pipeline file like this:

.gitlab-ci.yml

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

stages:
  - build
  - release

Build alpine test:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  script:
    - docker buildx create --use
    - docker buildx build -t $CI_REGISTRY_IMAGE/alpine:$CI_COMMIT_REF_NAME --push --platform=linux/arm64,linux/amd64,windows/amd64 .
  only:
    - merge_requests

Release alpine:
  image: docker:latest
  services:
    - docker:dind
  stage: release
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  script:
    - docker buildx create --use
    - docker buildx build -t $CI_REGISTRY_IMAGE/alpine:latest --push --platform=linux/arm64,linux/amd64,windows/amd64 .
  only:
    - master
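
One caveat worth mentioning: building ARM images on a plain x86 runner requires QEMU emulators to be registered in the host kernel. Docker Desktop ships these out of the box, but a bare docker:dind runner may not. If the ARM build step fails with an exec format error, a common fix (assuming a privileged runner; this is not something every pipeline needs) is one extra line in before_script:

  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker run --privileged --rm tonistiigi/binfmt --install all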

This pipeline can be used with a regular Gitflow branching model, where you create a Merge Request / Pull Request from your feature branch to the master branch, and once things are merged, they are considered released. So the build step runs on all merge request commits, while the release step runs only on master after merging. After we merge to master and release a new alpine:latest image, we can use the docker manifest inspect command on the image to validate whether it is truly a multi-architecture build:

$ docker manifest inspect foo.bar.com/testing-docker/alpine:latest
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 734,
      "digest": "sha256:0x0",
      "platform": {
        "architecture": "arm64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 734,
      "digest": "sha256:0x0",
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 734,
      "digest": "sha256:0x0",
      "platform": {
        "architecture": "amd64",
        "os": "windows"
      }
    }
  ]
}

We can see that the image manifest contains records for three architectures, so our multi-architecture image publishing pipeline has worked successfully.
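
As an aside, Buildx ships an inspection command of its own that prints the same manifest list in a slightly more readable form:

$ docker buildx imagetools inspect foo.bar.com/testing-docker/alpine:latest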

Summary

While multi-architecture Docker images aren’t yet a “must-have” part of the image publishing pipeline, they aren’t a very complex thing to add either. Depending on the application, using the image variant that matches the CPU can result in a significant performance gain. One such case is documented in this article:

We saw the value in having a fully functioning Podman example in order to compare to Docker Desktop, so we built Kratix as an ARM64 image and saw significant performance improvement on Podman.

If you like Docker and you enjoy getting the best out of the tools that a developer needs daily, make sure to check out our open positions at https://www.scoro.com/careers/.
