Published in nttlabs
Kubernetes driver for Docker BuildX

Docker BuildX, the extended version of docker build CLI, now supports distributed image building using Kubernetes!

I talked about this at Docker Mini Theater, held in Docker’s sponsor booth at KubeCon US 2019. Thanks to Tibor Vass and the team for giving me this opportunity!

Docker Mini Theater, KubeCon US 2019 (photo by Kohei Tokunaga)

Wait, what is BuildX? Was BuildKit ditched?

No, BuildKit is not ditched. Rather, it is very much alive. BuildX and BuildKit are different kinds of components: BuildX provides a human-friendly CLI frontend on top of the BuildKit backend, which had lacked a human-friendly, full-featured CLI.

The standard docker build has also supported a BuildKit mode ( DOCKER_BUILDKIT=1 ) since Docker 18.06, but the BuildKit mode of docker build doesn’t implement some important features of BuildKit, such as “max”-mode caching, multi-arch builds, and distributed builds.

BuildX supports almost all features of BuildKit, while preserving the same user experience as docker build .

Kubernetes driver for Docker BuildX

BuildX now supports building images using BuildKit pods on a Kubernetes cluster, but why do we want to build images on Kubernetes?

There are two different motivations. The first is executing CI/CD jobs across several Kubernetes nodes, typically with Tekton, Argo, or Jenkins X. The second is improving the developer experience.

Slide 5

The Kubernetes integration for BuildX mainly focuses on the latter. When you write code on a laptop, building images can be painful because a laptop typically has relatively limited CPU, RAM, network connectivity, and power supply. BuildX allows offloading your build workload to a Kubernetes cluster on the cloud with richer resources.

For CI/CD, Tekton templates for BuildKit might be a better fit: https://github.com/tektoncd/catalog/tree/master/buildkit-daemonless

BuildKit can also be deployed on a Kubernetes cluster directly, without using BuildX or Tekton: https://github.com/moby/buildkit/tree/master/examples/kubernetes

Getting Started

The Kubernetes driver for BuildX is already available on git master, but not yet as an official binary release as of December 2019. So, currently you need to compile and install it from source.

$ git clone https://github.com/docker/buildx.git
$ cd buildx
$ make binaries install

The binary is installed as ~/.docker/cli-plugins/docker-buildx . If you are using Docker CLI 19.03 (with experimental mode enabled), the binary is recognized as a CLI plugin and can be invoked as the docker buildx command. The binary can also be executed directly, without the docker CLI.
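For example, once the binary is in place, either invocation style should work; the version subcommand is just an easy smoke test:

```
$ docker buildx version
$ ~/.docker/cli-plugins/docker-buildx version
```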

To get started with docker buildx , you need to deploy BuildKit pods on the cluster using the docker buildx create command. There is no need to write any YAML.

$ docker buildx create \
    --driver kubernetes \
    --driver-opt replicas=3 \
    --use
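For reference, here is a rough sketch of the kind of Deployment that buildx generates under the hood. The name, labels, and image tag below are illustrative only; the actual objects are generated by buildx and may differ:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkit            # illustrative; buildx derives the actual name from the builder
spec:
  replicas: 3
  selector:
    matchLabels:
      app: buildkit
  template:
    metadata:
      labels:
        app: buildkit
    spec:
      containers:
      - name: buildkitd
        image: moby/buildkit:master
        securityContext:
          privileged: true  # the default; rootless mode is available via --driver-opt rootless=true
```

The point of docker buildx create is that you never have to write or maintain this YAML yourself.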

In the above example, three replicas of BuildKit pods are created. By default, the replica to use is chosen using the hash of the path of the Dockerfile (note: not the hash of the content of the Dockerfile). This “sticky” load-balancing mode allows reusing the cache preserved in a pod volume for the same build. The stickiness can be disabled by specifying --driver-opt loadbalance=random .
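To illustrate the idea of path-based stickiness (a hypothetical sketch, not buildx’s actual hash algorithm; the path and replica count here are made up):

```shell
# Derive a stable replica index from the Dockerfile path (hypothetical sketch).
# The same path always maps to the same replica, so that replica's build cache
# is reused; changing the file's *content* does not change the chosen replica.
path="/home/user/project/Dockerfile"
digest=$(printf '%s' "$path" | sha256sum | awk '{print $1}')
replica=$(( 16#${digest:0:8} % 3 ))   # 3 = number of replicas
echo "replica: $replica"
```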

Slide 13

The CLI of docker buildx is almost the same as docker build , but you need to specify the --load flag in order to load the build artifact from the cluster into the local Docker daemon.

$ docker buildx build -t foo --load .

To push the image to the registry directly, use the --push flag instead of --load .

$ docker buildx build -t example.com/foo --push .

You can also push the cache along with the image.

$ docker buildx build \
    -t example.com/foo \
    --push \
    --cache-to=type=inline \
    --cache-from=type=registry,ref=example.com/foo \
    .

The image (example.com/foo) and the cache (example.com/foo:cache) can also be pushed separately. A separate cache supports “max” mode, which is useful for complex multi-stage Dockerfiles.

$ docker buildx build \
    -t example.com/foo \
    --push \
    --cache-to=type=registry,ref=example.com/foo:cache,mode=max \
    --cache-from=type=registry,ref=example.com/foo:cache \
    .
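When you are done, the builder instance, along with the BuildKit pods it created on the cluster, can be removed with docker buildx rm :

```
$ docker buildx rm
```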

See https://github.com/docker/buildx for further information.

We’re hiring!

NTT is looking for engineers to work in open source communities such as the Kubernetes and Docker projects. If you wish to work on such projects, please visit our recruitment page.

To learn more about NTT’s contributions to open source projects, please visit our Software Innovation Center page. We have many maintainers and contributors in several open source projects.

Our offices are located in the downtown area of Tokyo (Tamachi, Shinagawa) and Musashino.

Akihiro Suda

A maintainer of Moby (dockerd), containerd, and runc. https://github.com/AkihiroSuda
