Building Docker Images Without Docker

Sebastien Goasguen
Bitnami Perspectives
4 min read · Jul 17, 2017

Once upon a time, say ~10 years ago, some of us dealing with virtual machines used tools like kpartx or guestfs to access a virtual machine's root filesystem from its disk image. This was so much fun!

This is a good reminder that "everything in Linux is a file" and that, at the end of the day, even a Docker image is a set of files. Building a Docker image is really all about building a root filesystem that a process will use. So there should be a relatively simple way to build a Docker image without having to rely on the Docker daemon!!! Shouldn't there be?

There are approaches like source-to-image, but recently I have been looking at Bazel and its Docker rules.

Bazel, Basel or Basil

Bazel is a build system open sourced in 2015 by Google. It is the open-source version of their internal Blaze system, with just a letter permutation in the name. I have no clue how to pronounce it properly; maybe it is Basel like the Swiss town, or maybe it is Basil like the culinary plant.

Bazel is used in Kubernetes and TensorFlow, and we are seeing it pop up in more and more projects. So no more ./configure, make, make install, people; get with Bazel, it is 2017. Plus you want the speed, the cross-language support, the reproducibility and the scale.

The docs are good and installation is straightforward: get openjdk, configure the Bazel apt repository, and then

apt-get install bazel
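At the time of writing, the full sequence on Debian/Ubuntu looks roughly like this (check the Bazel installation docs for the current repository and key URLs, as they may have changed):

# add the Bazel apt repository and its signing key, then install (Debian/Ubuntu)
sudo apt-get install openjdk-8-jdk
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install bazel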

Then you need a WORKSPACE and a BUILD file; just follow the Getting Started guide. In every workspace it seems that you get a Bazel build server (yes, there is a server); the server caches build artifacts and can reduce build time. The Bazel client just uses the server to do the build. Even though I did everything locally, I am sure you can configure your client to talk to a remote build server.

Building Docker Images

This is all cool and dandy, but what really triggered this post is the availability of some new Bazel build rules for manipulating Docker images. These rules are not part of the standard Bazel release, but you can easily load them into your workspace.

The main concept is that you write a BUILD file that defines your Docker image and then let Bazel construct the filesystem of the image. All of this without the Docker daemon.

Which means…No need for Docker in Docker to do builds.

I made some basic images in https://github.com/bitnami/bazel_containers. You will see that my WORKSPACE starts by loading the Docker rules and then pulls a Docker base image that will be used later (you can technically pull from any registry; here I pull bitnami/minideb-extras:jessie from Docker Hub).
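A minimal WORKSPACE along those lines might look like the sketch below, based on the rules_docker project as it stood in mid-2017 (the pinned tag and the external repository name minideb_extras are illustrative):

# WORKSPACE: load the Docker rules and pull a base image (names are illustrative)
git_repository(
    name = "io_bazel_rules_docker",
    remote = "https://github.com/bazelbuild/rules_docker.git",
    tag = "v0.1.0",  # pin the release or commit you are actually using
)

load(
    "@io_bazel_rules_docker//docker:docker.bzl",
    "docker_repositories",
    "docker_pull",
)

docker_repositories()

# pull the base image from Docker Hub; it can be referenced later as @minideb_extras//image
docker_pull(
    name = "minideb_extras",
    registry = "index.docker.io",
    repository = "bitnami/minideb-extras",
    tag = "jessie",
)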

With that WORKSPACE configured, you can now write your BUILD file. The BUILD file will look almost exactly like a Dockerfile with curly braces and commas :) The biggest difference is that the base image is a reference to an image pulled in your WORKSPACE setup, and that any packages installed in the image need to come from a Bazel rule. Plus you can use any other Bazel rule in that same BUILD file and reference its output inside the docker_build rule.
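As an illustration, a BUILD file for an nginx image could look something like this sketch (@nginx_deb is a hypothetical package target; @glibc is the http_file shown a bit further down):

# BUILD: describe the image, roughly the way a Dockerfile would
load("@io_bazel_rules_docker//docker:docker.bzl", "docker_build")

docker_build(
    name = "nginx",
    base = "@minideb_extras//image",   # the image pulled in the WORKSPACE
    debs = [
        "@glibc//file",      # Debian packages fetched with http_file in the WORKSPACE
        "@nginx_deb//file",  # hypothetical nginx .deb, declared the same way
    ],
    ports = ["80"],
    cmd = ["nginx", "-g", "daemon off;"],
)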

This is a big limitation right now: we can only install Debian packages, and their post-installation configuration is still a bit of a mystery to me, since Bazel simply grabs the Debian packages (declared in the WORKSPACE) and then expands them into the filesystem that will make up the Docker image.

For instance, the glibc package used in the BUILD file above is pulled in the WORKSPACE by an http_file rule:

http_file(
    name = "glibc",
    sha256 = "bdf12aa461f2960251292c9dbfa2702d65105555b12cb36c6ac9bf8bea10b382",
    url = "http://deb.debian.org/debian/pool/main/g/glibc/libc6_2.19-18+deb8u9_amd64.deb",
)

My README is pretty good :) so just follow it. Do the build with bazel build, and you will then be able to load the image into your local Docker for testing. Indeed, a script is provided as output of the build to load the filesystem as a Docker image.
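Concretely, assuming the nginx target from the sketch above, the local workflow is something like:

# build the image filesystem, no Docker daemon involved
bazel build //nginx:nginx
# the generated script loads the result into your local Docker daemon for testing
bazel-bin/nginx/nginx
docker images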

If you want to push the image, you can do so with bazel run //nginx:push_nginx, but it might be a bit tricky to sort out the credentials. This small Authorization section is key.
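The push_nginx target is a docker_push rule; a sketch of what it could look like (the registry and repository below are placeholders for your own):

load("@io_bazel_rules_docker//docker:docker.bzl", "docker_push")

# `bazel run //nginx:push_nginx` pushes the image built by //nginx:nginx
docker_push(
    name = "push_nginx",
    image = ":nginx",
    registry = "gcr.io",
    repository = "my-project/nginx",  # placeholder, use your own repository
    tag = "latest",
)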

So there, you can build a Docker image without Docker. Now the possibilities become quite interesting. Here is one: run your Docker builds as Kubernetes Jobs.

Docker Builds As Kubernetes Jobs

I have created a simple Dockerfile to build a Docker image that contains Bazel. You can pull it with docker pull gcr.io/skippbox/bazel.

Since this image has Bazel, and since we can build and push a Docker image without the Docker daemon, I can do the build and push from within this image without Docker in Docker. Which means that I can do all of this as a Kubernetes Job. The manifest below shows this: it uses my bazel Docker image and runs the bazel build //nginx:nginx command, having cloned the repo using the gitRepo volume plugin.
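A sketch of such a Job, assuming the image above and the gitRepo volume plugin (the job name, mount path and shell invocation are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: bazel-nginx-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: bazel
        image: gcr.io/skippbox/bazel
        # run the Bazel build inside the cloned repository
        command: ["sh", "-c"]
        args: ["cd /workspace/bazel_containers && bazel build //nginx:nginx"]
        volumeMounts:
        - name: repo
          mountPath: /workspace
      volumes:
      # gitRepo clones the repository into the volume when the pod starts
      - name: repo
        gitRepo:
          repository: https://github.com/bitnami/bazel_containers.git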

Work to be Done

The Job above does not push. We need to configure the authentication token with GCR to be able to do the push, or figure out how to pass the Docker Hub credentials. Also, it would be better to run a Bazel server as a real Kubernetes Deployment and just run builds against that server, so we could take advantage of the caching.

Definitely check it out, as we go back to ./configure && make && make install, except that it smells lovely in the kitchen :)
