Automating Docker Builds in Kubernetes

aka, what would it take to build Google Container Builder (Part II; Part I was _building with kaniko_).

Last week I wrote a short post about using kaniko to build Docker images without the Docker daemon and within unprivileged containers.

I thought I would follow up within the hour with a post on how to do it inside a Kubernetes Pod, but I got hit by a very weird bug (still unsolved).

The idea is simple: since you can build a Docker image within an unprivileged Docker container, you can do the same within a Kubernetes Pod. Not only can you build a Docker image, but you can also push it to a remote registry (e.g., Docker Hub).

This means that you can set up a continuous build system for your Docker images within Kubernetes without resorting to insecure Docker socket mounting.

Most people would set this up using a GitHub webhook. I am not a fan, because you expose a public endpoint to the world, you need to secure it, you might get DDoSed, etc. I prefer the poor man's alternative to the Jenkins/GitHub webhook/Docker Hub automated build setup: just use a plain old CronJob.

A CronJob will pull your source code on a schedule, build your container image, and push the result to a registry. Of course there are downsides to this: you might push the same image repeatedly, you will use more resources, etc. But it is also super easy.
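As an aside, the CronJob API itself can mitigate some of these downsides. A sketch of fields you could add to the spec (the values here are illustrative, not what I used):

```yaml
# Optional CronJob spec fields that help limit resource usage (illustrative values):
spec:
  schedule: "*/30 * * * *"        # build less often than every minute
  concurrencyPolicy: Forbid       # do not start a new build while one is still running
  successfulJobsHistoryLimit: 3   # keep only the last 3 successful Jobs around
  failedJobsHistoryLimit: 1
```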

The CronJob

Stick the following into a file called cronjob.yaml, or if you are French, call it wearethechampions.yaml:


apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: build
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
          - name: init
            image: gcr.io/cloud-builders/git
            args:
            - "clone"
            - "https://github.com/sebgoa/functions.git"
            - "/tmp"
            volumeMounts:
            - name: source
              mountPath: /tmp
          containers:
          - name: kaniko
            image: gcr.io/kaniko-project/executor
            args: ["--dockerfile=/workspace/Dockerfile", "--context=/workspace/", "--destination=docker.io/runseb/barbaz"]
            volumeMounts:
            - name: source
              mountPath: /workspace
            - name: docker
              mountPath: "/kaniko/secrets"
              readOnly: true
            env:
            - name: DOCKER_CONFIG
              value: "/kaniko/secrets"
          restartPolicy: OnFailure
          volumes:
          - name: source
            emptyDir: {}
          - name: docker
            secret:
              secretName: docker

The manifest above is a Kubernetes CronJob. The main container in the Job runs the kaniko build that I mentioned in my first post.

Build Steps

Anything you need to do to prepare your build, like pulling the source, you can do within init-containers.

For init-containers, I use the Google Container Builder community build steps.

So yes, you get the idea: the steps within a _Build_ are implemented as init-containers. This is quite clever IMHO :)
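Since init-containers run sequentially, adding a build step is just a matter of appending another entry to the list. For example, a hypothetical step that runs tests against the cloned source before kaniko builds the image (the test image and command are assumptions for illustration, not part of the original manifest):

```yaml
initContainers:
- name: init
  image: gcr.io/cloud-builders/git
  args: ["clone", "https://github.com/sebgoa/functions.git", "/tmp"]
  volumeMounts:
  - name: source
    mountPath: /tmp
# Hypothetical second step: run the repo's tests before building the image
- name: test
  image: python:3-slim
  command: ["python", "-m", "pytest", "/workspace"]
  volumeMounts:
  - name: source
    mountPath: /workspace
```

If any init-container fails, the Pod never reaches the kaniko container, so a broken build is never pushed.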

Taking it for a Spin

First, the not so great thing about this: you need to store your Docker Hub config.json in a Kubernetes secret (which of course you have configured to be encrypted at rest).


kubectl create secret generic docker --from-file $HOME/.docker/config.json
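If you prefer a declarative setup, the equivalent Secret manifest looks roughly like this (the base64 payload is a placeholder; substitute your own encoded config.json):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker
type: Opaque
data:
  # base64-encoded contents of $HOME/.docker/config.json
  config.json: <base64-encoded config.json>
```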

Create the CronJob


kubectl create -f cronjob.yaml

After one minute, a Job will appear, which will create a Pod, which will start executing each step in your build, i.e., running the init-containers of your Pod.


$ kubectl get pods
NAME                     READY     STATUS     RESTARTS   AGE
build-1532357760-4vtd2   0/1       Init:0/1   0          2m

And then, in the main step, kaniko will build your image using the Dockerfile in your GitHub repo and push it to your Docker Hub account.

$ kubectl logs -f build-1532357760-4vtd2 -c kaniko
time="2018-07-23T02:51:10Z" level=info msg="Unpacking filesystem of debian:stable-slim…"
time="2018-07-23T02:51:11Z" level=info msg="Mounted directories: [/kaniko /var/run /proc /dev /dev/pts /sys /sys/fs/cgroup /sys/fs/cgroup/systemd /sys/fs/cgroup/net_cls /sys/fs/cgroup/hugetlb /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/devices /sys/fs/cgroup/perf_event /sys/fs/cgroup/pids /sys/fs/cgroup/memory /sys/fs/cgroup/cpuset /sys/fs/cgroup/freezer /sys/fs/cgroup/blkio /dev/mqueue /workspace /kaniko/secrets /dev/termination-log /etc/resolv.conf /etc/hostname /etc/hosts /dev/shm /var/run/secrets/kubernetes.io/serviceaccount /proc/asound /proc/bus /proc/fs /proc/irq /proc/sys /proc/sysrq-trigger /proc/kcore /proc/timer_list /proc/scsi /sys/firmware]"
time="2018-07-23T02:51:11Z" level=info msg="Unpacking layer: 0"
time="2018-07-23T02:51:13Z" level=info msg="Not adding /dev because it is whitelisted"
time="2018-07-23T02:51:13Z" level=info msg="Not adding /etc/hostname because it is whitelisted"
time="2018-07-23T02:51:13Z" level=info msg="Not adding /etc/resolv.conf because it is whitelisted"
time="2018-07-23T02:51:17Z" level=info msg="Not adding /proc because it is whitelisted"
time="2018-07-23T02:51:18Z" level=info msg="Not adding /sys because it is whitelisted"
time="2018-07-23T02:51:30Z" level=info msg="Not adding /var/run because it is whitelisted"
time="2018-07-23T02:51:30Z" level=info msg="Taking snapshot of full filesystem…"
time="2018-07-23T02:51:33Z" level=info msg="cmd: Add [foo.py]"
time="2018-07-23T02:51:33Z" level=info msg="dest: /foo.py"
time="2018-07-23T02:51:33Z" level=info msg="cmd: copy [foo.py]"
time="2018-07-23T02:51:33Z" level=info msg="dest: /foo.py"
time="2018-07-23T02:51:33Z" level=info msg="Copying file /workspace/foo.py to /foo.py"
time="2018-07-23T02:51:33Z" level=info msg="Taking snapshot of files [/foo.py]…"
2018/07/23 02:51:34 mounted blob: sha256:7d1463d31d7e5ad679ea175cd72afede858628ca49598925a17410e452d5ccec
2018/07/23 02:51:35 pushed blob sha256:cc196c02d4dee62c087e203c30375d89b7dbc7f0ccc60b0bbe1df54b0c3a93df
2018/07/23 02:51:36 pushed blob sha256:66df180bea0e5ac767d80718307f07ecac1783fa9361fcd25da45d5261ef08d8
index.docker.io/runseb/barbaz:latest: digest: sha256:07ace67d9f5d7179b7208bb765093e3017d6bac30df15a1150c638e512c1301f size: 589

Now imagine if this could be streamlined a bit, maybe with a better primitive… and if you could use a webhook if you wanted to… Stay tuned for Part III :)