Building Docker Images in Kubernetes Using Kaniko

Kevin
KPMG UK Engineering
4 min read · Oct 15, 2021

When you think of container images, Docker is normally the first thing that comes to mind. Outside of Kubernetes this is perfectly fine, but as more companies utilise Kubernetes and see the benefits containerisation can bring, things get a little more complicated.

The traditional approach, and one you will often see in tutorials, is to simply mount the docker.sock Unix socket from the node into a pod, connect to the node's Docker daemon, and continue to use docker to build your containers.

But wait… You just indirectly gave root-level host permissions to this container.

There are numerous warnings about this in the documentation:

Docker running in privileged mode has major security implications

Doing this brings major security concerns: the pod needs to run in privileged mode, meaning you have just given root-level host permissions to the container, which in turn opens your cluster up to many security threats. If this is how you are currently building images, you need to read on. More importantly, with the deprecation of Docker as the main container runtime in Kubernetes, this change is going to be forced on you anyway; either way, you need to read on.

Just to note, if you are using Azure AKS, the Docker runtime has already been replaced with containerd from version 1.19 onwards.

Introducing Kaniko

There are a number of different methods you can use to build images with Kubernetes, ranging from cloud solutions such as ACR Tasks to using BuildKit. However, the one I chose to go forward with is Kaniko, which belongs to Google's suite of container tools.

How does it work?

Kaniko is simply a container that runs inside your cluster; it does not need to connect to the Docker daemon and it does not require privileged mode. When creating the pod, you pass the arguments you require: the build context, the Dockerfile within that context, and finally the destination for the image. Kaniko extracts the base image inside its container, runs the Dockerfile instructions one by one, takes a snapshot of the filesystem after each one, and appends each snapshot layer to the base image.
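
For example, those three arguments look like this (the storage account, container and registry names here are placeholders):

/kaniko/executor \
  --context=https://mystorageaccount.blob.core.windows.net/builds/context.tar.gz \
  --dockerfile=Dockerfile \
  --destination=myregistry.azurecr.io/my-app:latest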

Prerequisites

Although you can use this approach with any Kubernetes platform or CI/CD tooling, this article covers the stack I used:

  • Terraform for IaC
  • Azure Kubernetes Service (AKS)
  • Azure Container Registry (ACR)
  • Task Group within Azure DevOps (CI/CD)

You will need the following Terraform resources on your Kubernetes cluster:

Namespace, Config Maps and Secrets to Connect to ACR and Storage Accounts
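
As a rough equivalent of those Terraform resources, expressed here as kubectl commands rather than Terraform (the namespace, registry and username/key values are placeholders; storage-secret is the name referenced later in this article):

kubectl create namespace kaniko

# Pull/push credentials for ACR, mounted into the Kaniko pod later
kubectl create secret docker-registry acr-secret \
  --namespace kaniko \
  --docker-server=myregistry.azurecr.io \
  --docker-username="$ACR_USERNAME" \
  --docker-password="$ACR_PASSWORD"

# Storage account key so Kaniko can fetch the build context blob
kubectl create secret generic storage-secret \
  --namespace kaniko \
  --from-literal=azure-storage-access-key="$STORAGE_ACCESS_KEY"

# Non-sensitive build settings (the exact contents are assumed)
kubectl create configmap kaniko-config \
  --namespace kaniko \
  --from-literal=registry=myregistry.azurecr.io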

Now that the Kubernetes resources are set up, we can create a task group in Azure DevOps that will be our Kaniko image builder. I will go through each of the five steps of this task group.

Reusable Kaniko Image Builder Task Group

1. Install Kubectl
This is to ensure your agent has the kubectl client that you will use to create your pod and monitor it. Install whichever version you like.

Installation of Kubectl task
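
In pipeline YAML terms, this step might look like the following sketch, assuming the built-in KubectlInstaller task:

steps:
  - task: KubectlInstaller@0
    displayName: Install Kubectl
    inputs:
      kubectlVersion: latest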

2. Upload to Storage Account
This simply packages up the working directory, then uploads it to a container of your choice in the storage account referenced by the storage-secret created above in your Terraform resources.

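A minimal sketch of such a script, assuming an Azure CLI step with placeholder account and container names:

# Kaniko expects a gzipped tarball as a remote build context
tar -czf context.tar.gz .

# Upload it to the storage account referenced by storage-secret
az storage blob upload \
  --account-name mystorageaccount \
  --account-key "$STORAGE_ACCESS_KEY" \
  --container-name builds \
  --name context.tar.gz \
  --file context.tar.gz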

3. Build Deployment Image
This needs to mount the config map and the ACR secret as shown in the YAML, and you'll need to add the storage secret as an environment variable in the pod.

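A sketch of what the generated deploy.yaml can look like, assuming the resource names from the earlier sketches (the config map's mount path and the image tag are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
  namespace: kaniko
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=https://mystorageaccount.blob.core.windows.net/builds/context.tar.gz
        - --dockerfile=Dockerfile
        - --destination=myregistry.azurecr.io/my-app:latest
      env:
        # Storage secret as an environment variable, so Kaniko can
        # download the build context from the storage account
        - name: AZURE_STORAGE_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: storage-secret
              key: azure-storage-access-key
      volumeMounts:
        # ACR credentials mounted where Kaniko looks for its docker config
        - name: docker-config
          mountPath: /kaniko/.docker
        # The config map from the Terraform resources (contents assumed)
        - name: kaniko-config
          mountPath: /config
  volumes:
    - name: docker-config
      secret:
        secretName: acr-secret
        items:
          - key: .dockerconfigjson
            path: config.json
    - name: kaniko-config
      configMap:
        name: kaniko-config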

4. Deploy Kaniko Pod
Printing the deploy.yaml file is optional, but it will make it easier to see what you are deploying in your release:

cat deploy.yaml
kubectl apply -f deploy.yaml

5. Monitor Kaniko
Modify this to suit your needs, but the script below should wait for the pod to become available, print all of the logs from the container, and then remove the pod. Any error returns a non-zero exit code so you can fail the release if need be.

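A minimal sketch of that monitoring script, assuming the pod name and namespace from the deploy.yaml above:

# Wait for the Kaniko pod to start building
kubectl wait pod/kaniko --namespace kaniko --for=condition=Ready --timeout=120s

# Stream the build logs; this blocks until the container exits
kubectl logs kaniko --namespace kaniko --follow

# Capture the result, clean up, and fail the release on error
PHASE=$(kubectl get pod kaniko --namespace kaniko -o jsonpath='{.status.phase}')
kubectl delete pod kaniko --namespace kaniko

if [ "$PHASE" != "Succeeded" ]; then
  echo "Kaniko build failed (phase: $PHASE)"
  exit 1
fi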

Conclusion

Kaniko is just one of many ways you can approach this problem, but it works well, is simple to implement, is secure, and integrates with all of the major cloud providers. Good luck!
