Building Docker Containers on a Kubernetes (K8s) Stack with Jenkins

Ross Williams
Rosco’s Blog Posts
May 13, 2019 · 5 min read

So, you have set up a K8s cluster to run Jenkins and build and deploy your code on dynamic agents rather than static Jenkins slaves. Great! Now, within your pipeline, you need Docker functionality to perform Docker-related tasks. Sounds simple, right? You’d be surprised how difficult this can be. In this blog post we will demonstrate several methods, along with the pros and cons of each.

This blog post assumes you already have a Jenkins master running in K8s, configured with the required plugins.

Implementation 1: Share the host socket

Sharing the Docker Socket from the K8s host

The first solution to this problem is to share the host’s Docker socket with the Jenkins slave. This can be achieved by configuring the K8s pod template within the Jenkins configuration settings, using the following options:

Kubernetes Pod Template

Name: jnlp (this can be anything, but for the purposes of clarity call it jnlp)
Namespace: default (this can be left blank unless you want to specify a namespace within K8s)
Labels: dockerSock (this is how you reference the pod template within a Jenkins pipeline)
Usage: Only build jobs with label expressions matching this node

Container Template

# BE AWARE THAT IF YOU USE THE OFFICIAL JENKINS JNLP SLAVE IMAGE THE NAME OF THE CONTAINER HAS TO BE ‘jnlp’

Name: jnlp
Docker Image: <reponame>/<docker_image_name> (create a Docker image from jenkins/jnlp-slave (https://hub.docker.com/r/jenkins/jnlp-slave), add the `docker-cli` package to the image, then push it to Docker Hub)
Working directory: /home/jenkins

EnvVars

Key: JENKINS_URL
Value: http://jenkins-master:8080 (a DNS name we set up in K8s that resolves to the Jenkins master)

Volumes:

Host Path Volume:
  Host Path: /run/docker.sock
  Mount Path: /var/run/docker.sock

The Host Path needs to be the location of docker.sock on the Kubernetes node that runs the pod. Use a simple find command to locate it (on most distributions /var/run is a symlink to /run, so either path works).
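If you prefer to keep this configuration in code rather than in the Jenkins UI, the Kubernetes plugin’s podTemplate pipeline DSL can express the same settings. Below is a minimal sketch, reusing the <reponame>/<docker_image_name> placeholder from above:

podTemplate(label: 'dockerSock', containers: [
    // The container name must be 'jnlp' when basing the image on the official slave
    containerTemplate(
        name: 'jnlp',
        image: '<reponame>/<docker_image_name>',
        workingDir: '/home/jenkins'
    )
  ],
  volumes: [
    // Mount the node's Docker socket into the slave container
    hostPathVolume(hostPath: '/run/docker.sock', mountPath: '/var/run/docker.sock')
  ]) {
    node('dockerSock') {
        stage('Docker check') {
            sh 'docker info'   // this is the step that triggers the error below
        }
    }
}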

Now that we have the docker socket located, you would think that’s enough, right? Wrong! If we try to run our pipeline now, we will get an error like the one below:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/info: dial unix /var/run/docker.sock: connect: permission denied

Why do we get this error? Containers require a new mindset: think of a container as a process. That process runs as a user. If we look at our socket on the host, you will see something like this:

ls -lah /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 13 16:44 /var/run/docker.sock

Because our container is a process, it needs to run as a user whose permissions match those on the host; typically that means a member of the docker group. Below we use root for simplicity, but a more appropriate solution is to create a new user on the host, add it to the ‘docker’ group, and use that user’s ID.

Now, in the ‘Raw Yaml for the Pod’ section, add the snippet below, making sure the indentation matches exactly (two spaces per level):

spec:
  securityContext:
    runAsUser: 0
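With the securityContext in place, a pipeline targeting the dockerSock label can build images against the host daemon. A minimal sketch (the image tag example/my-app is purely illustrative):

node('dockerSock') {
    stage('Build') {
        // The Docker CLI in the slave talks to the host daemon via the mounted socket
        sh 'docker build -t example/my-app:latest .'
    }
}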

Pros

  • Build and run within one step

Cons

  • It’s very insecure: essentially, any user with the ability to run a container in the pipeline has root access to the host

Implementation 2: Old School Docker in Docker

Running Docker in Docker on a Jenkins slave in K8s

The second approach to this problem is to run a second Docker daemon directly on the slave. This avoids exposing the node’s Docker socket to the slave, isolating the container from the host’s Docker daemon. This approach is not without security concerns, though: to run a Docker daemon within a container, the container hosting it (in our case, the Jenkins slave) must run with the --privileged flag. This does a few things:

  1. Enables access to all devices on the host
  2. Sets some configuration in AppArmor / SELinux

TL;DR A container running under the --privileged flag can do almost everything that the host can do.

So what does this practically look like?
Docker CLI


docker run --privileged example/jenkins-slave-with-docker

K8s Pod Template

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
    - name: jenkins-slave-with-docker
      image: example/jenkins-slave-with-docker
      securityContext:
        privileged: true

Note: To use the above YAML snippet you will need a container image with the Docker CLI built in. We took the pre-built Jenkins jnlp slave image, bolted the Docker CLI onto it, and pushed it to Docker Hub.
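The same pod template can be written with the Kubernetes plugin’s pipeline DSL. A sketch, assuming (as in the note above) that example/jenkins-slave-with-docker is the jnlp slave image with Docker bolted on, and that it also starts its own Docker daemon:

podTemplate(label: 'dind', containers: [
    containerTemplate(
        name: 'jnlp',   // must be 'jnlp' for the official slave image, per the warning above
        image: 'example/jenkins-slave-with-docker',
        workingDir: '/home/jenkins',
        privileged: true   // required for the nested Docker daemon
    )
]) {
    node('dind') {
        stage('Docker check') {
            sh 'docker info'   // served by the daemon running inside the slave container
        }
    }
}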

Implementation 3: Use a Docker sidecar container

Docker in Docker using the Sidecar Container design pattern

The last solution to this problem, and the most secure (although still not optimal), is to implement a sidecar container that runs the Docker functionality on behalf of the Jenkins slave. This completely isolates all Docker commands to the sidecar. Below we will use the same Jenkins slave image with the Docker CLI bolted on, but we will add an extra container to act as the sidecar.

Kubernetes Pod Template

Name: jnlp
Namespace: default (this can be left blank unless you want to specify a namespace within K8s)
Labels: dind-sidecar (this is how you reference the pod template within a Jenkins pipeline)
Usage: Only build jobs with label expressions matching this node

Container Template

# BE AWARE THAT IF YOU USE THE OFFICIAL JENKINS JNLP SLAVE IMAGE THE NAME OF THE CONTAINER HAS TO BE ‘jnlp’

Name: jnlp
Docker Image: <reponame>/<docker_image_name> (create a Docker image from jenkins/jnlp-slave (https://hub.docker.com/r/jenkins/jnlp-slave), add the `docker-cli` package to the image, then push it to Docker Hub)
Working directory: /home/jenkins

EnvVars

Environment Variable
Key: JENKINS_URL
Value: http://jenkins-master:8080 (a DNS name we set up in K8s that resolves to the Jenkins master)

Environment Variable
Key: DOCKER_HOST
Value: tcp://localhost:2375

Note: The sidecar container, dind, exposes the Docker REST API on port 2375. Setting DOCKER_HOST to tcp://localhost:2375 points the Docker CLI in the main container at the sidecar’s daemon; containers in the same pod share a network namespace, so the sidecar is reachable on localhost.

Add Container

Here is where we specify the sidecar container for adding Docker functionality to the slave. Click the Add Container button and use the following settings:

Name: dind
Docker Image: docker:18.09-dind
Working Directory: /var/lib/docker
Advanced:
  Run in privileged mode: Yes

To find the privileged checkbox, click the ‘Advanced’ button and it will appear below.
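For completeness, here is roughly the same sidecar setup expressed with the Kubernetes plugin’s pipeline DSL (a sketch; the image placeholder is as above, and example/my-app is purely illustrative):

podTemplate(label: 'dind-sidecar', containers: [
    containerTemplate(
        name: 'jnlp',
        image: '<reponame>/<docker_image_name>',   // the jnlp slave with the Docker CLI added
        workingDir: '/home/jenkins',
        envVars: [
            // Point the Docker CLI at the sidecar daemon
            envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')
        ]
    ),
    containerTemplate(
        name: 'dind',
        image: 'docker:18.09-dind',
        workingDir: '/var/lib/docker',
        privileged: true   // only the sidecar runs privileged
    )
]) {
    node('dind-sidecar') {
        stage('Build') {
            // Runs in the jnlp container; the CLI talks to the dind sidecar over TCP
            sh 'docker build -t example/my-app:latest .'
        }
    }
}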

Pros

  • Build and run within one step
  • Isolates Docker commands within a sidecar rather than exposing the host socket to run them

Cons

  • The sidecar container still runs in privileged mode, but the attack surface is significantly reduced because all Docker commands are localised to the sidecar
