Jenkins, Kubernetes, and Hashicorp Vault

Alister West
Hootsuite Engineering
7 min read · Aug 16, 2018

At Hootsuite we are moving towards having the majority of our services on Kubernetes, and this includes our CI/CD pipelines. Our goal was to use Jenkins, Kubernetes, and Vault to create a CI/CD system that was secure, portable, and scalable.

Figuring out how Jenkins and Kubernetes work together

Going into this, we knew two things: First we wanted Jenkins to be our CI/CD tool, and second we wanted to take advantage of Kubernetes to schedule our jobs. To link the two together we decided to use the jenkins-kubernetes-plugin. This plugin allows the Jenkins Masters to make Kubernetes API calls to schedule builds inside their own Kubernetes Pods.

This provides the following benefits:

  • Isolation: Each build runs in its own Pod, so it can’t affect other builds. As the Kubernetes documentation puts it, each Pod is a logical host.
  • Ephemeral: Pods do a fantastic job of cleaning up after themselves. Pods are by nature ephemeral, so unless we explicitly want to keep changes a Pod makes during its lifetime, everything is erased. No more conflicting workspaces!
  • Build Dependencies: Related to isolation, each job can define exactly what its build needs. Say a build pipeline has several stages: one for building the JavaScript frontend, and another for building the Go backend. Each of those stages can have its own container, simply by pulling the necessary image for each stage.

Below is a pipeline taken from the plugin repo which demonstrates all three benefits. A unique Pod is created, its state is wiped when the build completes, and, most importantly, each build step runs in a container with a specific image. How lovely!

// multicontainer-jenkins-pipeline
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {

    node(label) {
        stage('Get a Maven project') {
            git 'https://github.com/jenkinsci/kubernetes-plugin.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh 'mvn -B clean install'
                }
            }
        }

        stage('Get a Golang project') {
            git url: 'https://github.com/hashicorp/terraform.git'
            container('golang') {
                stage('Build a Go project') {
                    sh """
                    mkdir -p /go/src/github.com/hashicorp
                    ln -s `pwd` /go/src/github.com/hashicorp/terraform
                    cd /go/src/github.com/hashicorp/terraform && make core-dev
                    """
                }
            }
        }
    }
}

Containerized Jenkins Master and Agents

Now that we had an idea of how Jenkins and Kubernetes would work together, we had to move our Jenkins Master and Agent into modern times. This meant defining two rock-solid containers.

For the Jenkins Master we started off by creating an image that would contain all the plugins that it would require. The base of this plugin image was this one. The reason we separated the plugins out was to allow us to update plugins independently from the Jenkins Master. Our plugin base image ended up looking something like this:

# plugins.Dockerfile
FROM jenkins/jenkins:lts

# Install Jenkins plugins
RUN /usr/local/bin/install-plugins.sh \
    super-cool-plugin-1:latest \
    super-cool-plugin-2:latest \
    super-cool-plugin-3:latest \
    # ...
    super-cool-plugin-n:latest

From there we could then build our Jenkins Master image on top of it. One of the nice things about the official Jenkins image is that it offers a lot of flexibility. We took advantage of this by copying in configuration overrides, initial groovy startup scripts, and other files to ensure our Jenkins Master would start up configured and ready to go.

└── usr
    └── share
        └── jenkins
            └── ref
                ├── config.xml.override
                ├── github-plugin-configuration.xml
                ├── init.groovy.d          // All groovy scripts here run on start
                │   └── credentials.groovy
                ├── org.codefirst.SimpleThemeDecorator.xml
                ├── secrets
                │   ├── README.md
                │   └── slave-to-master-security-kill-switch
                └── userContent
                    └── jenkins-material-theme.css
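
With the plugin image as the base, the Master Dockerfile itself can stay small. A minimal sketch, assuming the plugin image is pushed as our-registry/jenkins-plugins (a hypothetical name):

# master.Dockerfile (image names are hypothetical)
FROM our-registry/jenkins-plugins:latest

# Copy in the configuration overrides, init scripts, and other startup files shown above
COPY ref/ /usr/share/jenkins/ref/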

As for our Jenkins Agent, we used this image as the base. The benefit here is that the Kubernetes plugin we chose was developed and tested with this image in mind, so the communication between our Master and Agents is well supported.

Once we’d decided how the images would look, we set up a Jenkins job to regularly build and push our Master and Agent images to our registry. This way we always stay up to date and avoid running outdated versions of Jenkins and its plugins. A sketch of such a job follows.
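
A minimal sketch of such a job as a declarative pipeline; the registry, tags, and trigger here are illustrative, not our actual setup:

// image-refresh job (hypothetical names)
pipeline {
    agent any
    triggers { cron('H H * * *') }  // rebuild roughly once a day
    stages {
        stage('Build and push images') {
            steps {
                sh 'docker build -f plugins.Dockerfile -t our-registry/jenkins-plugins:latest .'
                sh 'docker build -f master.Dockerfile -t our-registry/jenkins-master:latest .'
                sh 'docker push our-registry/jenkins-plugins:latest'
                sh 'docker push our-registry/jenkins-master:latest'
            }
        }
    }
}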

Using Vault to handle our CI/CD secrets

With our images set up, the next step was figuring out if we could #BuildABetterWay to manage our secrets in Jenkins. For this we turned to Vault. *

This choice was made for the following reasons:

  1. It provides a single source of truth; previously our secrets sprawled across our Jenkins Masters and became difficult to manage
  2. We already use Vault extensively at Hootsuite, so we have lots of support and knowledge
  3. Vault supports a Kubernetes Authentication method (more on this below)

The first two reasons are self explanatory, but the third was where things got interesting and also the main focus of my contributions.

Previously we were using AppRole Authentication, and while it worked well, it meant we had a secret_id and role_id to manage. Ideally, we wanted a way for our Pods to tell Vault that they belong to a certain Kubernetes cluster and should be granted certain access. This is where Kubernetes Authentication comes in.

I’ve outlined the steps for our Kubernetes Cluster to authenticate with Vault:

  1. Before anything happens, we set up the Vault and Kubernetes relationship by giving Vault some information about our cluster:
    the cluster’s CA cert, the host of our Kubernetes cluster, a Vault policy, and a Vault role that is mapped to our Kubernetes namespace/ServiceAccount (see the CLI sketch after this list).
    With that completed, Vault knows which Kubernetes cluster to respond to and which ServiceAccount in the cluster is allowed to authenticate against the Vault role.
  2. When we define the Jenkins Master Pod, we add a field that attaches a ServiceAccount to that Pod. This ServiceAccount is referenced when the Pod starts up and is used to retrieve the account’s JWT.
  3. Once the JWT is retrieved, it is sent over to Vault, which then forwards it to Kubernetes.
  4. Vault then receives a response from Kubernetes confirming that the JWT came from the correct namespace and really belongs to the ServiceAccount it claims to be.
  5. With that confirmation, Vault knows the Pod has the right ServiceAccount, which maps to a Vault role, so Vault returns a VAULT_TOKEN that the Pod can then use.

The five steps, visualised
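
To make step 1 concrete, here is a rough sketch of the Vault CLI commands involved. The API server address and policy name are placeholders; the jenkins role, namespace, and ServiceAccount match the ones used later in this post:

# Enable the Kubernetes auth method
vault auth enable kubernetes

# Give Vault the cluster's host and CA cert (placeholder address)
vault write auth/kubernetes/config \
    kubernetes_host="https://<k8s-api-server>:6443" \
    kubernetes_ca_cert=@ca.crt

# Map a Vault role to the jenkins ServiceAccount in the jenkins namespace
vault write auth/kubernetes/role/jenkins \
    bound_service_account_names=jenkins \
    bound_service_account_namespaces=jenkins \
    policies=jenkins-policy \
    ttl=1h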

What’s great is that there is no secret to manage, and the Pod only needs to use the Kubernetes API. From the Pod’s perspective, the startup script of a container would do something like:

# Get the bearer token that gets mounted into all Pods for use in making k8s API calls
KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)

# Retrieve the name of the secret associated with a ServiceAccount
JENKINS_JWT_SECRET=$(curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/jenkins/serviceaccounts/jenkins | jq -r .secrets[0].name)

# Get the JWT that is stored inside that secret
JENKINS_JWT_64=$(curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/jenkins/secrets/$JENKINS_JWT_SECRET | jq -r .data.token)

# Decode the base64-encoded secret to recover the raw JWT (whose segments are base64url, as Vault expects)
JENKINS_JWT_64URL=$(echo -e "import base64\nprint(base64.urlsafe_b64decode(\"$JENKINS_JWT_64\").decode('utf-8'))" | python)

# Authenticate against a Vault server with the JWT
VAULT_LOGIN_RESULT=$(vault write -format json auth/kubernetes/login role=jenkins jwt=$JENKINS_JWT_64URL)

The Vault token inside VAULT_LOGIN_RESULT can then be used for subsequent calls to Vault.
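
For instance, a minimal sketch of extracting the token and reading a secret with it (the secret path below is a hypothetical example):

# Pull the client token out of the login response
export VAULT_TOKEN=$(echo "$VAULT_LOGIN_RESULT" | jq -r .auth.client_token)

# The Vault CLI picks up VAULT_TOKEN for subsequent calls
vault read -format json secret/jenkins/github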

At this point you might be wondering how we go from Vault secrets to Jenkins Credentials. This is where the initial groovy startup scripts come in. On startup, our entrypoint script reads Jenkins-related secrets from Vault and writes the values as JSON objects into a temporary file. The startup script then reads this temporary file and converts those values into Jenkins Credentials.

So at a Vault path containing Jenkins Credentials, we would keep something like:

{
  "type": "username-password",
  "scope": "GLOBAL",
  "description": "API token for Github",
  "username": "my-github-username",
  "password": "my-github-token"
}

On the Jenkins Master, this gets converted to:

A Hashicorp Vault Secret converted to a Jenkins Credential

For information on how to programmatically add credentials check here.
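
As a rough sketch of what such a startup script might do, here is an illustrative credentials.groovy that turns the JSON above into a Jenkins Credential using the credentials plugin’s API; this is not our exact script, and the file path and credential id are hypothetical:

import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.plugins.credentials.domains.Domain
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl
import groovy.json.JsonSlurper

// Read the temporary file the entrypoint script wrote (hypothetical path)
def secret = new JsonSlurper().parse(new File('/tmp/jenkins-secrets.json'))

// Build a username/password credential from the Vault secret's fields
def cred = new UsernamePasswordCredentialsImpl(
    CredentialsScope.valueOf(secret.scope),  // e.g. GLOBAL
    'github-api-token',                      // credential id (hypothetical)
    secret.description,
    secret.username,
    secret.password
)

// Register it in the global credentials store
SystemCredentialsProvider.instance.store.addCredentials(Domain.global(), cred)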

With all that done, we now have a way to securely retrieve CI/CD secrets from Vault and a way to convert them to Jenkins Credentials when needed.

* For those wondering why we didn’t use something like the Jenkins Vault Plugin, it was lacking in a few areas:

  1. It did not support Kubernetes Authentication
  2. The secrets could not be used to make Jenkins Credentials which other plugins can use
  3. It would mean adding initial setup scripts to our Jenkinsfiles

Summary

The steps in this post describe how you can improve your Jenkins CI/CD pipeline with Docker, Kubernetes, and Vault. Revisiting our goals (secure, portable, and scalable): we added security by letting Vault handle our CI/CD secrets; we used Docker to build portable, self-contained Jenkins Master and Agent images; and with Kubernetes orchestrating these containers, we can handle dynamic workloads with dynamic scaling.

About the Author

David Jung is a co-op student on the Production, Operations, and Delivery team. He is currently studying Computer Engineering at The University of British Columbia. Connect with him on LinkedIn.
