Deploy Jenkins to Google Kubernetes Engine with Helm

Tim Berry
Jun 15, 2018


Please see the updated version of this post at https://timberry.dev/posts/deploy-jenkins-to-gke-with-helm/

If you followed my last post you now have a Google Kubernetes Engine cluster up and running in Google Cloud Platform, ready to start orchestrating things for you. In this guide we’ll explore how we can interact with our new cluster and install the Jenkins CI tool to help us automate future deployments. We’ll cover a lot of ground here, so I’ll link to some deep dives on the specific tools just in case you need to take a quick detour and brush up on anything.

gcloud + kubectl

kubectl is the de facto command line tool for interacting with your Kubernetes cluster. If you’re already using the gcloud command line tool then it’s likely you have kubectl installed, but if not you can download it here. It can be configured to control multiple clusters, each within their own “context”. So to begin, we’ll use gcloud to authenticate with Kubernetes and create a context for kubectl.
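If you end up with more than one cluster configured, kubectl lets you list and switch between contexts. A quick sketch (the context name below is just a placeholder):

# Show all configured contexts; the current one is marked with an asterisk
kubectl config get-contexts

# Point kubectl at a different cluster
kubectl config use-context my-other-context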

First let’s check on the cluster we created previously:

gcloud container clusters list

This should give you a list of any GKE clusters you have along with version information, status, number of nodes etc. Your output should look similar to this:

NAME                 LOCATION       MASTER_VERSION
my-first-gke-cluster europe-west1-b 1.8.10-gke.0

Our cluster looks good! Let’s now use gcloud to set up the context for kubectl. You’ll need to specify the name of your cluster and its location:

gcloud container clusters get-credentials my-first-gke-cluster --zone=europe-west1-b

gcloud will tell you that it has generated a kubeconfig entry for kubectl. We can check that it works by querying the list of running pods in our cluster:

kubectl get pods --all-namespaces

Woah! There’s a bunch of stuff running already. Tools like kube-dns, heapster and fluentd are part of the managed services running on your GKE cluster. If this is your first time using kubectl or running things on a Kubernetes cluster, I’d recommend you take a quick break and follow this tutorial on the Kubernetes site. There’s no point in me rehashing a great tutorial just to create yet another Medium post :-) Instead, I’m skipping over the basics so we can concentrate on Helm and Jenkins.
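Before moving on, it’s worth knowing how to double-check which cluster kubectl is currently pointing at:

# Print the name of the active context
kubectl config current-context

# Show the API endpoints of the cluster that context points to
kubectl cluster-info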

Helm

Hopefully you are familiar with the concepts of deployments, services and other Kubernetes objects and how they can be declared and instantiated on Kubernetes clusters. The Helm project started as a Kubernetes sub-project to provide a way to simplify complex applications by grouping together all of their necessary components, parameterising them and packaging them up into a single Helm “chart”. This is why Helm calls itself the package manager for Kubernetes. It has now reached a certain maturity and has been accepted by the Cloud Native Computing Foundation as an incubator project in its own right.

Helm charts abstract deployments, which abstract pods, which abstract containers, which abstract applications, which run on a hypervisor, which run on an operating system, which… look, here’s a turtle, okay?

Helm charts are easy to write, but there are also curated charts available in their repo for most common applications. To get started with Helm, download and install the binary from here. There are 2 components to Helm:

  1. The helm client (called, you guessed it, helm)
  2. The helm server component, called tiller, which is responsible for handling requests from the helm client and interacting with the Kubernetes APIs

Before we install tiller on our cluster we will need to quickly set up a service account with a defined role for tiller to operate within. This is due to the introduction of Role Based Access Control (RBAC) — another huge subject for a different guide. But don’t panic, it’s actually very easy to set up. Create the following tiller-rbac.yaml file:
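This is the standard minimal setup: a tiller ServiceAccount in kube-system bound to the cluster-admin ClusterRole. cluster-admin is convenient for a demo cluster like ours, but far too permissive for production use.

# Service account that tiller will run as
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
# Bind that service account to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system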

Then apply it to your cluster with:

kubectl apply -f tiller-rbac.yaml

You are now ready to set up helm and install tiller. Run the following command and you should be good to go:

helm init --service-account tiller
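helm init deploys tiller as a deployment called tiller-deploy in the kube-system namespace, so you can check on its pod directly:

# Check that the tiller-deploy pod has reached Running status
kubectl get pods --namespace kube-system | grep tiller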

Wait a few minutes to allow the tiller pod to spin up, then run helm version. You should get something like this:

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
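With tiller up, the client can now talk to the stable chart repository. A quick way to sanity-check this (and see what’s on offer) is to refresh your local chart cache and search it:

# Refresh the local cache of chart repositories
helm repo update

# Search the cached repositories for Jenkins charts
helm search jenkins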

Okay, onto Jenkins!

Jenkins

Jenkins has been around a long time, and is essentially an automation server written in Java. It is commonly used for automating software builds and more recently can be found providing Continuous Integration services as well. In my personal opinion this tends to be because Jenkins is a “kitchen-sink”; in other words, you can pretty much do anything with it. This doesn’t mean it’s the best tool for the job, and I’m looking forward to evaluating some alternatives in future posts. One of my biggest gripes with Jenkins is that traditionally it’s been a pain to automate its own installation: its XML config doesn’t lend itself to being managed easily, the Puppet module for Jenkins is buggy and out of date, and managing Jenkins plugins can land you in dependency hell.

Thankfully Helm has come to the rescue with a magic chart that takes most of the pain away from you. In fact once you have helm and tiller configured for your cluster, deploying the Jenkins application — including persistent volumes, pods, services and a load balancer — is as easy as running one command:

helm install --name my-release stable/jenkins

Helm will output some helpful instructions that guide you in accessing your newly deployed application (although it may take a few minutes for everything to get up and running the first time). You should be able to access Jenkins via its external IP address, and grab the admin password following the instructions Helm gave you.
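If you lose that output, the commands it prints look roughly like this. I’m assuming the release name my-release and the default namespace here; the exact secret and service names are generated by the chart, so check yours if these don’t match:

# Decode the generated admin password from its Kubernetes secret
printf $(kubectl get secret --namespace default my-release-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo

# Watch for the load balancer's external IP to be assigned
kubectl get svc --namespace default my-release-jenkins

Once the EXTERNAL-IP column shows an address, browse to it on the service port (8080 by default in this chart) and log in as admin with the decoded password.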

Quick note: By default this will stand up an external load balancer with a public IP. This is not very secure, and it will incur costs if you leave it running. You’re advised to delete all these resources when you’re finished with this guide.

Helm manages the lifecycle of its deployments, so you can manage your release (which in this example we called my-release) with some simple commands:

  1. helm status my-release — Outputs some useful information about the deployed release
  2. helm delete --purge my-release — Deletes the release from your cluster and purges any of its resources (services, disks etc.)
  3. helm list — Show all releases deployed to the cluster
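Helm also keeps a numbered revision history for each release, which means upgrades can be inspected and rolled back (the revision number below is just an example):

# Show the revision history of the release
helm history my-release

# Roll the release back to revision 1
helm rollback my-release 1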

Most Helm charts make use of a parameters file to define the attributes of a deployment, such as docker image names, resources to assign, node selectors and so forth. We didn’t specify a parameters file in the above example, so we just inherited the default one from the published Jenkins chart. Sometimes it’s useful to provide your own values, and we can do that by obtaining a copy of the default file and modifying it. You can grab the values.yaml for Jenkins from here.
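Alternatively, you don’t have to fetch the file from GitHub at all: helm can dump a chart’s default values straight to disk for you.

# Write the chart's default values to a local file for editing
helm inspect values stable/jenkins > values.yaml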

Have a browse through this file and you should start to see how these values map to the pods and services that are deployed as part of this chart. For demonstration purposes, we’ll just make one change here, adding an extra plugin to the InstallPlugins list (around line 80):

    - blueocean:1.5.0

Now we can upgrade our release and apply the new values:

helm upgrade -f values.yaml my-release stable/jenkins

If you quickly run kubectl get pods, you will see that the old version of your release is terminating and the new one is starting up. Once the new release is deployed, the external IP should be the same but you’ll need to retrieve the new admin password.

Once you’ve logged in you can see that Jenkins has attempted to install the BlueOcean plugin that we specified. However you may also see some errors. It appears that even with a well-written Helm chart, we can’t always escape Jenkins dependency-hell…

Welcome to Jenkins. It’s 2018.

We can fix this by painstakingly going through the plugin dependencies and correcting the version numbers in our values.yaml file, then upgrading our release again. Don’t worry, I’ve done the hard work for you:

  InstallPlugins:
    - workflow-multibranch:2.19
    - kubernetes:1.8.4
    - workflow-aggregator:2.5
    - workflow-job:2.21
    - credentials-binding:1.16
    - git:3.9.1
    - blueocean:1.6.0

One last thing: Newer versions of Kubernetes enforce the use of Role Based Access Control (RBAC), so at the bottom of values.yaml make sure that you enable this:

rbac:
  install: true

Once you’ve updated these values, just run the previous helm upgrade command again.

Now that our Jenkins system is up and running, we’ll put it to work with a custom agent for continuous deployment of the infrastructure code we built in the first place! All this, and nothing else, in the next post…
