Parameterised Kubernetes deployments without Helm via Jinja and GCP Cloud Build

Ivan N.
Datasparq Technology
4 min read · Mar 12, 2020

… when you want Helm without Helm

TL;DR:
We wanted parameterised Kubernetes deployments, but Helm was too complicated to integrate with our CI/CD. So we solved the problem with Jinja templates and a Python script running in Cloud Build.

Code is available as a GitHub gist: here

The Problem

We have a product called Houston that simplifies workflows. It has an API, a web interface, and a couple of other components that all live in Kubernetes. We wanted a continuous integration/deployment setup where testing, staging and production all ran on the same cluster but in different namespaces. And because we love GCP, we wanted to keep it simple and use Cloud Build, as it supports Kubernetes straight out of the box.

In essence this is what we were aiming for:

Git -> Cloud Build -> Kubernetes
Simplified CI/CD diagram: different branches deploy to different environments (master gets deployed to production, develop to testing)

The basic requirements were along the lines of:

  • Different ConfigMap values for each namespace
  • Different container versions for each namespace
  • The ability to add logic such as: if $ENV != 'prod' use callhouston.io else some.testing.server.elsewhere.com
  • The ability to ignore a deployment based on the $ENV variable (e.g. not deploying the web app if we are only testing the API)

Why Helm did NOT work:

There were several reasons why Helm was making things more complicated:

  • It required a Tiller server. We went down the path of something called Tillerless Helm, but it just added more complexity.
  • It could very well have been just me, but I couldn’t find a helm command that would both install a new chart and upgrade an existing one. helm upgrade --install was great for upgrading but not for fresh installs, at least with the version we were using at the time.
  • Helm charts felt too complex and were a pain in the ass to write and debug. We also couldn’t find an easy way to parameterise charts so that they would produce different results based on environment variables.
  • On a personal note, my OCD screamed in agony every time helm would create a new revision of a deployment even if nothing had changed. After a week of high-frequency testing and debugging, all my deployments were at revision 42. And just like a Douglas Adams character, I started wondering if I was asking the right questions.

[…] we could not find an easy way to parameterise charts so that they would produce different results based on environment variables.

The Jinja Solution

Whilst struggling with the Helm charts, I noticed how similar they were to the Jinja syntax. This was my Eureka moment (minus the bathtub), and the pythonista in me jumped at the opportunity.
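To make that concrete, here is a minimal sketch of what one of these templated manifests could look like. Everything here is illustrative rather than our actual manifests: the variable names ENV and WEB_VERSION, the image path, and the 'api-testing' value are all stand-ins.

```yaml
{# web-deployment.yaml — rendered by Jinja before kubectl ever sees it #}
{% if ENV != 'api-testing' %}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: houston-web
  namespace: {{ ENV }}
spec:
  replicas: {{ 3 if ENV == 'prod' else 1 }}
  selector:
    matchLabels:
      app: houston-web
  template:
    metadata:
      labels:
        app: houston-web
    spec:
      containers:
        - name: web
          image: eu.gcr.io/example-project/houston-web:{{ WEB_VERSION }}
          env:
            - name: API_HOST
              value: "{{ 'callhouston.io' if ENV != 'prod' else 'some.testing.server.elsewhere.com' }}"
{% endif %}
```

When ENV is 'api-testing' the whole file renders to nothing, which is how a deployment gets skipped for an environment (the render script below removes empty output before kubectl ever runs).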

The Python script that did all of the “heavy lifting” was about 30 lines of code, including comments, and it did only two things (a sketch follows the list):

  • Parse all YAML files in a given directory and its subdirectories
  • Read all the environment variables via os.environ and pass them as parameters to Jinja
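A minimal sketch of such a script, assuming the templates live under a k8s/ directory and that a file rendering to nothing means “skip this deployment for this environment”:

```python
import os
import pathlib
import sys

from jinja2 import StrictUndefined, Template

# Directory holding the templated manifests (default: k8s/).
template_dir = sys.argv[1] if len(sys.argv) > 1 else "k8s"

# Every environment variable becomes a Jinja parameter.
params = dict(os.environ)

for path in pathlib.Path(template_dir).rglob("*.yaml"):
    source = path.read_text()
    # StrictUndefined makes a missing variable fail the build loudly
    # instead of silently rendering as an empty string.
    rendered = Template(source, undefined=StrictUndefined).render(**params)
    if rendered.strip():
        # Overwrite in place; the change persists for later build steps.
        path.write_text(rendered)
    else:
        # The whole file sat inside a false {% if %}: remove it so
        # kubectl apply never sees it.
        path.unlink()
```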

All we needed next was to find a public Docker container that comes with Jinja2, as running pip install on every build felt a bit wasteful. Luckily for us, someone had already solved that: ‘pinterb/jinja2’

The reason why this ended up working so well comes down to the way Cloud Build works:

  • At the beginning of every build, it clones your repo at the specified branch and then runs a series of Docker containers on top of it.
  • You can specify any GCP-provided container, any public one from Docker Hub, or any other container you have access to.
  • Any file changes made to your repo persist for the duration of the build.
  • Cloud Build will substitute any custom $_VARIABLE_NAME in the build script, which you need to pass in the trigger (see image below). We use Terraform for setting those up (see example code at the bottom). The only annoying part is that when you are passing those vars to a Cloud Build stage (container), you need to explicitly define them all.
The Cloud Build trigger for the develop branch. You can see the variables at the bottom; there is another one, not shown here, called ‘_ENVR’ that has a value of ‘testing’
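Put together, a trimmed-down cloudbuild.yaml for this setup might look like the following. Everything here is illustrative: render.py is a stand-in name for the script above, we assume the pinterb/jinja2 image exposes a Python interpreter, and the zone/cluster values are placeholders.

```yaml
steps:
  # Render the Jinja templates in place. Substitutions are not visible
  # inside the container unless each one is forwarded explicitly.
  - name: 'pinterb/jinja2'
    entrypoint: 'python'
    args: ['render.py', 'k8s']
    env:
      - 'ENV=$_ENVR'
      - 'WEB_VERSION=$_WEB_VERSION'

  # Apply the rendered manifests. The kubectl builder needs to know
  # which cluster to talk to (illustrative values).
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'k8s/']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=europe-west2-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=houston-cluster'
```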

Since we are running Python here, the possibilities are endless. You could add any arbitrary Python code before parsing the templates. One of the functionalities I’m still toying with in my head is: for every feature branch, figure out who the developer was from the git commit, and change the Kubernetes namespace to their name. This way everyone gets their own playground and no one has to experience the CI/CD equivalent of having sand kicked in their face.
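As a purely hypothetical sketch of that idea (assuming the build still has access to the git history):

```python
import os
import re
import subprocess

# Who made the last commit? (%an = author name)
author = subprocess.check_output(
    ["git", "log", "-1", "--pretty=format:%an"], text=True
)

# Kubernetes namespaces must be RFC 1123 labels: lowercase
# alphanumerics and dashes, at most 63 characters.
namespace = re.sub(r"[^a-z0-9]+", "-", author.lower()).strip("-")[:63]

# Hand the namespace to the template renderer via the environment.
os.environ["ENV"] = namespace
```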

Summary

  • We write the Kubernetes files as Jinja templates
  • In Cloud Build we run a container that parses the templates using environment variables and overwrites the files
  • Finally, we run kubectl apply and enjoy the fruits of our labour

Disclaimer: While I am a DataSparQ employee and I am actively working on Houston, the views and opinions expressed in this work are entirely mine.
