
CI/CD with Less Fluff & More Awesome

Using Keel, Cloud Builder, PubSub, Helm, GitHub, and Kubernetes

Rik Nauta
Donna Legal
Feb 12, 2018

We reached a point in the development of our application Donna where deploying versions by hand was starting to become a bottleneck (YAY 🎉). So this weekend I decided to put a Continuous Deployment (CD) pipeline into place. Our app is deployed on Kubernetes (K8S, on GKE), but I am sure that parts of this guide can be useful outside of that as well. Let's start with some constraints:

I hate typical Jenkins or Spinnaker kinds of systems. They feel bloated, sluggish, and not well adjusted to the needs of a quickly iterating startup. I don't need rolling canaries, A/B testing, or blue-green deployments… I just need the latest working version running ASAP.

At the same time, I wasn't fully comfortable relying on third-party services like Wercker, Codeship, etc. Our customers are extremely security-focused, so we closed our infrastructure off from outside access for a reason. I don't want a "deploy container" API endpoint open to the public.

So in short, I wanted something nimble and "on-prem" that integrates nicely with the other tools we already love: K8S and Helm. Let's get started!

Step 1 - Building & Tagging

The first step in any CI/CD pipeline is to actually build the applications you want to deploy from source. We already had great multi-stage Dockerfiles, and since we love Google Cloud, choosing Container Builder seemed like a great option. Simply link it to GitHub, set up build triggers, and you're done!

(Actually, we ended up using cloudbuild.yaml configs anyway, since we wanted to do some smart caching & testing, but it really is that easy to start… and freemium!)

I set up two triggers. One looks for pushes on the master branch and produces a Docker image tagged as gcr.io/project-id/$REPO_NAME:latest. The other looks for pushed release tags and produces a Docker image tagged as gcr.io/project-id/$REPO_NAME:$TAG_NAME.
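Since we ended up with cloudbuild.yaml configs anyway, the tag-triggered build can be sketched roughly like this. This is a minimal sketch, not our exact config; Container Builder supplies the $PROJECT_ID, $REPO_NAME, and $TAG_NAME substitutions itself:

```yaml
# cloudbuild.yaml - minimal sketch of a tag-triggered build.
# $PROJECT_ID, $REPO_NAME, and $TAG_NAME are built-in substitutions
# filled in by Container Builder at build time.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$TAG_NAME', '.']
# Images listed here are pushed to the registry after a successful build.
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME:$TAG_NAME'
```

The master-branch trigger is the same shape with `latest` in place of `$TAG_NAME`.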

So by merging PRs on GitHub into master we automatically get a new latest image, and by creating a new release tag we get a new 1.1.4 image. Not bad for 10 minutes of configuring 🤯

It's worth mentioning that all our master branches are protected on GitHub. So only approved Pull Requests can make it into master.

Step 2 - Helm Deployments

The second step is to make sure you have Helm charts configured and installed for your applications. Don't worry about the auto-deploying bit yet; we'll get to that later. We already had this, but here's a quick overview.

We have two environments, staging and production. They are completely separate K8S workspaces, with additional NetworkPolicies and firewall rules to completely prevent any interaction between them.
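The kind of isolation we mean can be expressed with a default-deny NetworkPolicy per environment. This is an illustrative sketch rather than our exact policy; the namespace name is an assumption:

```yaml
# Default-deny ingress policy: selects every pod in the namespace
# (empty podSelector) and allows no incoming traffic until further
# policies explicitly whitelist it. Namespace name is illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```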

In each of these environments we have a Helm chart of our application installed. The staging environment uses the latest image tag for its images, and the production environment uses a specific release tag, like 1.1.4.
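In chart terms that boils down to the image values differing per environment. The file names and value keys here are hypothetical; your chart's layout may differ:

```yaml
# values-staging.yaml (hypothetical): always track the newest master build.
image:
  repository: gcr.io/project-id/our-app
  tag: latest

# values-production.yaml (hypothetical): pin a specific release.
# (Shown together for comparison; in practice these live in two files.)
# image:
#   repository: gcr.io/project-id/our-app
#   tag: "1.1.4"
```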

Thatā€™s all we need for now.

Step 3 - Installing Keel

After searching the interwebz for a few hours I decided to use Keel. It's nice and minimal whilst providing all the remaining puzzle pieces I needed. And it's written in Go… always a plus point 😍 Kontinuous also looked great, but I preferred to rely on as few self-hosted components as possible, which is why we used Google's Container Builder instead of relying on Kontinuous for that as well.

Keel comes with a Helm chart, so installing it is easy, but there are a few caveats.

First of all, make sure to install the Helm chart from the official repo https://github.com/keel-hq/keel, not the one in https://github.com/kubernetes/charts; it's out of date. There's an open issue about syncing upstream, but just make sure to check first.

Second of all, if you follow Google's recommendations and do NOT give your Kubernetes node pool any default permissions, you will need to set up Keel to use a ServiceAccount. It's only a small adjustment to the deployment.yaml file in the chart, something like this:
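Roughly like this, that is; the secret name and mount path below are assumptions, and the secret itself is created beforehand from a GCP service-account JSON key:

```yaml
# Additions to the chart's deployment.yaml (sketch). The Google client
# libraries pick up credentials from GOOGLE_APPLICATION_CREDENTIALS,
# which we point at a mounted Kubernetes secret.
spec:
  template:
    spec:
      containers:
        - name: keel
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /credentials/keel-service-account.json
          volumeMounts:
            - name: keel-credentials
              mountPath: /credentials
              readOnly: true
      volumes:
        - name: keel-credentials
          secret:
            secretName: keel-service-account
```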

To install the chart, make sure to pass the following values:
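As a sketch, the install looks something like this. The exact value keys differ between Keel chart versions, so double-check the chart's values.yaml first; the project ID, Slack token, and channel are placeholders:

```shell
# Install Keel with the Helm provider, GCR/PubSub trigger, and Slack
# notifications enabled. Value keys are assumptions; verify against
# the chart's values.yaml for your version.
helm upgrade --install keel ./chart/keel \
  --namespace keel \
  --set helmProvider.enabled="true" \
  --set gcr.enabled="true" \
  --set gcr.projectId="project-id" \
  --set slack.enabled="true" \
  --set slack.token="xoxb-REPLACE-ME" \
  --set slack.channel="general"
```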

Next, make sure to give the Keel service account PubSub edit permissions. Depending on whether you already have Cloud Builder pushing notifications to PubSub, you might also have to add PubSub publish permissions to the @cloudbuild IAM role, as explained in https://github.com/keel-hq/keel/issues/132
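Granting the edit permission can be done with gcloud; the service-account name here is illustrative:

```shell
# Give the (hypothetically named) Keel service account PubSub editor
# rights, so it can create/consume the subscriptions it needs.
gcloud projects add-iam-policy-binding project-id \
  --member serviceAccount:keel@project-id.iam.gserviceaccount.com \
  --role roles/pubsub.editor
```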

Step 4 - 🚀💥❤️

Keel is running in our infrastructure, so now for the actual cool bit… continuous delivery in 3 lines of code!

The only thing we need to add now is a few values when upgrading our application Helm charts. Let's start with the staging one. I want our staging environment to always run the latest master and to notify me on Slack when it has deployed a new version. So the following values are all I need to add:
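Keel's Helm provider reads a `keel:` section from the chart's values, so the addition looks roughly like this. The `image.repository`/`image.tag` paths assume that is where our chart keeps its image settings:

```yaml
# Staging values addition (sketch). "force" updates the release
# whenever a new image appears on the tracked tag, even if the tag
# string (latest) itself is unchanged.
keel:
  policy: force
  images:
    - repository: image.repository   # values path to the image name
      tag: image.tag                 # values path to the image tag
```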

Keel will now monitor PubSub for notifications of new container images and automatically force an update. It's super fast; usually I receive a Slack notification the moment Container Builder is done pushing the image.

For the production environment I wanted a bit more control for now. I only want to run images that have a release tag associated with them, and I want to formally approve a deployment before pushing it live, just in case I need to do some schema migrations etc. But even that is super easy in Keel…
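The production values look similar; a sketch under the assumption that Keel's semver policies and approval counts work as its docs describe (worth verifying the exact policy name against the Keel documentation for your version):

```yaml
# Production values addition (sketch). A "minor" semver policy tracks
# new minor/patch releases within the same major version, and
# approvals: 1 requires one "keel approve" before Keel deploys.
keel:
  policy: minor
  approvals: 1
  images:
    - repository: image.repository
      tag: image.tag
```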

Now Keel will watch for any new image that matches the major version of the already-deployed chart. Whenever it finds one, it will post a notification in Slack, and by replying with `keel approve` or `keel reject` in Slack I can deploy (or not deploy) the new image.

<drops the 🎤>

Conclusion

Sorry for my posts always getting super technical, but I hope they will help at least someone out there. If you have any questions, please comment; I'm happy to show/explain more. And if you found this useful… share some claps :-)


Co-founder and CEO of https://www.donna.legal. Excited as a puppy about anything tech… or AI… or Lego… or PUPPIES!