CI/CD with Less Fluff & More Awesome
Using Keel, Cloud Builder, PubSub, Helm, GitHub, and Kubernetes
We reached a point in the development of our application Donna where deploying versions by hand was starting to become a bottleneck (yay!). So this weekend I decided to put a Continuous Deployment (CD) pipeline into place. Our app is deployed on Kubernetes (K8s on GKE), but I am sure parts of this guide can be useful outside of that as well. Let's start with some constraints:
I hate typical Jenkins- or Spinnaker-style systems. They feel bloated, sluggish and not well adjusted to the needs of a quickly iterating startup. I don't need rolling canaries, A/B testing or blue-green deployments... I just need the latest working version running ASAP.
At the same time, I wasn't fully comfortable relying on third-party services like Wercker, Codeship etc. Our customers are extremely security focused, so we closed our infrastructure off from outside access for a reason. I don't want a "deploy container" API endpoint open to the public.
So in short, I wanted something nimble and "on-prem" that integrates nicely with the tools we already love: K8s and Helm. Let's get started!
Step 1 - Building & Tagging
The first step in any CI/CD pipeline is to actually build, from source, the applications you want to deploy. We already had great multi-stage Dockerfiles, and since we love Google Cloud, choosing Container Builder seemed like a great option. Simply link it to GitHub, set up build triggers and you're done!
(Actually we ended up using cloudbuild.yaml configs anyway, since we wanted to do some smart caching & testing, but it really is that easy to start... and freemium!)
I set up two triggers: one looks for pushes to the master branch and produces a Docker image tagged gcr.io/project-id/$REPO_NAME:latest. The other looks for pushes of Git tags matching a wildcard pattern and produces a Docker image tagged gcr.io/project-id/$REPO_NAME:$TAG_NAME.
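The tag-triggered build can be sketched as a minimal cloudbuild.yaml. This is not our exact config (ours adds caching and tests, as mentioned below); $PROJECT_ID, $REPO_NAME and $TAG_NAME are built-in Container Builder substitutions:

```yaml
# Minimal cloudbuild.yaml sketch: build the image, then push it
# to the Container Registry.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$TAG_NAME', '.']

# Images listed here are pushed to the registry once the build succeeds.
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME:$TAG_NAME'
```

The master-branch trigger is the same idea with `latest` in place of `$TAG_NAME`.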
So by merging PRs on GitHub into master we automatically get a new latest image, and by creating a new release tag we get a new 1.1.4 image. Not bad for 10 minutes of configuring!
It's worth mentioning that all our master branches are protected on GitHub, so only approved Pull Requests can make it into master.
Step 2 - Helm Deployments
The second step is to make sure you have Helm charts configured and installed for your applications. Don't worry about the auto-deploying bit yet; we'll get to that later. We already had this, but here's a quick overview.
We have two environments, staging and production. They are completely separate K8s workspaces, with additional NetworkPolicies and firewall rules to rule out any interaction between them.
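As a rough illustration of that isolation (a generic sketch, not our exact policy), a default-deny NetworkPolicy per namespace looks something like this:

```yaml
# Default-deny ingress: the empty podSelector matches every pod in the
# namespace, and with no ingress rules listed, all inbound traffic is
# blocked unless another policy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```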
In each of these environments we have a Helm chart of our application installed. The staging environment uses the latest image tag for its images, and the production environment uses a specific release tag, like 1.1.4.
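In values terms, the two environments only differ in the image tag. A sketch (the repository name here is illustrative):

```yaml
# staging values
image:
  repository: gcr.io/project-id/donna
  tag: latest

# production values would instead pin a release:
#   tag: "1.1.4"
```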
Thatās all we need for now.
Step 3 - Installing Keel
After searching the interwebz for a few hours, I decided to use Keel. It's nice and minimal while providing all the remaining puzzle pieces I needed. And it's written in Go... always a plus! Kontinuous also looked great, but I preferred to use as few self-hosted components as possible, which is why we used Google's Container Builder instead of relying on Kontinuous for that as well.
Keel comes with a Helm chart, so installing it is easy, but there are a few caveats.
First of all, make sure to install the Helm chart from the official repo https://github.com/keel-hq/keel, not the one in https://github.com/kubernetes/charts, which is out of date. There's an open issue about syncing upstream, but just make sure to check first.
Second of all, if you follow Google's recommendation and do NOT give your Kubernetes node pool any default permissions, you will need to set Keel up with a service account. It's only a small adjustment to the deployment.yaml file in the chart, something like this:
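The adjustment boils down to mounting a service-account key from a secret and pointing GOOGLE_APPLICATION_CREDENTIALS at it. A sketch of the relevant part of the chart's deployment.yaml (the secret name and mount path are placeholders):

```yaml
# Excerpt: only the env, volumeMounts and volumes entries matter here.
spec:
  template:
    spec:
      containers:
        - name: keel
          env:
            # Standard variable picked up by Google client libraries
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/gcloud/service-account.json
          volumeMounts:
            - name: gcloud-credentials
              mountPath: /etc/gcloud
              readOnly: true
      volumes:
        - name: gcloud-credentials
          secret:
            secretName: keel-service-account  # create this secret beforehand
```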
To install the chart, make sure to pass the following values:
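Roughly, the install looks like this. The value names below are my best reconstruction of the keel-hq chart's options at the time (Helm provider and GCR/PubSub trigger enabled, plus Slack); double-check them against the chart's values.yaml, and the project ID and Slack token are placeholders:

```shell
# Install Keel with the Helm provider, the GCR PubSub trigger
# and Slack notifications enabled.
helm upgrade --install keel ./chart/keel \
  --namespace keel \
  --set helmProvider.enabled=true \
  --set gcr.enabled=true \
  --set gcr.projectId=project-id \
  --set slack.enabled=true \
  --set slack.token=xoxb-placeholder
```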
Next, make sure to grant the Keel service account PubSub editor permissions. Depending on whether you already have Cloud Builder pushing notifications to PubSub, you might also have to add PubSub publisher permissions to the @cloudbuild IAM role, as explained here: https://github.com/keel-hq/keel/issues/132
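For reference, granting those permissions with gcloud looks roughly like this (the service-account names are placeholders; the cloudbuild one is your project's <project-number>@cloudbuild.gserviceaccount.com account):

```shell
# Let the Keel service account create and consume PubSub subscriptions
gcloud projects add-iam-policy-binding project-id \
  --member serviceAccount:keel@project-id.iam.gserviceaccount.com \
  --role roles/pubsub.editor

# Let Container Builder publish build notifications to PubSub
gcloud projects add-iam-policy-binding project-id \
  --member serviceAccount:123456789@cloudbuild.gserviceaccount.com \
  --role roles/pubsub.publisher
```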
Step 4 - 🚀🔥❤️
Keel is running in our infrastructure, so now for the actual cool bit... continuous delivery in three lines of code!
The only thing we need to add now is a few values when upgrading our application Helm charts. Let's start with the staging one. I want our staging environment to always run the latest master and to notify me on Slack when it has deployed a new version. So the following values are all I need to add:
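With Keel's Helm provider, this lives under a keel: key in the chart's values. Something like the following (the images mapping tells Keel which values paths hold the repository and tag; the Slack channel name is a placeholder, and the exact keys are worth verifying against Keel's docs):

```yaml
keel:
  # "force" redeploys even though the tag itself (latest) never changes
  policy: force
  # react to Container Builder push events arriving via PubSub
  trigger: pubsub
  images:
    - repository: image.repository  # values path of the image repo
      tag: image.tag                # values path of the image tag
  notificationChannels:
    - deployments                   # Slack channel to post to
```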
Keel will now monitor PubSub for notifications of new container images and automatically force an update. It's super fast; I usually receive a Slack notification the moment Cloud Builder is done pushing the image.
For the production environment I wanted a bit more control, for now. I only want to run images that have a release tag associated with them, and I want to formally approve a deployment before pushing it live, just in case I need to do some schema migrations etc. But even that is super easy in Keel...
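The production chart's values then only differ in the policy and the approval requirement. Roughly (again, confirm the key names against Keel's documentation):

```yaml
keel:
  # follow new minor/patch release tags within the current major version
  policy: minor
  trigger: pubsub
  # require one manual approval (given via Slack) before updating
  approvals: 1
  images:
    - repository: image.repository
      tag: image.tag
```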
Now Keel will watch for any new image that matches the major version of the already-deployed chart. Whenever it finds one, it will post a notification in Slack, and by replying with `keel approve` or `keel reject` in Slack I can deploy (or not deploy) the new image.
<drops the mic>
Conclusion
Sorry for my posts always getting super technical, but I hope they help at least someone out there. If you have any questions, please comment; I'm happy to show/explain more. And if you found this useful... share some claps :-)