Continuous Delivery Pipeline for Kubernetes using Spinnaker

VAIBHAV THAKUR
Mar 12, 2019

Kubernetes is now the de facto standard for container orchestration. With more and more organizations adopting Kubernetes, it is essential that we get our fundamental ops infrastructure in place before any migration. In my previous post we learnt about monitoring our workloads. This post will focus on pushing out new releases of the application to our Kubernetes cluster, i.e. Continuous Delivery.

Pre-requisites

  1. A running Kubernetes cluster (GKE is used for the purposes of this blog).
  2. A Spinnaker setup with Jenkins CI enabled.
  3. GitHub webhooks enabled for Jenkins jobs.

Strategy Overview

  1. GitHub + Jenkins: CI system to build the Docker image and push it to the registry.
  2. Docker Hub: registry to store Docker images.
  3. Spinnaker: CD system to enable automatic deployments to the staging environment and supervised deployments to production.

Continuous Integration System

Although this post is about the CD system using Spinnaker, I want to briefly go over the CI pipeline so that the bigger picture is clear.

  1. Whenever the master branch of the Git repo gets a merge, a Jenkins job is triggered via a GitHub webhook. The commit message for the master merge should include the updated version of the app and whether it is a Kubernetes Deploy action or a Patch action.
  2. The Jenkins job checks out the repo, builds the code, builds the Docker image according to the Dockerfile, and pushes it to Docker Hub.
  3. It then triggers a Spinnaker pipeline and sends a trigger.properties file as a build artifact. This properties file contains crucial information which is consumed by Spinnaker and will be explained later in this post (a sketch of what it might contain follows this list).
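Based on the variables referenced later in this post (ACTION and TAG), the properties file might look something like the sketch below; the exact keys and values are assumptions for illustration, not the author's verbatim artifact.

    # trigger.properties -- hypothetical sketch of the Jenkins build artifact.
    # ACTION decides which Spinnaker stage runs (Deploy vs. Patch),
    # TAG is the Docker image tag that was just pushed.
    ACTION=DEPLOY
    TAG=1.0.3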

Continuous Delivery System

This is the crucial part. Spinnaker offers a ton of options for Kubernetes deployments. You can either consume manifests from a GCS or S3 bucket, or you can provide the manifest as text in the pipeline itself.

Consuming manifests from GCS or S3 buckets involves more moving parts, and since this is an introductory blog it is beyond our scope right now. That said, I use that approach extensively, and it is best in scenarios where you need to deploy a large number of microservices running in Kubernetes, because such pipelines are highly templatized and reusable.

Today, we will deploy a sample Nginx service which reads the app version from a pom.xml file and renders it in the browser. The application code and Dockerfile can be found here. The part where index.html is updated can be seen in the gist below (it is essentially what the Jenkins job does).
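Since the original gist is not reproduced here, the following is a minimal sketch of what that build step could look like; the myorg/nginx-sample image name and the APP_VERSION placeholder in index.html are assumptions.

    # Hypothetical Jenkins build step: extract the version from pom.xml,
    # bake it into index.html, then build and push the image.
    VERSION=$(grep -oPm1 '(?<=<version>)[^<]+' pom.xml)    # first <version> tag
    sed -i "s/APP_VERSION/${VERSION}/" index.html          # assumed placeholder
    docker build -t myorg/nginx-sample:"${VERSION}" .      # assumed image name
    docker push myorg/nginx-sample:"${VERSION}"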

The manifest for the Nginx deployment and service is below:
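The embedded gist is likewise not reproduced here, so below is a minimal sketch of what such a manifest could look like; the nginx-sample name, labels, and image are assumptions, so adjust them to match your own setup.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-sample
      labels:
        app: nginx-sample
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx-sample
      template:
        metadata:
          labels:
            app: nginx-sample
        spec:
          containers:
            - name: nginx-sample
              image: myorg/nginx-sample:1.0.3  # tag injected by the pipeline
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-sample
    spec:
      type: LoadBalancer
      selector:
        app: nginx-sample
      ports:
        - port: 80
          targetPort: 80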

Steps to Set Up the Pipeline:

  1. Create a new application under the Applications tab and add your name and email to it. All other fields can be left blank.
  2. Create a new project under Spinnaker and add your application to it. You can also add your staging and production Kubernetes clusters to it.

  3. Now, under the application section, add your pipeline. Make sure the trigger stage is set to Jenkins and you are consuming the artifacts appropriately. You can use this pipeline JSON (don't forget to modify it according to your credentials and endpoints); a sketch of its trigger section follows this list.

  4. Once you add it, the pipeline will look something like this:
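For orientation, the Jenkins trigger section of such a pipeline JSON typically looks like the sketch below; the master and job names are placeholders, not the values from the author's pipeline.

    "triggers": [
      {
        "enabled": true,
        "type": "jenkins",
        "master": "my-jenkins-master",
        "job": "nginx-sample-build",
        "propertyFile": "trigger.properties"
      }
    ]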

Deep diving into the Pipeline:

  1. Configuration: This is the stage where you mention the Jenkins endpoint, the job name, and the expected artifact from the job. In our case, that is trigger.properties.
  2. Deploy (Manifest): The trigger.properties file has an ACTION variable, based on which we decide whether to trigger a new deployment for the new image tag or to patch an existing deployment. The properties file also tells us which version to deploy or patch with; it is set in the TAG variable.

  3. Patch (Manifest): Similar to the Deploy stage, this stage checks the same variable, and if it evaluates to “PATCH”, the current deployment is patched. It should be noted that in both these stages the Kubernetes cluster being used is a staging cluster. Therefore, our deployments/patches for the staging environment are automatic. (A sketch of the stage expressions follows this list.)

  4. Manual Judgment: This is a very important stage. It is here that you decide whether or not you want to promote the build currently running in the staging cluster to the production cluster. This should be approved only once the staging build has been thoroughly tested by the various stakeholders.

  5. Deploy (Manifest) and Patch (Manifest): The final stages in both paths are similar to their counterparts in the pre-approval stages. The only difference is that the cluster under Account is a production Kubernetes cluster.
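One way to wire up this branching (an assumption about this particular pipeline, though it is standard Spinnaker usage) is via each stage's Conditional on Expression option, which can read the Jenkins property file through SpEL:

    # Deploy (Manifest) stage -- Conditional on Expression:
    ${trigger.properties['ACTION'] == 'DEPLOY'}
    # Patch (Manifest) stage -- Conditional on Expression:
    ${trigger.properties['ACTION'] == 'PATCH'}
    # Image tag inside the manifest, resolved from the property file:
    myorg/nginx-sample:${trigger.properties['TAG']}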

Now you are ready to push out releases for your app. Once triggered, the pipeline will look like this:

The stages in grey have been skipped because the ACTION variable did not evaluate to “PATCH”. Once you deploy, you can view the current version, as well as previous versions, under the Infrastructure section.

Note:

  1. You can emit as much data as you like to the properties file and later consume it in a Spinnaker pipeline.
  2. You can also trigger other Jenkins jobs from Spinnaker and then consume their artifacts in the subsequent stages.
  3. Spinnaker is a very powerful tool, and you can perform all kinds of actions such as rollbacks, scaling, etc. right from the console.
  4. Not only deployments but all kinds of Kubernetes resources can be managed using Spinnaker.
  5. Spinnaker provides excellent integration with Slack/HipChat/email for pipeline notifications.

Feel free to reach out with any questions and feedback. You can find my other blogs here:

  1. Production grade Kubernetes Monitoring using Prometheus
  2. Highly Available and Scalable Elasticsearch on Kubernetes
  3. Scaling MongoDB on Kubernetes
