CI/CD using CircleCI and Google Kubernetes Engine (GKE)

Adam Mausenbaum
12 min read · Jul 11, 2018


Once upon a time, building and releasing web applications was easy. We used to write some code in PHP, FTP it to our favourite webhost, and then we were done. Things worked. We could be up and running in an hour.

It seems that twenty years later, building and releasing web apps has become much harder. These days we put our microservices in containers, we build complex continuous integration and delivery pipelines, and we orchestrate our containers using complex container management platforms like Kubernetes. Oh, and it’s all in the cloud (I am a Buzzword Bingo champion).

For reasons, I repeatedly have to set up such architectures. There are an insane number of moving parts to get this all working:

  1. How do we get our application into a container?
  2. How do we automate our build and test execution (CI)?
  3. How do we publish our Docker images?
  4. How do we set up Kubernetes?
  5. How do we automate deployment to Kubernetes?
  6. How do we make our apps publicly accessible?

To make life easier for you, I’ve documented how to build a complete CI/CD pipeline from scratch that solves the above questions.

For Kubernetes, I’m a big fan of Google Cloud’s product, GKE, as it’s incredibly simple to set up, easy to keep up-to-date with new Kubernetes releases, and has some nice extra control plane / management tooling (e.g. Stackdriver for log aggregation just works). We’ll also be using CircleCI as it is simple, free, and flexible.

Side note: if you’re not married to Github + Circle + Kubernetes, you should try out the latest versions of Gitlab, as they offer a complete stack that includes project management (à la JIRA), version control, and CI/CD on Kubernetes (see Auto Devops). Or, even better, just make a static site and host it on Github Pages.

In this article, we will setup a basic CI/CD architecture that has the following:

  1. A very simple containerised Node.js application
  2. Automated tests for all commits and pull requests using CircleCI (specifically the version 2.0 API)
  3. Automated deployment to Google Kubernetes Engine (GKE) for all commits to master

At the end, we should have a pipeline in which every commit is built and tested, and every merge to master is automatically deployed to GKE.

TL;DR

If you just want to see how it’s done, take a look at the example repo (github.com/mousetree/node-circle-gke), in particular the .circleci/config.yml, Dockerfile, and k8s.yml files.

Before you start

Before getting started, you should set up the following accounts:

  1. Github (including pushing the sample app below to your account)
  2. CircleCI
  3. Google Cloud Platform ($300 free credit for new signups)

A sample application

For your convenience, I’ve created a very simple Hello World application in Node.js. It contains both the before and after states. To get started, clone the repo, check out the getting-started branch, and install the dependencies:

git clone git@github.com:mousetree/node-circle-gke.git
cd node-circle-gke
git checkout getting-started
npm install

You can then run the application using npm start and then visit http://localhost:3000 to verify that it’s working. You should also run the tests using npm test.

If you’d like to follow along step-by-step, you should fork the above repository and use the getting-started branch, or just start a fresh repo on your account and use the code in that branch as a sample (tip: you can check out that branch, delete the .git directory, and then push to your new repo).

Setting up Continuous Integration using CircleCI 2.0

Login to your CircleCI account and follow the instructions to start building your project there. In particular, the instructions will require you to create the folder and file .circleci/config.yml in your project. You can use the following configuration:
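A minimal config along these lines (the cache key prefix and workflow name are illustrative; the repo’s actual file may differ slightly):

version: 2
jobs:
  build:
    docker:
      # Node.js v10.x base image; all subsequent steps run inside this container
      - image: circleci/node:10
    steps:
      - checkout
      # Restore node_modules from a previous build with the same package.json
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: npm install
      # Save node_modules, keyed on the checksum of package.json
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run: npm test

workflows:
  version: 2
  build_and_test:
    jobs:
      - build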

The above config does the following:

  • Starts up a container using a Node.js v10.x base image. All subsequent steps run inside that container
  • Checks out the code from Github
  • Installs our dependencies using npm install
  • Caches our dependencies (node_modules) and restores them on future builds. The cache key is based on the checksum of package.json, so any change to package.json will invalidate the cache.
  • Runs our tests using npm test

Side note: You’ll also notice a strange new section at the bottom of the file called workflows. While this is not strictly necessary at this point, it’s a new feature in the Circle 2.0 spec that will be useful when we get to the deployment part of the pipeline. At the time of writing, I also found there was a lack of good end-to-end examples for version 2.0.

Once you’ve pushed that config, your first build should run and complete successfully.

The build should now run for every commit you make (to any branch) — including Pull Requests! You can test this by making a PR; it should show you the status of the tests. You can also update your repo settings to ensure that all tests have passed before a PR can be merged.

Recap — what do we have so far? We now have a fully automated continuous integration pipeline that runs our tests on every commit and pull request.

Creating a Dockerfile

Let’s start by getting our app into a Docker container. In the root of the project, create a file named Dockerfile with the following content:
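A minimal Dockerfile along these lines (the base image tag and working directory are my assumptions):

FROM node:10

WORKDIR /usr/src/app

# Copy only the dependency manifests first so this layer can be cached
COPY package*.json ./
RUN npm install

# Copy the application code; changes here don't invalidate the install layer
COPY . .

EXPOSE 3000
CMD ["npm", "start"]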

The nice thing about this Dockerfile is how it leverages Docker’s layer caching. It first copies in just the package*.json files and then runs npm install. We now have a Docker layer containing just our dependencies. As these don’t change very often, subsequent builds will skip this step. Only then do we copy in our code: every time the code changes and we rebuild, Docker only has to copy in the new code (and not reinstall the dependencies).

We can then run the Docker build like so:

docker build -t node-circle-gke .

If you run the above command again, you should notice it builds almost instantly, thanks to the layer caching described above. Once built, we can then run our image using:

docker run -p 3000:3000 node-circle-gke

If you visit http://localhost:3000 you should see the familiar ‘Hello World’ text.

Side note: for a more complex app, such as one that requires a database, I would typically use a docker-compose.yml file that mounts the src/ directory as a volume. This allows us to completely avoid rebuilds, which is especially useful when using live-reload tools such as nodemon.
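As an illustration, such a compose file might look like this (the service name and paths are hypothetical):

version: "3"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      # Mount the source directory so code changes appear without a rebuild
      - ./src:/usr/src/app/src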

Setting up Google Kubernetes Engine

We’re now ready to start thinking about deploying our application somewhere. If you’re a fan of overengineering your problem, then Kubernetes is definitely the right solution (check out now.sh if you prefer simplicity).

The three simple steps to set up Kubernetes on GCP are:

  1. Login to your Google Cloud Console
  2. Visit the Kubernetes Engine page and click ‘Create Cluster’ — use all the defaults (or use the CLI, as sketched after this list)
  3. Visit the Container Registry page and create a new registry — this is where we will push our docker images
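For reference, rough CLI equivalents of steps 2 and 3 might look like this (the cluster name and zone match the examples used later in this article):

# Create a default cluster in the given zone
gcloud container clusters create cluster-1 --zone europe-west3-a

# Enable the Container Registry API; the registry itself appears on first push
gcloud services enable containerregistry.googleapis.com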

Oh, and you’ll also want to create a Service Account. We will use the Service Account to allow CircleCI to securely interact with Kubernetes Engine and the Container Registry. You can create it from the IAM & admin > Service accounts page in the Cloud Console. You can use any name — e.g. myapp.

To ensure that the Service Account has permission to interact with our cloud resources, it should be granted at least the following roles (a CLI equivalent is sketched after the note below):

  • Kubernetes > Kubernetes Engine Developer
  • Storage > Storage Admin

Note: make sure you select ‘Furnish a new private key’ and choose the key type as JSON. Save this file somewhere as we’ll use it again in a minute.
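For reference, the same setup could be scripted with the gcloud CLI. This is a sketch under a few assumptions: the role IDs are my mapping of the console names above, and my-project-id / myapp are placeholders:

# Create the service account (name is an example)
gcloud iam service-accounts create myapp

# Grant the two roles listed above
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:myapp@my-project-id.iam.gserviceaccount.com" \
  --role="roles/container.developer"
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:myapp@my-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# Download a JSON private key (the CLI version of 'Furnish a new private key')
gcloud iam service-accounts keys create gcloud-service-key.json \
  --iam-account=myapp@my-project-id.iam.gserviceaccount.com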

Setting up Continuous Deployment to GKE using CircleCI

I’m not going to get into what Kubernetes is and how it works; there are many great resources online for that. Instead, let’s focus on deploying our application to Kubernetes using the standard objects.

Ensuring we’re actually deploying the right version

Before we start deploying our application, it’s always a good idea to have some debug information within your app that tells us which version we’re running and when it was built. Let’s take a quick detour to set this up (and learn how Docker build args and environment variables work).

First, let’s quickly update our application code to read in two environment variables and then return them to the user:
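Sketched here assuming an Express app (the repo’s actual code may differ); the APP_COMMIT_REF and APP_BUILD_DATE names match the Dockerfile changes below:

const express = require('express');
const app = express();

// Build-time metadata baked in via Docker ARG/ENV (see Dockerfile changes below)
const commitRef = process.env.APP_COMMIT_REF || 'unknown';
const buildDate = process.env.APP_BUILD_DATE || 'unknown';

app.get('/', (req, res) => {
  res.send(`Hello World! Commit: ${commitRef}, built at: ${buildDate}`);
});

app.listen(3000, () => console.log('Listening on port 3000'));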

Our application now reads these environment variables, but where are they actually set? While Kubernetes can supply run-time environment variables (from Secrets, ConfigMaps, etc.) in the deployment configuration file, in this case we want to set some build-time environment variables. Docker has a nice feature that allows us to do this: build arguments.

Arguments, as the name implies, allow us to pass in parameters to our docker build. We’ll then take those arguments and store them in environment variables which our application can then access. Let’s add the args and env vars to our Dockerfile:

ARG COMMIT_REF
ARG BUILD_DATE
ENV APP_COMMIT_REF=${COMMIT_REF} \
    APP_BUILD_DATE=${BUILD_DATE}

Your Dockerfile should now look something like:
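Putting it together with the earlier sketch (base image and WORKDIR remain assumptions):

FROM node:10

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

# Build-time metadata, supplied via --build-arg and exposed to the app as env vars
ARG COMMIT_REF
ARG BUILD_DATE
ENV APP_COMMIT_REF=${COMMIT_REF} \
    APP_BUILD_DATE=${BUILD_DATE}

EXPOSE 3000
CMD ["npm", "start"]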

You can test this on your local machine by running something like:

docker build -t node-circle-gke \
  --build-arg COMMIT_REF=23sdfsdf23 \
  --build-arg BUILD_DATE=2018-07-01 .
docker run -p 3000:3000 node-circle-gke

Now that that’s out of the way, it’s time to get to the actual deployment to Kubernetes.

How does CircleCI authenticate with Kubernetes?

Given that CircleCI needs to communicate with Kubernetes, we need to give it a secure way of authenticating. Earlier, we setup a Service Account and saved the private key as a JSON file. Let’s open that JSON file and copy the contents to the clipboard.

Next, open your Project Settings in CircleCI and navigate to the Environment Variables page. Create an environment variable called GCLOUD_SERVICE_KEY and paste in the contents of that JSON file.

We’ll come back to this variable shortly — it’s used by our Google Cloud CLI tools to authenticate with the platform.

Adding the deployment step to the Circle config

And now for the main attraction — the full Circle configuration file. We’re going to add a new job called deploy_to_staging and then add it into our workflows section:
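Assembled from the commands walked through below, the file would look roughly like this. Job, step, and workflow names are illustrative, the build job from the test config is elided, and I’ve assumed a setup_remote_docker step, which CircleCI 2.0 requires for running docker build inside a job:

version: 2
jobs:
  build:
    # ...the test job from earlier, unchanged...
  deploy_to_staging:
    docker:
      # Official image with the gcloud CLI (and kubectl) preinstalled
      - image: google/cloud-sdk
    environment:
      - PROJECT_NAME: "my-app"
      - GOOGLE_PROJECT_ID: "xxx"
      - GOOGLE_COMPUTE_ZONE: "europe-west3-a"
      - GOOGLE_CLUSTER_NAME: "cluster-1"
    steps:
      - checkout
      # Gives the job a remote Docker engine to run docker build/push against
      - setup_remote_docker
      - run:
          name: Setup Google Cloud SDK and Kubernetes credentials
          command: |
            apt-get install -qq -y gettext
            echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
            gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
            gcloud --quiet container clusters get-credentials ${GOOGLE_CLUSTER_NAME}
      - run:
          name: Build and push Docker image
          command: |
            docker build \
              --build-arg COMMIT_REF=${CIRCLE_SHA1} \
              --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
              -t ${PROJECT_NAME} .
            docker tag ${PROJECT_NAME} eu.gcr.io/${GOOGLE_PROJECT_ID}/${PROJECT_NAME}:${CIRCLE_SHA1}
            gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
            docker push eu.gcr.io/${GOOGLE_PROJECT_ID}/${PROJECT_NAME}:${CIRCLE_SHA1}
      - run:
          name: Deploy to Kubernetes
          command: |
            envsubst < ${HOME}/project/k8s.yml > ${HOME}/patched_k8s.yml
            kubectl apply -f ${HOME}/patched_k8s.yml
            kubectl rollout status deployment/${PROJECT_NAME}

workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build
      - deploy_to_staging:
          requires:
            - build
          filters:
            branches:
              only: master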

There’s a lot of new content in the above YAML file. Let’s tackle it step by step.

Setting up the Google SDK

The first thing we see in the deploy_to_staging job is that we’re using the official Google Cloud SDK Docker image. This image contains the gcloud CLI and nearly everything else we need to communicate with our container registry and Kubernetes cluster.

The next thing we see is a few environment variables we need to set:

- PROJECT_NAME: "my-app"
- GOOGLE_PROJECT_ID: "xxx"
- GOOGLE_COMPUTE_ZONE: "europe-west3-a"
- GOOGLE_CLUSTER_NAME: "cluster-1"

These are referenced later in the file when we describe the instance we want to connect to. The project name can be anything, but keep in mind it’s used as the name of nearly all our resources (pods, deployment, images) — just take a look through config.yml and k8s.yml to see where it’s referenced.

Speaking of yml files, you’ll notice that they contain template placeholders such as ${PROJECT_NAME}. While templating isn’t natively supported by CircleCI or Kubernetes, we can achieve the same objective by doing it ourselves. We’ll touch on this again in a few minutes, but for now all you need to know is that we’re installing the gettext package, which includes a useful tool called envsubst:

apt-get install -qq -y gettext

The next thing we do is take the contents of the GCLOUD_SERVICE_KEY environment variable we set earlier and dump it into a JSON file. We’ll then use the gcloud CLI to activate that service account using this file:

echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json

Next, we make sure we’re using the correct project and zone (in case we have multiple) and, most importantly, we get the credentials needed to communicate with the Kubernetes cluster using kubectl. The get-credentials command will download and set up the necessary kubeconfig files:

gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
gcloud --quiet container clusters get-credentials ${GOOGLE_CLUSTER_NAME}

Building and pushing our Docker image

The next piece of code is what actually builds our Docker image. It’s using the standard docker build -t my-app . syntax but has one extra addition: the use of build arguments.

As discussed above, Docker allows us to pass in arguments at build time using the ARG directive. In this case we’re passing in CIRCLE_SHA1, which contains the git commit hash (set automatically by CircleCI; outside Circle you could substitute the output of git rev-parse HEAD), and the current date. These are stored in environment variables which are then read by our application.

docker build \
  --build-arg COMMIT_REF=${CIRCLE_SHA1} \
  --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
  -t ${PROJECT_NAME} .

Once our image has been built, we can push it to our remote container registry on Google Cloud. To do this we tag the image with the full path it should be stored under in the registry — we use the git commit hash as the version number rather than latest or any other manually maintained version. We then log in to our remote registry by piping the access token provided by the gcloud CLI into the docker login command. We can then push our image:

docker tag ${PROJECT_NAME} eu.gcr.io/${GOOGLE_PROJECT_ID}/${PROJECT_NAME}:${CIRCLE_SHA1}
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
docker push eu.gcr.io/${GOOGLE_PROJECT_ID}/${PROJECT_NAME}:${CIRCLE_SHA1}

You can visit your Container Registry > Images page on the Google Cloud Console, where you should now see the newly pushed image tagged with the commit hash.

Deploying our image to Kubernetes

Our image is now stored in the cloud, so we can get to the actual deployment.

To do this, we will create a new file in the root of our directory called k8s.yml. The file should look something like:
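Based on the description that follows, roughly like this (the apiVersion values, labels, and external port 80 are my assumptions):

apiVersion: v1
kind: Service
metadata:
  name: ${PROJECT_NAME}
spec:
  type: LoadBalancer
  selector:
    app: ${PROJECT_NAME}
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${PROJECT_NAME}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ${PROJECT_NAME}
  template:
    metadata:
      labels:
        app: ${PROJECT_NAME}
    spec:
      containers:
        - name: ${PROJECT_NAME}
          # envsubst fills this in with the exact image we just pushed
          image: eu.gcr.io/${GOOGLE_PROJECT_ID}/${PROJECT_NAME}:${CIRCLE_SHA1}
          ports:
            - containerPort: 3000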

It’s a pretty standard Kubernetes configuration file — however, it does contain both the service and deployment objects (you might not have been aware that we can have multiple objects in a file separated by ---). I won’t go into too much detail on the contents of the file, but the important thing to know is that we will run 2 replicas of our image and create an external LoadBalancer service to route traffic into our cluster and to those 2 pods.

As you can see, the file has all the template placeholder variables we mentioned earlier. The first step is to replace those placeholders with their actual environment variables. The most important variable we’re replacing is spec.template.spec.containers[0].image, which should contain the URL for our image on the registry — e.g. eu.gcr.io/my-project/my-app:a78f061c524e9357eed33f489fad900077f6cb5f

To do this we use the envsubst utility that we installed above with the gettext package (I love this tool; I used to do this manually with loops and sed):

envsubst < ${HOME}/project/k8s.yml > ${HOME}/patched_k8s.yml

We now have a new YAML file with all the values filled in for us. If you SSH’d into the Circle build and viewed the file (click the ‘Rerun job with SSH’ button at the top right of the build page — it’s super useful), you’d see the k8s.yml above with every placeholder replaced by a concrete value.

We can now tell Kubernetes to ‘apply’ this YAML. The declarative nature of Kubernetes is one of my favourite parts of all this craziness. All we need to do is make sure this file is representative of our desired state (2 replicas of this specific image), and Kubernetes takes care of the rest. Magic.

kubectl apply -f ${HOME}/patched_k8s.yml

The above command triggers the reconciliation to the new deployment config, but it just returns something like ‘deployment applied’. If we ended our build steps here, we wouldn’t know whether the deployment actually succeeded. To ensure that our build fails if the deployment fails (and to see how long rollouts take), we add the final line:

kubectl rollout status deployment/${PROJECT_NAME}

We should now have our app running on Kubernetes. If you don’t yet know the IP address of the app, you can login to your Cloud Console and open up Kubernetes Engine > Workloads > my-app and at the bottom of the page you should have the endpoint IP listed.

Additionally, if you’d like to view the logs, it’s super easy: just click ‘Container Logs’ in the top section of that page (I can recall the days I spent trying to get ELK working on Azure Container Services — in vain).

One last thing about workflows

The last thing to point out is the workflows section at the bottom of the configuration file. You’ll notice that we’ve now added the deploy_to_staging job and set it to only run for commits to master. The upshot is that the tests run on every commit (i.e. branches for pull requests) while the deployment only happens when code hits master.

Conclusion

If you have a single Node.js app, or even an app + API, this is most likely overkill. You’d be better suited using vanilla Docker Compose. However, if you have dozens of applications and services this might be useful.

If you’d like to see any follow ups to this guide please let me know in the comments. Perhaps a follow-on that supports a 3-tier architecture of a React application, Node.js API and a Postgres database? Maybe something on how we can integrate end-to-end tests into this workflow?
