Continuous Delivery in a microservice infrastructure with Google Container Engine, Docker and Travis

Jacopo Daeli
Google Cloud - Community
Jan 13, 2017

Nowadays Continuous Integration (CI) has become a de facto standard for modern tech companies and startups. There are several options for setting up your CI environment, from running your own Jenkins cluster to third-party services like Travis or CircleCI.
Continuous Delivery (CD) goes one step further by ensuring that every build can also be deployed into a production-like (or staging) environment, and then passes integration tests in that environment.

Introduction

Before you start setting up your CD pipeline, there are a few things you should think about: needs, time, effort and money. In this article I want to show you how I recently built the pipeline for a project I am currently working on, using Docker, Google Container Engine (GKE), GitHub and Travis. If you need something simple, functional and effective, don’t have a lot of time, but do have a bit of money 💰, you should definitely consider this design.

Travis integrates very well with GitHub and runs tests in isolation. Since it is a managed service, you do not have to maintain your own infrastructure. GKE is a cluster manager and orchestration system for running your Docker containers, powered by Kubernetes (K8S), and is also a managed service on Google Cloud Platform (GCP). GCP also offers a fully managed private Docker registry, Google Container Registry (GCR), which makes it easy to store and access your private Docker images. If you do not want to use GitHub, Travis and GKE, you can easily adapt what is said in this article to other managed services or your own self-hosted solution.

About this article

Briefly, this article explains how Travis can test and deploy applications as Docker containers on Google Container Engine. It assumes the code to deploy lives in private GitHub repositories, so a Travis Pro account (plans start at 69 USD) is needed.

The concept

The pipeline has four environments: development, testing, staging and production. Development refers to the developer’s local machine, testing is the CI environment (Travis CI in our use case), and staging is where I first deploy new features that need to be tested manually before going to production. Staging and production each run on top of their own Kubernetes cluster. I also use a simplified version of git-flow: feature, development, master (production) and hotfix branches only. No release branches.

The idea is simple. New features are always developed on feature branches, and you commit and push your code to the git server frequently. Every time you push to GitHub, Travis tests the new code. If the tests pass, you can open and merge (if no conflicts are found) a pull request to development. This time Travis runs the tests on the merged code, and if they pass, it also builds the Docker image, pushes it to the registry and deploys it to the K8S staging cluster.

Deploying to production only requires opening and merging a pull request from development to master. This behaves exactly like merging a pull request from a feature branch into development. Hotfixes work much the same way, except that you can also open a pull request directly from your hotfix branch to master. With this design, the development branch always contains the code deployed in the staging environment, while master contains the code deployed in production. Depending on your needs, you can design your pipeline without a development branch, but this is definitely a topic for another article.
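
To make the flow concrete, a typical feature cycle could look roughly like this (the branch and commit names are hypothetical):

# Start a feature from the development branch.
git checkout development
git pull origin development
git checkout -b feature/user-signup   # hypothetical feature branch

# ...work, then commit and push frequently; Travis tests every push.
git add .
git commit -m "Add user signup endpoint"
git push -u origin feature/user-signup

# When the tests pass, open a pull request to development on GitHub.
# Merging it triggers the build and the deploy to the staging cluster.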

Google Cloud Platform setup

If you haven’t done it before, install the gcloud SDK on your machine. Then create two projects using the GCP console, one for staging and one for production. You can use the same project for both staging and production, but I encourage you to keep them separate.

GCP Console — Create a new project

After the projects have been created, create one Kubernetes cluster and one service account key in JSON format for each project. The next step is to encode the JSON keys in base64 format. To do so, on Linux or OS X, type:

base64 < your-service-account.json
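
For reference, the same per-project setup can also be scripted with gcloud; every name below (project ID, cluster, zone, service account) is a hypothetical example:

# Hypothetical names; repeat the same steps for the production project.
gcloud config set project my-app-staging
gcloud container clusters create staging-cluster --zone europe-west1-b --num-nodes 3
gcloud iam service-accounts create travis-deployer
gcloud iam service-accounts keys create travis-deployer.json \
    --iam-account travis-deployer@my-app-staging.iam.gserviceaccount.com
# Remember to grant the account the IAM roles it needs to push to GCR
# and update deployments on GKE.

# On Linux, GNU base64 wraps its output every 76 characters; -w 0 keeps
# the encoded key on a single line, which is easier to paste into Travis.
base64 -w 0 < travis-deployer.json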

Travis setup

On the Travis website, go to your user or organisation profile by clicking your user name in the upper-right corner, and flick the switch next to the repository you want to deploy, as shown in the image below.

travis-ci.com/profile

Environment Variables

While most of the environment variables can be set directly in the .travis.yml file, secret envars need to be set via the Travis project settings interface. Create two new envars in your Travis project named GCLOUD_SERVICE_KEY_STG and GCLOUD_SERVICE_KEY_PRD, using the base64-encoded service account JSON keys generated earlier as values.
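
If you prefer the command line, the Travis CLI (gem install travis) can set the same variables; the key file names below are hypothetical:

# --pro targets travis-ci.com, used for private repositories.
travis env set GCLOUD_SERVICE_KEY_STG "$(base64 -w 0 < staging-key.json)" --pro
travis env set GCLOUD_SERVICE_KEY_PRD "$(base64 -w 0 < production-key.json)" --pro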

The .travis.yml file

The next step is to add a .travis.yml file to your repo. The following .travis.yml fits a very basic Node.js app. The first part of the file provisions the test environment and runs the tests: Travis builds the test environment on its container-based infrastructure (fast boot time, sudo commands are not available) and installs Node.js and Docker. It also installs gcloud (the GCP command line tool) and kubectl (the Kubernetes command line tool) if the tests pass. You can do much more with Travis; read their documentation for more information.

The rest of the file specifies what to do when the tests pass and the current branch is development or master: run the corresponding deploy-{env}.sh script.
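
As a rough sketch, a condensed version of such a file could look like the following; the Node version and the deploy script names (deploy-staging.sh, deploy-production.sh) are assumptions:

language: node_js
node_js:
  - "6"
services:
  - docker

script:
  - npm test

after_success:
  - if [ "$TRAVIS_PULL_REQUEST" = "false" ] && [ "$TRAVIS_BRANCH" = "development" ]; then bash ./deploy-staging.sh; fi
  - if [ "$TRAVIS_PULL_REQUEST" = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then bash ./deploy-production.sh; fi

The TRAVIS_PULL_REQUEST guard makes sure deployments only happen on real pushes to the branch, not on pull request builds.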

The deploy-{env}.sh files

The deployment scripts build the Docker image from the pushed code, authenticate with GCP, push the newly created image out to the registry, and then update the K8S deployment to use the new image.
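
Here is a minimal sketch of what deploy-staging.sh could look like; the project ID, cluster, zone and deployment names are hypothetical, and the production script differs only in the names and the key variable (GCLOUD_SERVICE_KEY_PRD):

#!/bin/bash
# deploy-staging.sh -- minimal sketch; all names are hypothetical.
set -e

PROJECT_ID=my-app-staging
CLUSTER=staging-cluster
ZONE=europe-west1-b
IMAGE=gcr.io/${PROJECT_ID}/my-app:${TRAVIS_COMMIT}

# Install gcloud and kubectl (this only runs once the tests have passed).
export CLOUDSDK_CORE_DISABLE_PROMPTS=1
curl -sSL https://sdk.cloud.google.com | bash > /dev/null
source "$HOME/google-cloud-sdk/path.bash.inc"
gcloud components install kubectl --quiet

# Build the Docker image from the code Travis just tested.
docker build -t "$IMAGE" .

# Authenticate with GCP using the base64-encoded service account key.
echo "$GCLOUD_SERVICE_KEY_STG" | base64 --decode > gcloud-service-key.json
gcloud auth activate-service-account --key-file gcloud-service-key.json
gcloud config set project "$PROJECT_ID"

# Push the image to Google Container Registry.
gcloud docker -- push "$IMAGE"

# Point kubectl at the staging cluster and roll out the new image
# (the deployment and container are both assumed to be called my-app).
gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE"
kubectl set image deployment/my-app my-app="$IMAGE"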

Other considerations

In this article the code is tested in the Travis Node.js runtime, and once the tests pass, a Docker image is built, pushed out to the registry and deployed. This way I can add third-party dependencies like databases, message brokers and so forth to the runtime, for example using the .travis.yml before_install instruction, and easily connect my application or service to them. Another option is to build the Docker image first and push it to GCR only if the tests pass; in this case Travis tests the container rather than purely the code. With the second option you would run the third-party dependencies as containers too, connect your application container to them using Docker links, and use Docker Compose to configure and bootstrap all the containers.
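
For the second option, a docker-compose.yml along these lines could wire the application container to an example dependency (Redis here is purely illustrative, as is the REDIS_URL variable):

# docker-compose.yml -- sketch for testing the built container
# against a real dependency instead of the raw code.
version: "2"
services:
  app:
    build: .
    command: npm test
    environment:
      - REDIS_URL=redis://redis:6379   # hypothetical env var
    depends_on:
      - redis
  redis:
    image: redis:3
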
In a microservices-based scenario, you can potentially use a combination of both options, but again, this is a subject for another article.
