Automatic deployment of multiple Docker containers to Google Container Engine using Travis

Rodrigo Pombo
Hexacta Engineering
Apr 19, 2017

We have a web app composed of three moving parts: a SPA front end, a Node.js API and a database. The app follows the structure described in a previous post. Our goal is to have the latest development version always deployed to Google Cloud without any effort from the developer.

You can see a working version of this setup in the hexacta/sanata repository. In fact, all the code snippets from this post are taken from that repo.

Folder Structure

We are going to:

  1. Containerize our front end and back end with Docker
  2. Configure NGINX to serve the front end and proxy requests to the back end
  3. Initialize a new Google Container Engine project
  4. Configure Travis to run our deployment script
  5. Make Travis build and push our container images to Google Container Registry
  6. Create all the needed Kubernetes resources
  7. Make the deployment script update the Kubernetes resources to use the new container images created by each build

Docker

Don’t worry, you don’t even need to install Docker locally; Travis and GKE will do all the work for us. We need to create Dockerfiles for Node and NGINX. A Dockerfile is a text file that contains the commands used to assemble a particular Docker image.

For the Node image we just need to define the base image (node:alpine), copy our server files from our build folder (./server/dist) to the image, expose the port used by our app, and specify the start command:
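Something like this (a sketch; the port number and the entry file name are assumptions, adjust them to your app):

# Dockerfile for the Node API -- a sketch; port and entry file are assumptions
FROM node:alpine

# copy the built server code into the image
COPY ./server/dist /usr/src/app
WORKDIR /usr/src/app

# port our API listens on (assumed to be 3000)
EXPOSE 3000

# start command (entry file name assumed)
CMD ["node", "index.js"]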

For NGINX, we need a config file that includes a proxy to the node container for all requests starting with /api/:
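A sketch of such a config, assuming the Node container is reachable as server on port 3000 (both are placeholders that match the Kubernetes service defined later):

# default.conf -- a sketch; the upstream host name and port are assumptions
server {
    listen 80;

    # serve the static front end
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    # proxy API requests to the Node container;
    # "server" resolves to the Node Kubernetes service created later
    location /api/ {
        proxy_pass http://server:3000;
    }
}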

Then we create the Dockerfile with the base image (nginx:alpine), copy our minified, transpiled, bundled files from ./web/build to the image’s /usr/share/nginx/html folder, and then copy the NGINX config file. In this case there is no need to expose ports or specify a start command because all of that is already specified in the base image.
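A sketch of that Dockerfile (the location of the NGINX config file in the repo is an assumption):

# Dockerfile for the NGINX image -- a sketch
FROM nginx:alpine

# the minified, transpiled, bundled front end
COPY ./web/build /usr/share/nginx/html

# our config, replacing the default one shipped with the base image
COPY ./web/nginx.conf /etc/nginx/conf.d/default.conf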

Google Container Engine (GKE)

We need a Google Cloud project. If you are not registered yet, you can sign up and get access to a free tier plus $300 of credit to spend during the first year. GKE uses Kubernetes to do a lot of powerful stuff with containers; we are just going to use it to run our Docker containers.

Go to the Cloud Console and create a new project. Write down the project ID because we are going to use it later; in my case the project ID is sanata-prod.

Once you have your new project selected, go to Container Engine in the menu, wait for it to finish setting up, and create a container cluster. I will use sanata-cluster as the name and us-central1-b as the zone, and leave the default values for the rest of the fields.
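If you prefer the command line, roughly the same can be done with the gcloud CLI (assuming the Google Cloud SDK is installed locally):

$ gcloud config set project sanata-prod
$ gcloud container clusters create sanata-cluster --zone us-central1-b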

Travis

Configuring Travis to run your npm test and build scripts is very simple, so we are going to start from a .travis.yml configuration file that already includes installing the app dependencies, running tests and building the app (generating ./server/dist and ./web/build):
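A sketch of that starting point (the Node version and the npm script names are assumptions):

language: node_js
node_js:
  - "7"
install:
  - npm install
script:
  - npm test
  # generates ./server/dist and ./web/build
  - npm run build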

From there we need to add Docker, install the Google Cloud SDK, and install kubectl (we will use it later), so we add:

Add docker, gcloud and kubectl to .travis.yml
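Something along these lines (a sketch; the SDK installation commands are a common Travis pattern, not necessarily the exact ones in the repo):

sudo: required
services:
  - docker
before_install:
  # install the Google Cloud SDK without interactive prompts
  - export CLOUDSDK_CORE_DISABLE_PROMPTS=1
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source "$HOME/google-cloud-sdk/path.bash.inc"
  # install kubectl, used later to update the deployments
  - gcloud components install kubectl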

We also add a deploy step using a script provider and a bunch of environment variables that we will use in the deploy script. The final .travis.yml should look like this:
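Putting the pieces together, a sketch of the final file (the global environment variable names are placeholders used by the deploy script sketched below):

language: node_js
node_js:
  - "7"
sudo: required
services:
  - docker
env:
  global:
    # non-secret values used by deploy-prod.sh (names are placeholders)
    - PROJECT_ID_PROD=sanata-prod
    - CLUSTER_NAME_PROD=sanata-cluster
    - CLOUDSDK_COMPUTE_ZONE=us-central1-b
before_install:
  - export CLOUDSDK_CORE_DISABLE_PROMPTS=1
  - curl https://sdk.cloud.google.com | bash > /dev/null
  - source "$HOME/google-cloud-sdk/path.bash.inc"
  - gcloud components install kubectl
install:
  - npm install
script:
  - npm test
  - npm run build
deploy:
  provider: script
  script: bash ./deploy-prod.sh
  skip_cleanup: true
  on:
    branch: master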

Service Account Credentials

We will need to create a service account so Travis can push images to GCR (and update the Kubernetes deployments later). This can be done from the credentials console (a rough CLI equivalent is sketched after these steps):

  1. Create credentials
  2. Service account key
  3. New service account
  4. Service account name: travis
  5. Roles: Storage Admin and Container Engine Developer
  6. Key type: JSON
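Roughly the same can be done with the gcloud CLI (a sketch; the role IDs below are my mapping of the console role names, and the key file name is arbitrary):

$ gcloud iam service-accounts create travis --display-name travis
$ gcloud projects add-iam-policy-binding sanata-prod \
    --member serviceAccount:travis@sanata-prod.iam.gserviceaccount.com \
    --role roles/storage.admin
$ gcloud projects add-iam-policy-binding sanata-prod \
    --member serviceAccount:travis@sanata-prod.iam.gserviceaccount.com \
    --role roles/container.developer
$ gcloud iam service-accounts keys create travis-key.json \
    --iam-account travis@sanata-prod.iam.gserviceaccount.com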

This gives us a JSON file, which we need to base64-encode:

base64 --wrap=0 [your-credentials.json]

And then add the base64-encoded content as an environment variable in the Travis settings, with the name GCLOUD_SERVICE_KEY_PROD (make sure to leave “display value in build log” off).

Now we can write the deploy-prod.sh script.

Google Container Registry (GCR)

Google Container Registry is a private Docker image registry included in Google Cloud Platform. Travis will build and push new Docker images to this registry on each deployment. We can script the build and push steps like this:
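A sketch of deploy-prod.sh up to this point: it authenticates gcloud with the service account key from the previous section, then builds and pushes both images. Tagging the images with the commit SHA is an assumption (any tag that is unique per build works):

#!/bin/bash
set -e

# authenticate gcloud with the service account key stored in Travis settings
echo $GCLOUD_SERVICE_KEY_PROD | base64 --decode > ${HOME}/gcloud-service-key.json
gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
gcloud config set project $PROJECT_ID_PROD

# build both images, tagged with the current commit so every build is unique
docker build -t gcr.io/$PROJECT_ID_PROD/server:$TRAVIS_COMMIT -f ./server/Dockerfile .
docker build -t gcr.io/$PROJECT_ID_PROD/web:$TRAVIS_COMMIT -f ./web/Dockerfile .

# push to Google Container Registry using gcloud's credentials
gcloud docker -- push gcr.io/$PROJECT_ID_PROD/server:$TRAVIS_COMMIT
gcloud docker -- push gcr.io/$PROJECT_ID_PROD/web:$TRAVIS_COMMIT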

Kubernetes

Until now we are only pushing Docker images to the registry; in order to run them we need to create some Kubernetes pods and services. We will have one pod for each container: one for MongoDB, one for Node and one for NGINX. Pods are usually declared using deployments. We will also have one service for each pod, and the service for the NGINX pod will be of type LoadBalancer because it is our app’s entry point and it will expose an external IP.

Our Node container needs some secret environment variables that should be specified when the container is started. Kubernetes provides another resource for this: secrets. In our case there are three variables, and we need to encode their values in base64:

echo -n "value" | base64

We can define all the resources together in one yaml file:
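A trimmed sketch of such a file; all names, ports and secret keys are placeholders, and the MongoDB deployment and service (omitted here) follow the same pattern as the Node ones:

# sketch of ./gcloud/sanata.yaml -- names, ports and keys are placeholders
apiVersion: v1
kind: Secret
metadata:
  name: server-secrets
type: Opaque
data:
  # base64-encoded values, e.g. `echo -n "value" | base64`
  some-api-key: cmVwbGFjZS1tZQ==
  # (the other two secret values go here the same way)
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          # initial tag is a placeholder; the deploy script points the
          # deployment at a freshly pushed tag on every build
          image: gcr.io/sanata-prod/server:latest
          ports:
            - containerPort: 3000
          env:
            - name: SOME_API_KEY
              valueFrom:
                secretKeyRef:
                  name: server-secrets
                  key: some-api-key
---
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  selector:
    app: server
  ports:
    - port: 3000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: gcr.io/sanata-prod/web:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  # LoadBalancer exposes an external IP -- this is the app entry point
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
# (the MongoDB deployment and service are omitted from this sketch)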

Let’s create these resources using the gcloud CLI:

$ gcloud init
$ gcloud components install kubectl
$ gcloud config set project sanata-prod
$ gcloud container clusters get-credentials sanata-cluster --zone us-central1-b
$ gcloud auth application-default login
# remember to replace the secret values in ./gcloud/sanata.yaml
$ kubectl create -f ./gcloud
$ kubectl proxy

We now have the app running on GKE. The last command runs a proxy to the cluster, so you should be able to see all the resource details in a user-friendly GUI at http://127.0.0.1:8001/ui.

Now that all the Kubernetes resources are created, we need to add one last step to the deploy-prod.sh script: updating the deployments every time we push new images to the registry. The full script should now be:
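A sketch of the complete script (the deployment and container names must match the ones in the Kubernetes file; the commit SHA tag is still an assumption):

#!/bin/bash
set -e

# authenticate with the service account key and target the right cluster
echo $GCLOUD_SERVICE_KEY_PROD | base64 --decode > ${HOME}/gcloud-service-key.json
gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
gcloud config set project $PROJECT_ID_PROD
gcloud container clusters get-credentials $CLUSTER_NAME_PROD --zone $CLOUDSDK_COMPUTE_ZONE

# build and push new images tagged with the current commit
docker build -t gcr.io/$PROJECT_ID_PROD/server:$TRAVIS_COMMIT -f ./server/Dockerfile .
docker build -t gcr.io/$PROJECT_ID_PROD/web:$TRAVIS_COMMIT -f ./web/Dockerfile .
gcloud docker -- push gcr.io/$PROJECT_ID_PROD/server:$TRAVIS_COMMIT
gcloud docker -- push gcr.io/$PROJECT_ID_PROD/web:$TRAVIS_COMMIT

# point the deployments at the new images (triggers a rolling update)
kubectl set image deployment/server server=gcr.io/$PROJECT_ID_PROD/server:$TRAVIS_COMMIT
kubectl set image deployment/web web=gcr.io/$PROJECT_ID_PROD/web:$TRAVIS_COMMIT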

If all of this sounds confusing it’s because it is, at least it is for me, but that’s probably because we are using a very powerful tool to perform a pretty simple task.

Summary

There are several improvements that could be made to this setup: making the database persistent, separate deployments for testing and production, load balancing. But it may be helpful to use this as a reference while you build your own setup for the first time.

For more information on the deployment scripts, check this post by Jacopo Daeli.

Thanks for reading.

I build things at Hexacta.

Have an idea or project we can help with? Write us.
