Continuous Delivery in Google Cloud Platform — Cloud Build with Compute Engine

Ricardo Mendes
Google Cloud - Community
7 min read · Dec 13, 2018

This article is the 2nd part of a series written to tackle Continuous Delivery in the Google Cloud Platform. In the 1st article, Google App Engine was the leading actor. In this one, Google Compute Engine — or simply GCE — will come into the scene. GCE is the Infrastructure as a Service component of GCP, which is built on Google’s global infrastructure and allows its users to launch virtual machines on demand.

As we have already seen, GAE is a great fit for continuously delivering projects. However, the platform has some limitations: the code must be written in specific languages/versions, especially if your team aims to use the Standard Environment, which is less expensive (more info at https://cloud.google.com/appengine/docs/the-appengine-environments). A project designed to run on GAE will surely be fully compatible, but you may run into trouble when trying to migrate legacy code to the platform, even if choosing the Flexible Environment.

After a pros-and-cons analysis, your team may conclude that GAE is not the best option for a project. Maybe the project uses some language or tool that is not supported by the platform; perhaps the team wants more control or customizability over the execution environment; or maybe they just want to migrate their workload from existing servers to GCP to take immediate advantage of scaling the app without changing the codebase much. For all of these cases, consider Google Compute Engine.

What we will see in the following lines is how to use a set of tools available in GCP that allows your team to set up a development + automated build + continuous delivery pipeline using GCE. Also, by using an example Angular app, it will be possible to compare this solution with the one that uses GAE, as described in the 1st article. The key point here is that the application must be shipped as a Docker image. If you can do that in your project, you have a great chance of automating the whole deployment process as described in this article.

Before proceeding, make sure you have created a GCP Project and installed Google Cloud SDK in your machine if you wish to run the examples. Don’t forget to run gcloud auth login and gcloud config set project <your-project-id> for proper gcloud CLI usage.

For the sake of simplicity, the Angular app will be served by Nginx. So, create your new Angular app (follow the steps described here) and cd <app-name>. In order to pack the app plus the Nginx server as a Docker image, let’s create a file named Dockerfile in its root folder with the following content:
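The original Dockerfile isn't reproduced here, so below is a minimal sketch consistent with the description that follows; `<app-name>` is a placeholder for your app's name (Angular CLI writes the production build to `dist/<app-name>` by default):

```dockerfile
# Stage 1: build the Angular app with a NodeJS 8 image
FROM node:8 AS build-stage
WORKDIR /app
COPY . .
RUN npm install && npm run build -- --prod

# Stage 2: serve the build output with Nginx
FROM nginx:1.15
RUN rm -rf /usr/share/nginx/html/*
COPY --from=build-stage /app/dist/<app-name> /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```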

Basically, it’s a multi-stage container build. The lines from 1 to 5 (stage 1) use a NodeJS 8 image just to build the app. The lines from 7 to 11 (stage 2) copy the result of the build process (a set of HTML, CSS, and JS files) to an Nginx image and replace the default server’s home page with the app’s content. Finally, line 12 starts the server.

Let’s also create a second file, .dockerignore, in the same folder, with the content below. It will prevent Docker from copying unnecessary files into the built images, decreasing their size.
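The original file contents aren't shown here; a typical .dockerignore for an Angular project would look something like this (entries are assumptions, adjust to your project):

```
node_modules
dist
.git
```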

If you trigger a docker build -t <app-name> . command followed by docker run -d --name <app-name>-container -p 80:80 <app-name>, and point your browser to http://localhost, you’ll see the application running with Docker.

What we have seen so far is just Angular and Docker setup stuff. From now on GCP will be in action, making things more interesting!

In a typical non-automated or semi-automated deployment scenario, an Engineer could (1) set up a CI tool such as Jenkins to monitor a Git repository for new pushes, (2) trigger the build process when new content is pushed, and (3) find a mystical way to update all VMs that are running the Docker containers so they run the new version. Steps 1 and 2 are simple. Step 3, maybe not... At this point, GCP offers a much more sophisticated approach: instead of creating VMs, installing Docker on them, and manually managing the containers, what about creating VMs that exclusively run a specific container and automatically update themselves when new versions of the images are published? Sounds good, right? So, let’s see how to do it.

First of all, we need to add one more small file to the project root folder, named cloudbuild.yaml, as follows:
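The file's contents aren't reproduced here; a minimal Cloud Build config for this scenario would look like the sketch below, where `<app-name>` is a placeholder for your image name:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/<app-name>', '.']
images:
- 'gcr.io/$PROJECT_ID/<app-name>'
```

The `images` field tells Cloud Build to push the built image to Container Registry after the build steps succeed.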

This file will be used by Cloud Build to build the Docker image and create a repository to store it in the Container Registry, using the specified name — at build time, Cloud Build automatically replaces $PROJECT_ID with your project ID.

Oops… a new GCP component was mentioned in the previous paragraph: Container Registry. According to the official documentation, it’s more than a private container repository: it’s a single place for your team to manage container images, perform vulnerability analysis, and decide who can access what with fine-grained access control.

Coming back to the code: after adding the file, run gcloud builds submit --config cloudbuild.yaml . in the project’s root folder, wait a little bit, and access https://console.cloud.google.com/gcr/images/<your-project-id> after the build has finished. You’ll see the new repository there (similar to the picture below). This manual step is required only once. Also, copy the full repository name; we will need it later ;).

Docker image in Container Registry — Google Cloud Platform

We're almost done with the Docker image building. We just need to set up a trigger that will start the build process automatically every time new code is pushed to a monitored Git repository, and this is a Source Repositories job, as we saw in the first article of this series. It will be set up exactly as it was for delivery with GAE. Bear in mind the main difference will be the result of Cloud Build processing: for GAE, it publishes a new version of the app; for the current process, it only pushes an image to the Container Registry.

Now, let’s create some VMs to run the containers. In the GCP Navigation Menu, click Compute Engine > Instance Templates. Click on CREATE INSTANCE TEMPLATE. In the next screen, select Deploy a container image to this VM Instance, type (or paste) the full Container image — aka Repository — name, select Allow HTTP traffic, and click on Create.

In the Navigation Menu, select Compute Engine > Instance Groups. Click on Create instance group. In the next screen, select the Instance template you have just created and set the Minimum number of instances to 3. Click on Create.
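The same two steps can also be done from the command line. The sketch below uses gcloud with placeholder names and an assumed zone; adjust them to your project:

```shell
# Create an instance template whose VMs exclusively run the container.
# The http-server tag matches the default "Allow HTTP traffic" firewall rule.
gcloud compute instance-templates create-with-container <app-name>-template \
  --container-image gcr.io/<your-project-id>/<app-name> \
  --tags http-server

# Create a managed instance group of 3 VMs from that template.
gcloud compute instance-groups managed create <app-name>-group \
  --template <app-name>-template \
  --size 3 \
  --zone us-central1-a
```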

After the Instance Group has been created, select Compute Engine > VM Instances in the Navigation menu. Click on the instances’ External IP links to make sure the application is running on each of them. It may take a while for the home page to load the first time…

Finally, grant the Compute Instance Admin (v1) and Service Account User roles to the <your-project-number>@cloudbuild.gserviceaccount.com Service Account (Navigation menu > IAM & admin > IAM).
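If you prefer the command line, the same grants can be made with gcloud; `<your-project-id>` and `<your-project-number>` are placeholders:

```shell
# Allow Cloud Build to manage Compute Engine instances...
gcloud projects add-iam-policy-binding <your-project-id> \
  --member serviceAccount:<your-project-number>@cloudbuild.gserviceaccount.com \
  --role roles/compute.instanceAdmin.v1

# ...and to act as the service account attached to the VMs.
gcloud projects add-iam-policy-binding <your-project-id> \
  --member serviceAccount:<your-project-number>@cloudbuild.gserviceaccount.com \
  --role roles/iam.serviceAccountUser
```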

So, what if we push a new application version to the Git repository? Which steps do we need to repeat in order to have it running in production? Just restart the VMs! There are many ways to do it in GCP, including publishing a message to a Pub/Sub topic to invoke a Cloud Function that restarts the VMs, or using advanced options for updating managed instance groups (https://cloud.google.com/compute/docs/instance-groups/updating-managed-instance-groups). But, to show a simple example, let’s just add the following lines to cloudbuild.yaml:
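The exact lines aren't reproduced here; a sketch of such a step, appended to the `steps` list of cloudbuild.yaml, might look like this, where `<instance-group-name>` and `<zone>` are placeholders for the group and zone you created above:

```yaml
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - compute
  - instance-groups
  - managed
  - rolling-action
  - restart
  - <instance-group-name>
  - --zone=<zone>
```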

They include a new step in our build process that runs the gcloud compute instance-groups managed rolling-action restart command. The instances belonging to the group we created a few steps before will be restarted after the build, one by one, in a process that ensures there will always be machines available at any given time. It may take some minutes depending on how many instances the group has, but it works perfectly. To see it in action, change something in your code (the title in app.component.ts, for example) and push the new code to a Git repository that is monitored by Cloud Build. Wait a few minutes and refresh the HTML pages served by each instance (external IPs may change after the restart).

Well, that’s pretty much it for this topic. As demonstrated here, setting up a CI environment for GCE is usually more complex than for GAE, but it is a valid option if your project requirements don’t fit GAE. And sure, this is one solution, but there are others.

The picture below shows the main GCP components mentioned in the article:

Architecture for Continuous Delivery in Google Cloud Platform > Cloud Build with Compute Engine

The sample code is available on GitHub: https://github.com/ricardolsmendes/gcp-cloudbuild-gce-angular. Feel free to fork it and play with it.

Hope it helps!

This is the 2nd of a 3-article series on Continuous Delivery in Google Cloud Platform:

App Engine | Compute Engine | Kubernetes Engine
