Continuous Delivery in Google Cloud Platform — Cloud Build with Compute Engine

This article is the 2nd part of a series written to tackle Continuous Delivery in Google Cloud Platform. In the 1st article, Google App Engine was the main actor. In this one, Google Compute Engine, or GCE, comes into the scene. GCE is the Infrastructure as a Service component of GCP: built on Google's global infrastructure, it allows users to launch virtual machines on demand.

As we have already seen, GAE fits perfectly for continuously delivered projects. However, the platform has some limitations: the code must be written in specific languages/versions, especially if your team aims to use the Standard Environment, which is less expensive. A project designed from the start to run on GAE will surely be fully compatible, but you may run into trouble trying to migrate legacy code to the platform, even with the Flexible Environment.

After a pros and cons analysis, your team may conclude that GAE is not the best option for a project. Maybe the project uses some language or tool that is not supported by the platform; maybe they want more control or customizability over the execution environment; maybe they just want to migrate their workload from existing servers to GCP and take immediate advantage of scaling the app up without changing the codebase too much. For all of these cases, consider Google Compute Engine.

What we will see in the next lines is how to use a set of tools available in GCP that allows your team to set up a "development + (automated) build + (continuous) delivery" pipeline using GCE. Also, using an example Angular app, we will be able to compare this solution with the GAE-based one proposed in the 1st article. The key point here is that the application must be shipped as a Docker image. If you can do that in your real project, you have a great chance of automating the whole deployment process as described in this article.

Before proceeding, make sure you have created a GCP Project and installed Google Cloud SDK in your machine if you wish to run the examples. Don’t forget to run gcloud auth login and gcloud config set project <your-project-id> before using other gcloud CLI tools.

For the sake of simplicity, the Angular app will be served by Nginx. So, create your new Angular app (follow the steps described here) and cd <app-name>. In order to pack the app plus the Nginx server as a Docker image, let’s create a file named Dockerfile in its root folder, with the following content:
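A sketch of such a Dockerfile, laid out to match the line numbers described in the next paragraph. The node:8 and nginx:1.15 tags and the dist output path are assumptions; depending on your Angular CLI version, the build may land in dist/<app-name> instead of dist, so adjust the COPY path accordingly.

```dockerfile
FROM node:8 as build-stage
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build -- --prod

FROM nginx:1.15
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY --from=build-stage /app/dist .
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```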

Basically, it's a multi-stage Docker build. Lines 1 to 5 (stage 1) use a NodeJS 8 image just to build the app. Lines 7 to 11 (stage 2) copy the result of the build process, a set of HTML, CSS and JS files, to an Nginx image, replacing the default server's home page with the app's content. Finally, line 12 starts the server.

Let's also create a second file named .dockerignore in the same folder, with the following content. This file simply tells Docker that the dist and node_modules folders shouldn't be copied into the build context, decreasing build time and image size.
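The file can be as small as two lines:

```
dist
node_modules
```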

If you run docker build -t <app-name> . followed by docker run -d --name <app-name>-container -p 80:80 <app-name>, and point your browser to http://localhost, you'll see the application running with Docker.

What we have seen so far is just Angular and Docker setup stuff. From now on GCP will be in action, making things more interesting — trust me!

In a typical non-automated or semi-automated deployment scenario, an engineer would (1) set up a CI tool such as Jenkins to monitor a Git repository for new pushes, (2) trigger the build process when new content is pushed and (3) find some mystical way to update all the VMs running the Docker containers to the new version. Steps 1 and 2 are simple. Step 3, maybe not... At this point GCP offers a much more sophisticated approach: instead of creating VMs, installing Docker on them and manually managing the containers, what about creating VMs that exclusively run a specific Docker container and automatically update themselves when new versions of the image are published? Sounds good, right? So, let's see how to do it.

First of all, we need to add one more small file to the project root folder, named cloudbuild.yaml, as follows:
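A minimal cloudbuild.yaml along these lines; <app-name> is a placeholder for your own image name:

```yaml
steps:
# Build the Docker image using the Dockerfile in the repository root
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/<app-name>', '.']
# Images listed here are pushed to the Container Registry after the steps run
images:
- 'gcr.io/$PROJECT_ID/<app-name>'
```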

This file will be used by Cloud Build to build the Docker image and create a repository in the Container Registry to store it, under the specified name. At build time, Cloud Build automatically replaces $PROJECT_ID with your project ID.

Oops... a new GCP component was mentioned in the previous paragraph: Container Registry. According to the official documentation, it's more than a private Docker repository: it's a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control.

Coming back to the code: after adding the file, run gcloud builds submit --config cloudbuild.yaml . in the project's root folder, wait a little bit, and open the Container Registry page in the GCP Console after the build has finished. You'll see the new repository there (similar to the picture below). This manual step is required only once. Also, copy the full repository name; we will need it later ;).

Docker image in Container Registry — Google Cloud Platform

We're almost done with the Docker image building. We just need to set up a trigger to start the build process automatically every time new code is pushed to a monitored Git repository, and this is a Source Repositories job, as we saw in the first article of this series. It is set up exactly as it was for delivery with GAE. Keep in mind that the main difference is the result of the Cloud Build processing: for GAE, it publishes a new version of the app; here, it only pushes an image to the Container Registry.

Now, let's create some VMs to run the containers. In the GCP Navigation menu, select Compute Engine > Instance templates and click on CREATE INSTANCE TEMPLATE. In the next screen, check Deploy a container image to this VM instance, type (or paste) the full Container image (aka repository) name, check Allow HTTP traffic and click on Create.
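If you prefer the CLI, the same template can be sketched with gcloud. The names are placeholders, and the http-server tag is an assumption: it is the network tag the Allow HTTP traffic checkbox applies, which the default-allow-http firewall rule matches.

```shell
gcloud compute instance-templates create-with-container <app-name>-template \
    --container-image gcr.io/<your-project-id>/<app-name> \
    --tags http-server
```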

In the Navigation menu, select Compute Engine > Instance groups. Click on Create instance group. In the next screen, select the Instance template that you have just created and set the Minimum number of instances to "3". Click on Create.
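The equivalent gcloud sketch, assuming the template above; the group name and zone are placeholders you should adapt:

```shell
gcloud compute instance-groups managed create <app-name>-group \
    --template <app-name>-template \
    --size 3 \
    --zone us-central1-a
```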

After the Instance group has been created, select Compute Engine > VM instances in the Navigation menu. Click on the instances' External IP links to make sure the application is running on each of them. It may take a while to return the home page for the first time...

Finally, grant the Compute Instance Admin (beta) and Service Account User roles to the <your-project-number> Cloud Build service account (Navigation menu > IAM & admin > IAM). This is what allows Cloud Build to restart the VMs, as we will see next.
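A sketch of the same grants with gcloud, assuming the default Cloud Build service account (<project-number>@cloudbuild.gserviceaccount.com); the role IDs shown are the ones I understand to correspond to Compute Instance Admin (beta) and Service Account User:

```shell
# Look up the project number for the Cloud Build service account name
PROJECT_NUMBER=$(gcloud projects describe <your-project-id> \
    --format='value(projectNumber)')

gcloud projects add-iam-policy-binding <your-project-id> \
    --member "serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
    --role roles/compute.instanceAdmin

gcloud projects add-iam-policy-binding <your-project-id> \
    --member "serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
    --role roles/iam.serviceAccountUser
```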

So, what if we push a new version of the application to the Git repository? Which steps do we need to repeat in order to have it running in production? Just restart the VMs. Really! There are many ways to do it in GCP, such as publishing a message to a Pub/Sub topic that invokes a Cloud Function which restarts the VMs, or using the advanced options for updating managed instance groups. But, to show a simple example, let's just add the following lines to cloudbuild.yaml:
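A sketch of that extra step, appended to the steps list; the group name and zone are placeholders that must match the instance group created earlier:

```yaml
# Restart the managed instance group so the VMs pull the fresh image
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instance-groups', 'managed', 'rolling-action',
         'restart', '<app-name>-group', '--zone', 'us-central1-a']
```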

They include a new step in our build process that runs the gcloud compute instance-groups managed rolling-action restart command. The instances belonging to the group we created a few steps earlier will be restarted after the build, one by one, to ensure there are always machines available at any given time. It may take a few minutes depending on how many instances the group has, but it works perfectly. To see it in action, change something in your code, the title in app.component.ts for example, and push the new code to a Git repository being monitored by Cloud Build. Wait a few minutes and refresh the HTML pages served by each instance (external IPs may change after the restart).

Well, that's pretty much it for this topic. As demonstrated here, setting up a continuous delivery environment for GCE is usually more complex than for GAE, but it is a valid option if your project requirements don't fit GAE. And sure, this is one solution, but there are others.

The last picture shows the main GCP components mentioned in the article:

Sample code is available on GitHub: feel free to get it and play.

Hope it helps!