Continuous Integration: GCP Cloud Build With Terraform

Raz Akbari · Published in Geek Culture · Jan 23, 2022

The goal is to generate a releasable artifact from source code in a fast, reliable, and automated manner using GCP’s native CI service.

Requirements

Repository

I will use a repository stored in my GitHub account; it contains the source code for the application to be deployed, the Cloud Build configuration, and the Terraform files. You can find the repository here.

GCP Configurations

Before we begin with Terraform, there are a few configurations to be made manually in GCP.

Enable APIs

You need to enable a couple of GCP APIs specific to this tutorial. To do so, go to APIs & Services from your console dashboard and click the ENABLE APIS AND SERVICES button. Here you can search for the specific APIs and enable them:

  • Billing API
  • Compute Engine API
  • Cloud Build API
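If you prefer the command line, the same APIs can be enabled with the gcloud CLI. A minimal sketch, assuming the gcloud SDK is installed and that the service names below correspond to the three APIs above:

# Enable the Billing, Compute Engine and Cloud Build APIs
gcloud services enable \
  cloudbilling.googleapis.com \
  compute.googleapis.com \
  cloudbuild.googleapis.com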

Service Account Permissions

A GCP service account grants Terraform the permissions it needs to manipulate resources. Create a service account to be used by Terraform; for the sake of this tutorial it needs a specific set of permissions.

Let’s create a GCP IAM role with an arbitrary name like terraformCICD and add all the necessary permissions. Eventually we assign this role to the generated service account. Here is the list of permissions to add:

  • storage.objects.list
  • storage.objects.get
  • storage.objects.create
  • storage.objects.delete
  • storage.buckets.create
  • roles/cloudbuild.builds.editor
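For reference, here is a hedged sketch of the same setup with the gcloud CLI. The service account name terraform-ci and the project id are placeholders; note that roles/cloudbuild.builds.editor in the list above is a predefined role, so it is bound separately rather than included in the custom role:

# Create a custom role with the storage permissions
gcloud iam roles create terraformCICD --project=<project-id> \
  --title="terraformCICD" \
  --permissions=storage.objects.list,storage.objects.get,storage.objects.create,storage.objects.delete,storage.buckets.create

# Create the service account Terraform will use
gcloud iam service-accounts create terraform-ci --display-name="Terraform CI"

# Bind the custom role and the predefined Cloud Build role to it
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:terraform-ci@<project-id>.iam.gserviceaccount.com" \
  --role="projects/<project-id>/roles/terraformCICD"
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:terraform-ci@<project-id>.iam.gserviceaccount.com" \
  --role="roles/cloudbuild.builds.editor"

Don’t forget to generate and download a JSON key for this service account; we export its path later on.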

Terraform Bucket

A GCP Cloud Storage bucket where you can store your Terraform state file. The state file holds information on the resources Terraform has generated.

Note that manual changes to resources that Terraform manages create a discrepancy between the Terraform state file and the actual infrastructure.
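If you haven’t created the bucket yet, here is a minimal sketch with the gsutil CLI (the bucket name and location are placeholders; enabling versioning is optional but a common safety net for state files):

# Create the bucket that will hold the Terraform state
gsutil mb -l EU gs://<terraform-state-bucket>

# Optionally keep older versions of the state file
gsutil versioning set on gs://<terraform-state-bucket>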

Connect Repository

If your code is on GitHub and you don’t want to use a webhook trigger, you need to manually connect GCP Cloud Build to your repository. If your source code is stored in Google Cloud Source or Cloud Storage, no configuration is needed here.

In the case of Bitbucket Cloud or GitLab, there is the option of mirroring your repository to Google Cloud Source if you are not interested in webhook triggers. Documentation is here.

My repository is stored on GitHub, and I want to trigger on push events to the master branch.

To connect your repository, go to the GCP console and follow these steps:

  • Go to Cloud Build and then Triggers. Click on Manage repositories; on the new page, click Connect repository. You should see this:

Choosing the first option, Cloud Build will be installed on your GitHub account; you can limit the repositories it can pull from and change the configuration at any time.

After the connection, under Repository you will see

<github owner>/<repository name>

We will use this information while working with Terraform.

Terraform Configuration

Skip if you already have Terraform configured.

Terraform relies on plugins called providers to interact with platforms like GCP. Providers are developed by HashiCorp and by the wider community, and are publicly available in the Terraform Registry.

Providers are a logical abstraction of an upstream API. They are responsible for understanding API interactions and exposing resources.

When configuring the Terraform backend, we define two blocks: one for Terraform itself and one for the provider, in our case Google.

Backend Configuration

Create a Terraform file with an arbitrary name like backend-config.tf.

terraform {
  backend "gcs" {
    bucket = "<bucket-name>"
    prefix = "state"
  }
  required_version = ">= 0.12.7"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.82.0"
    }
  }
}

provider "google" {
  project = "<gcp_project_id>"
  region  = "<region_name>"
  zone    = "<zone_name>"
}

In the terraform block we tell Terraform to store its state file in the bucket we have already created in Google Cloud Storage (gcs), inside a folder called state. We are also telling Terraform not to proceed if its version is less than 0.12.7, and, last but not least, that it needs the hashicorp/google provider with version 3.82.0. The Terraform CLI automatically downloads the provider when it is invoked. It’s good practice to pin the provider version.

The second block configures the provider itself with the project, region, and zone it should operate in.

Grant Permission

We give Terraform access to our GCP platform by exporting an environment variable holding the path to our GCP service account JSON key.

export GOOGLE_APPLICATION_CREDENTIALS={{GCP_sa_json_key_path}}

Terraform Apply

Terraform automatically loads files with the .tf extension when applying. There are four commands to run when applying your infrastructure to the cloud platform. At the end of this tutorial, launch these commands and you are good to go.

  • terraform init initialises the Terraform directory, downloads the provider, and sets up the Terraform state file in the GCP bucket.
  • terraform validate validates the syntax.
  • terraform plan, with or without variables, is a client-side dry run; note that it does not reveal missing permissions for our defined service account.
  • terraform apply, with or without variables, applies the actual infrastructure.

Terraform automatically holds a lock on its state file while applying, to ensure no one else makes changes at the same time.
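Putting it all together, a typical sequence looks like this (the values.tfvars file is explained in the next section):

terraform init
terraform validate
terraform plan -var-file="./values.tfvars"
terraform apply -var-file="./values.tfvars"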

Terraform Variables

Defining a variable helps you avoid the copy-and-paste anti-pattern; it gives you a single source of truth. To define a Terraform variable, create a Terraform file with an arbitrary name like variables.tf and paste the following:

variable "project_id" {
type = string
description = "GCP project id"
}

We pass a single value, or a group of values stored in a file, through the command line. To pass them through a file, create one with the .tfvars extension, like values.tfvars, and put your values in key = value format, such as:

project_id = "<your project id>"

When launching the terraform plan or terraform apply commands, you can pass these values:

# Through a file
terraform apply -var-file="./values.tfvars"

# Or a single value
terraform apply -var="project_id=myprojectid"

Continuous Integration With Cloud Build

GCP has a native CI solution called Cloud Build. Through Cloud Build we create a pipeline of steps to pull the source code, run tests, and eventually build and push images to a registry, giving us continuous integration.

At the time of writing this tutorial, opening the Cloud Build page in GCP, we see four options in the navigation menu:

  • Dashboard, high-level information about your builds
  • History, a detailed list of completed and currently running builds
  • Triggers, configurations that invoke a build
  • Settings, where you configure service accounts and worker pools

When it comes to writing infrastructure as code, there is a basic rule: anything you can configure manually on the platform can also be written as code. In Cloud Build, triggers and settings are configurable, hence they have corresponding resources in the Terraform provider, so let’s create them.

Triggers

As the name suggests, we invoke CI builds using triggers. Opening Triggers in GCP Cloud Build, there are four sections:

  • Event, the event that triggers the CI configurations
  • Source, source code configurations
  • Configuration, specific cloud build configurations
  • Advanced

Let’s have our first simple Terraform snippet for a Cloud Build trigger containing all the configs mentioned above. Create a main.tf file in your repository and paste the following; we discuss the placeholders in the snippet afterwards. You can find the Terraform documentation for this resource here.

resource "google_cloudbuild_trigger" "react-trigger" {  //Source section
github {
owner = "<github owner of repository added>" name = "<repository name of repository added>" //Events section
push {
branch = "<main branch name>"
//or
//tag = "production"
} }
ignored_files = [".gitignore"]
//Configuration section
// build config file
filename = "<path to cloudbuild.yaml file>"
// build config inline yaml
#build {
# step {
# name = "node"
# entrypoint = "npm"
# args = ["install"]
# }
# step{...}
# ...
# }
//Advanced section
substitutions = {
<key1>= "<value1>" <key2> = "<value2>" }}

Source & Events

When it comes to Cloud Build triggers in Terraform, you need one of the following blocks:

  • github, uses already integrated repository
  • trigger_template, uses a Google Cloud Source repository
  • pubsub_config, uses an already integrated or Google Cloud Source repository
  • webhook_config, uses a secret so you can trigger the CI with an HTTP POST

We use the github block. Under the event section we can select push or pull request, either on a specific branch or with a tag. This event will trigger the build.

ignored_files and included_files

These give you the possibility to blacklist or whitelist files when it comes to triggering a build. Both properties take a list of file name strings. Adding files to the ignored_files list prevents a build from being triggered by changes to those files, hence blacklisting them. Adding files to included_files triggers builds only if a commit touches those files, hence whitelisting them.
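As a hedged illustration (the file paths are hypothetical), both properties sit directly on the trigger resource and accept glob patterns:

resource "google_cloudbuild_trigger" "react-trigger" {
  # ...
  ignored_files  = [".gitignore", "README.md"]
  included_files = ["src/**", "package.json"]
}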

Configuration

Here we pass the actual steps of a build. These steps can be defined with a Dockerfile, with or without a build config file called cloudbuild; alternatively you can use a native cloud solution called Buildpacks, without any Dockerfile or cloudbuild file.

We can also have build config steps inline inside the Cloud Build Trigger Editor.

In the example above I am using a combination of cloudbuild.yaml and my Dockerfile. The commented-out build block is discussed later.

We’ll check out the contents of these two files, but first, a few words on the application to be deployed. It’s a React application with a Node.js Express server in the backend. Our build steps include:

  • Check out the code from the Git repository
  • Run npm install to install the libraries defined in package.json
  • Run npm test for unit tests in the application
  • Run npm run build to create the React build folder containing the production ready code
  • Create the docker image from the Dockerfile
  • Push the docker image to GCP Container Registry
  • Store the build log file in GCP Cloud Storage

Cloudbuild.yaml

If you check out the documentation for this build config file here, you can see the schema looks like this:

steps:
- name: string (name of a publicly available image to work with)
  entrypoint: string
  args: [string, string, ...]
  env: [string, string, ...]
  dir: string
  id: string
  waitFor: [string, string, ...]
  secretEnv: string
  volumes: object(Volume)
  timeout: string (Duration format)
- name: string
  ...

It’s a combination of build steps, each step specifying an action you want to perform, with options. For each step, Cloud Build creates a Docker container. Cloud Build comes with publicly available images to work with; if you want to use one of these, like node, you add it after the name keyword.

We use entrypoint to specify the tool we want to work with; the node image comes with npm and yarn preinstalled.

Eventually we use args to invoke our desired command.

Here is our file; it’s simple and self-explanatory.

steps:
- name: node
  entrypoint: npm
  args: ["install"]
- name: node
  entrypoint: npm
  args: ["test"]
- name: node
  entrypoint: npm
  args: ["run", "build"]
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "eu.gcr.io/$PROJECT_ID/quickstart-image:$COMMIT_SHA", "."]
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "eu.gcr.io/$PROJECT_ID/quickstart-image:$COMMIT_SHA"]
logsBucket: "gs://<bucket name>"

There are three points to consider:

  • $PROJECT_ID and $COMMIT_SHA are automatically substituted with the correct values during the build. They are called default substitutions; you can find the list of available variables here. You can also define your own custom substitution variables; one is already in our simple Terraform file, and we talk about it later on.
  • logsBucket lets you log the build events, the files are automatically stored in the bucket after each run.
  • Container Registry, docker images will be stored in GCP Container Registry, in our main.tf file we create it like this:
resource "google_container_registry" "registry" {  project  = var.project_id  location = "EU"}

Here project_id is the Terraform variable we defined earlier.

Inline Build Yaml

Instead of having a cloudbuild.yaml file, the Terraform Cloud Build trigger lets you define your build config steps as inline YAML. For example:

build {
  step {
    name       = "node"
    entrypoint = "npm"
    args       = ["install"]
  }
}

Dockerfile

Having a cloudbuild file, our Dockerfile is fairly simple.

FROM node
COPY build build
COPY server server
CMD [ "node", "server/server.js" ]

Advanced

In the advanced section we can add substitution variables, check the approval checkbox and add a service account.

Substitution Variables: We can define our own custom substitution variables and use them in the cloudbuild.yaml file the same way we used default substitution variables like the project id.

substitutions = {
  <key1> = "<value1>"
  <key2> = "<value2>"
}
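A small hedged example, using a hypothetical _NODE_ENV variable (custom substitution names must start with an underscore):

substitutions = {
  _NODE_ENV = "production"
}

In cloudbuild.yaml it is then consumed like a default substitution:

steps:
- name: node
  entrypoint: npm
  args: ["run", "build"]
  env: ["NODE_ENV=$_NODE_ENV"]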

Service account: By default the Cloud Build service account is used; you can add your own user-managed service account if you need to expose your manual build trigger through it. You can find a comprehensive example in the Terraform documentation here.

Settings

In the Cloud Build Settings section, you can create a worker pool. A worker pool lets you define custom configurations and a custom network: you can set the machine type, the disk size, and the VPC. The default network contains the configs preset by Compute Engine.

At the time of writing this tutorial, Terraform’s google_cloudbuild_worker_pool is not a public resource, hence not possible to use, but there is another way to configure the machine type and disk size: through the options key of the build config.

Add options either through the cloudbuild.yaml file or inside the build block of Terraform:

build {
  step { ... }
  options {
    disk_size_gb = <disk size>
    machine_type = "<machine type>"
  }
}

Costs

At the time of writing this tutorial, builds on the default machine type include a free quota of build minutes per day.

Result

After terraform apply, your Cloud Build trigger will be listening for changes in your repository. Try committing a change, then go to the History section in Cloud Build: you will see that a new build has been triggered. You can follow the steps and check out the logs; eventually, in GCP Container Registry, you’ll see your new image pushed.
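You can also inspect builds from the command line; a quick sketch with the gcloud CLI (the build id comes from the list output):

# List the most recent builds
gcloud builds list --limit=5

# Show the log of a specific build
gcloud builds log <build-id>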
