Similarities and differences between GitLab CI and Cloud Build

Genesis Alvarez
Published in Stash-consulting
12 min read · Aug 20, 2020

Glossary

  • SCM: Source Code Management.
  • CI: Continuous Integration.
  • CD: Continuous Delivery.
  • GCP: Google Cloud Platform.

Continuous Integration is the practice of integrating changes from different developers on a team into a shared repository; each integration can then be verified by an automated build and automated tests. There are many CI tools and services out there, such as Cloud Build, GitLab CI, CircleCI, Jenkins, and many others.

I’m going to write about the similarities and differences between GitLab CI and Cloud Build: what the pipeline looks like, how to connect to the repository, and more. To create this blog, I used the generate RSS feed repository to build a CI/CD workflow.

If you are interested in knowing more about the RSS feed project, please read Troubleshooting Terraform on a serverless world, where I explain how I created the serverless infrastructure with Terraform on GCP and walk through the decisions I made to fix problems during the implementation.

In this article I’m going to be leveraging the following technologies:

  • Cloud Build.
  • Compute Engine.
  • GitHub.
  • Gitlab & Gitlab CI.
  • Terraform & Terraform Cloud.

Requirements:

I’ll try to make this as clear as possible, but you’ll need at least basic knowledge of the following topics to follow this article efficiently:

  • Basic Linux.
  • Basic Docker.
  • Cloud Computing.

Before you begin:

  • Create a project in the Google Cloud Console and set up the billing.
  • Create repositories on GitHub and Gitlab.

Using GitHub for SCM-only and CloudBuild for CI/CD

Cloud Build is a service that executes your builds on Google Cloud Platform’s infrastructure. You can integrate your working repositories to start creating build triggers. Build triggers automatically build containers based on source code or tag changes in a repository. You cannot manually start a new build using the Google Cloud Console; however, you can retry a previous build.

You must first connect Cloud Build to your source repository before building the code in that repository.

Select the repository where you’ve stored your source code, click Continue, and authenticate to your source repository with your username and password.

From the list of available repositories, select the desired repository, then click the Connect repository button.

Click the Add trigger button to continue creating a build trigger to automate builds for the source code in the repository.

You need either a Dockerfile or a Cloud Build config file written in YAML format. In this blog, I’m going to use the Cloud Build config to build my containers. If you select the Cloud Build config file as your build config option, you can add variable values to substitute specific variables at build time.

Let me show you my cloudbuild.yaml file for this project. The following notes explain what each field in my file does:

  • The steps field in the config file specifies a set of steps that you want Cloud Build to perform.
  • The id field sets a unique identifier for a build step.
  • The name field of a build step specifies a cloud builder, which is a container image running common tools. You use a builder in a build step to execute your tasks. In my case, I’m using the stashconsulting/terraform-docker:entrypoint-latest version, a custom image that I created.

This image was born when we wanted to manage the infrastructure with Terraform and needed to build a Docker image for a project, but that was not possible because the Terraform image does not include Docker. If you’re interested in knowing more about this project, you can find the source code in the terraform-docker repository on GitHub. Also, you can find the custom images in the stashconsulting/terraform-docker repository on Docker Hub. Please tell others about this project. 📢 Hehe..!

  • The args field of a build step takes a list of arguments and passes them to the builder referenced by the name field.
  • The entrypoint in a build step specifies an entrypoint if you don't want to use the default entrypoint of the builder.
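Since the original file appeared as an image, here is a hedged sketch of what a cloudbuild.yaml using these fields can look like. The bucket name, step ids, and commands are illustrative, not the project’s actual values:

```yaml
steps:
  # Pull the current Terraform state from a GCS bucket (bucket name is illustrative).
  - id: 'pull-state'
    name: 'gcr.io/cloud-builders/gsutil'
    args: ['rsync', '-r', 'gs://my-state-bucket/state', '/workspace/state']

  # Run Terraform inside the custom builder image, overriding its default entrypoint.
  - id: 'terraform-apply'
    name: 'stashconsulting/terraform-docker:entrypoint-latest'
    entrypoint: 'sh'
    args: ['-c', 'terraform init && terraform apply -auto-approve']

  # Push the updated state back to the bucket.
  - id: 'push-state'
    name: 'gcr.io/cloud-builders/gsutil'
    args: ['rsync', '-r', '/workspace/state', 'gs://my-state-bucket/state']
```

Each step runs in its own container, but they share the /workspace directory, which is how the state survives between the pull, apply, and push steps.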

In one of the steps I use gsutil rsync to manage Terraform’s state. The gsutil rsync command makes the contents under the destination URL the same as the contents under the source URL, copying any missing files/objects or those whose data has changed.

We have created an automated CI/CD workflow that starts new builds in response to code changes in any branch.

Something to keep in mind: GitHub App builds started more than 3 days ago cannot be rebuilt.

To retry a previous build:

  • Select your project and click Open.
  • Open the Build History page in the Cloud Build section in the Google Cloud Console.
  • In the Build history page, click on a build that you wish to rebuild.
  • Click Rebuild.

Using GitHub for SCM-only and GitLab CI for CI/CD

GitLab CI/CD is part of GitLab where you execute your builds specifying a pipeline configured using a YAML format called .gitlab-ci.yml within the project.

First, I connected my GitHub repository to GitLab CI using a personal access token. To create a personal access token, you have to authenticate to your source repository. This token will be used to access your repository and push commit statuses to GitHub. The repo and admin:repo_hook scopes should be enabled to allow GitLab to access your project, update commit statuses, and create a web hook to notify GitLab of new commits.

You have a couple of options to start a new project. To authorize GitHub to grant GitLab access to your repositories, choose Run CI/CD for an external repository.

You have two options to connect repositories; I chose the GitHub option.

Paste the token into the Personal access token field and click List Repositories. Click Connect to select the repository.

GitLab maintains a synced copy of the GitHub repository.

To perform the build you need GitLab Runners. Shared Runners are available to every project in a GitLab instance, but in my case I’m going to set up a specific Runner. You can install GitLab Runner on any platform for which you can build Go binaries, including Linux, macOS, Windows, FreeBSD, and Docker. I’m going to create an instance based on Debian in Compute Engine.

To download the appropriate package for Debian or Ubuntu:

curl -LJO https://gitlab-runner-downloads.s3.amazonaws.com/latest/deb/gitlab-runner_amd64.deb

You can check other releases here.

You have to install Git, or you’ll get this error when installing the package:

dpkg: dependency problems prevent configuration of gitlab-runner: gitlab-runner depends on git; however: Package git is not installed.

First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:

sudo apt update
sudo apt install git

If you get the message E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution), run the following:

sudo apt --fix-broken install -y

Install the package for your system as follows:

sudo dpkg -i gitlab-runner_amd64.deb

Now we have to register a runner. Registering a Runner is the process that binds the Runner to a GitLab instance. Before registering a Runner, you need to obtain a token for a project-specific Runner:

  • Select your Project.
  • Go to Settings in the left navigation.
  • Click on CI/CD.
  • Expand the Runners section.
  • Read the Set up a specific Runner automatically section.

To register a Runner under GNU/Linux run the following:

sudo gitlab-runner register

You’ll be asked for the following:

  • The gitlab-ci coordinator URL (e.g. https://gitlab.com )
  • The gitlab-ci token obtained in the runner's section for this runner.
  • The gitlab-ci description for this runner.
  • The gitlab-ci tags for this runner.
  • The executor (e.g. ssh, docker+machine, docker-ssh+machine, kubernetes, docker, parallels, virtualbox, docker-ssh, shell; default: docker).
  • If you chose Docker as your executor, you’ll be asked for the default image to be used for projects that do not define one in .gitlab-ci.yml.

For my project, I have a Docker builder module in Terraform that tags the Kong image and pushes it to the Container Registry. I chose Docker as the executor, so you need to install Docker.

So, I have to mount a Docker volume when registering the runner, or we’ll get this error when building the Kong image:

exit status 1. Output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Also, I chose to authenticate with my GCP credentials when registering the runner instead of specifying them in the pipeline.

I ran the following command on my instance.

sudo gitlab-runner register -n \
--url https://gitlab.com/ \
--registration-token my_token \
--executor docker \
--description "runner with gcp credentials" \
--docker-image "docker:19.03.12" \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock \
--env "GOOGLE_CREDENTIALS=$service_account"
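After registering, the runner’s settings end up in /etc/gitlab-runner/config.toml. As a rough sketch (values are illustrative, and the generated file contains more fields than shown), the relevant section looks something like this:

```toml
[[runners]]
  name = "runner with gcp credentials"
  url = "https://gitlab.com/"
  executor = "docker"
  # Environment variables injected into every job this runner executes.
  environment = ["GOOGLE_CREDENTIALS=..."]
  [runners.docker]
    image = "docker:19.03.12"
    # Mounting the host's Docker socket lets jobs talk to the Docker daemon.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```

This is also where you can adjust the runner later without re-registering, by editing the file and restarting the service.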

You can add the environment variables that the pipeline uses under Settings > CI/CD > expand the Variables section.

If you choose to authenticate with the GCP credentials by specifying them in .gitlab-ci.yml, put the following in the file:

script:
  - echo -n $service_account > credentials.json
  - gcloud auth activate-service-account --key-file credentials.json

Now go to Settings > CI/CD > expand the Runners section to see your configured runner under Specific Runners. At first, the runner will show a strange status for a couple of minutes until it is “available”; the reason is that the runner is not connected yet. This only happened when I added the environment variable with the Google credentials, and I don’t know the details in depth. Look at the example image below, taken from the issue page.

If you want, you can edit the runner to add tags or other settings. I just edited it.

“Voilà”

In the Compute Engine instance, run the following to start the multi-runner service. If you don’t, the job will stay in a pending state, waiting to be picked up by a runner:

sudo gitlab-runner run

Then in GitHub, add a .gitlab-ci.yml to configure GitLab CI/CD.

Now let me show you my .gitlab-ci.yml file for this project, which was adapted to do the same as cloudbuild.yaml. The following notes explain what each field in my file does:

  • Jobs define what to do.
  • The stage field defines which stage a job belongs to.
  • The deploy field is a stage.
  • The image field is the name of the Docker image the Docker executor runs to perform the CI tasks.
  • The name is the full name of the image that should be used.

With the stashconsulting/terraform-docker:gcloud-lastest version of the image, you can run Bash (Ubuntu-based), Terraform, and gcloud commands.

  • The entrypoint field is a command or script that should be executed as the container’s entrypoint.
  • The script field is the shell script that is executed by Runner.
  • The tags field is a list of tags which are used to select Runner.
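Since the file itself appeared as an image, here is a hedged sketch of a .gitlab-ci.yml shaped like the one described. The job name, script contents, and tag are illustrative:

```yaml
stages:
  - deploy

deploy:
  stage: deploy
  image:
    name: stashconsulting/terraform-docker:gcloud-lastest
    # Override the image's default entrypoint so the runner can run the script.
    entrypoint: [""]
  script:
    - echo -n $service_account > credentials.json
    - gcloud auth activate-service-account --key-file credentials.json
    - terraform init
    - terraform apply -auto-approve
  tags:
    # Must match a tag configured on the specific runner registered above.
    - my-runner-tag
```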

Jobs are executed by Runners. Multiple jobs in the same stage are executed in parallel if there are enough concurrent runners. If all jobs in a stage succeed, the pipeline moves on to the next stage. If any job in a stage fails, the next stage is not (usually) executed and the pipeline ends early. In my case, we have only one stage.
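As a hypothetical illustration of that staging behavior (not from this project), two jobs placed in the same stage run in parallel, and a later stage only starts once every job in the previous one succeeds:

```yaml
stages:
  - test
  - deploy

lint:            # lint and unit-tests share a stage, so they run in parallel
  stage: test
  script:
    - echo "linting"

unit-tests:
  stage: test
  script:
    - echo "testing"

deploy:          # runs only after every job in the test stage succeeds
  stage: deploy
  script:
    - echo "deploying"
```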

To execute a pipeline manually:

  • Navigate to your project’s CI/CD > Pipelines.
  • Click on the Run Pipeline button.
  • On the Run Pipeline page:
      • Select the branch to run the pipeline on.
      • Enter any environment variables required for the pipeline run.
      • Click the Create pipeline button.
  • The pipeline now executes the jobs as configured.

Using GitLab for SCM & CI/CD

You can also use GitLab alone to store the code and create the CI/CD workflow. Select Projects (in the top navigation bar) > Your projects > select the project you’ve already created.

If you want to configure the pipeline, select CI/CD in the left navigation to start setting up CI/CD in your project.

The steps to create the GitLab Runner, register a runner, and handle the Terraform state described in Using GitHub for SCM-only and GitLab CI for CI/CD, as well as the pipeline itself, will be the same; the only difference is that you won’t need to run CI/CD for an external repository.

Other ways to handle the Terraform state

Now we have to handle the Terraform state. There are two options: the first one is to add a backend.tf file to define the remote backend in your project.

terraform {
  backend "gcs" {
    bucket = "my-bucket"
  }
}

And resolve the state-locking error if it appears:

Error: Error locking state: Error acquiring the state lock: writing "gs://my-bucket/default.tflock" failed: googleapi: Error 412: Precondition Failed, conditionNotMet
Lock Info:
ID: 1595382417218702
Path: gs://tf-state-backup-terraform/default.tflock
Operation: OperationTypeApply
Who: root@runner-9r6gkzht-project-19731653-concurrent-0
Version: 0.12.18
Created: 2020-07-22 01:46:57.177190723 +0000 UTC
Info:
Terraform acquires a state lock to protect the state from being written by multiple users at the same time. Please resolve the issue above and try again. For most commands, you can disable locking with the "-lock=false" flag, but this is not recommended.

Or using the HTTP backend

terraform {
  backend "http" {
    address = "http://myrest.api.com/foo"
    lock_address = "http://myrest.api.com/foo"
    unlock_address = "http://myrest.api.com/foo"
  }
}

The second one is using Terraform Cloud, an application that lets me manage Terraform runs (plans and applies) in a consistent and reliable environment. You have to create an account; Terraform Cloud will prompt you to create a new organization after you sign in for the first time.

Also, create a new workspace by choosing “Workspaces” from the main menu and then the “New Workspace” button. Then I chose GitHub and authenticated.

We make these configurations so that the state is handled automatically by Terraform Cloud, while my CI/CD takes care of the applies.

We run the terraform login command to automatically obtain and save an API token for Terraform Cloud; we have to adapt this part in the pipeline. You need Terraform v0.13.0 or higher for the command to work correctly.

We create a credentials.tfrc.json file inside the .terraform.d folder so that Terraform stores the token there.

{
  "credentials": {
    "app.terraform.io": {
      "token": "TOKEN_TO_BE_REPLACED"
    }
  }
}

We modify the pipeline to substitute the token. sed is a stream editor, meaning you can search and replace strings in files and use regex if needed. I’m using variables as the search and replace values, sed -i "s/TOKEN_TO_BE_REPLACED/${token}/g", pointed at the location of the file. Then we create the folder and move the credentials file.
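Put together, those pipeline steps look roughly like the following sketch. The token value and file location are illustrative; in the real pipeline the token comes from a CI/CD variable:

```shell
# Hypothetical stand-in for the CI/CD variable holding the real token.
token="example-token"

# Write the credentials template (in the project it is committed to the repo).
cat > credentials.tfrc.json <<'EOF'
{
  "credentials": {
    "app.terraform.io": {
      "token": "TOKEN_TO_BE_REPLACED"
    }
  }
}
EOF

# Replace the placeholder with the real token, then move the file to where
# Terraform looks for it.
sed -i "s/TOKEN_TO_BE_REPLACED/${token}/g" credentials.tfrc.json
mkdir -p "$HOME/.terraform.d"
mv credentials.tfrc.json "$HOME/.terraform.d/credentials.tfrc.json"
```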

Now we add this piece of code at the top of the main.tf file, and we’re finished!

terraform {
  backend "remote" {
    organization = "my-organization"
    workspaces {
      name = "my-workspace"
    }
  }
}

Jobs succeeded!

We can see the status file in Terraform Cloud.
