CI/CD: Google Cloud Build — Pass artifacts between steps

Ryan Jones

In this article we are going to cover passing artifacts between build steps using Google Cloud Build. This will be a short, to-the-point article; it won't cover the full range of Google Cloud Build capabilities or the Google Cloud Platform.

If you’re interested in CI/CD pipelines, then please check out our other articles, which cover a variety of related topics.

Alright, let’s jump in!

Full file:

Below is the full cloudbuild.yaml file detailing what we will cover in this article. The CI/CD pipeline consists of only two build steps, to keep things simple and stay focused on passing artifacts between build steps. Enjoy.

steps:
  # Step 1:
  - name: 'gcr.io/cloud-builders/mvn'
    entrypoint: bash
    args: ['./scripts/build.bash']
    volumes:
      - name: 'jar'
        path: /jar
  # Step 2:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: bash
    args: ['./scripts/deploy.bash', '$_APP_NAME', '$_ENV_NAME', '$_REGION']
    volumes:
      - name: 'jar'
        path: /jar

Volumes:

Google Cloud Build works by allowing the developer to write a series of steps which define all of the operations needed to achieve CI or CI/CD. In our case, one of the requirements of our application is that we build jar files and then deploy them.

Logically, these two operations are different enough that they should be separate steps. The benefit of separating build and deploy is better organization and a much easier path to debugging when things go awry.

What does this look like?

Below you’re seeing the inner workings of how we handle CI/CD for one of our clients: a Java Spring Boot API that we are helping automate and deploy to the cloud.

Notice that the volumes key is an array. This means we can attach multiple volumes to a build step, and anything written to them can be utilized by the next sequential step! Pretty nifty.

steps:
  # Step 1:
  - name: 'gcr.io/cloud-builders/mvn'
    entrypoint: bash
    args: ['./scripts/build.bash']
    volumes:
      - name: 'jar'
        path: /jar
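Since volumes is an array, a step could mount more than one. As a sketch, here is what that shape might look like; the second volume's name and path are hypothetical, just to show the structure:

```yaml
- name: 'gcr.io/cloud-builders/mvn'
  entrypoint: bash
  args: ['./scripts/build.bash']
  volumes:
    - name: 'jar'
      path: /jar
    - name: 'reports'    # hypothetical second volume
      path: /reports
```

Each named volume is visible, at the same path, to every later step that declares it.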

In Step #1, we are using a Google Cloud Builder called mvn (Maven) in order to utilize a preconfigured Docker container, built by Google, which has all the dependencies required to run Maven commands. We use Maven to build our jar files and automatically push them up to the cloud once the CI/CD pipeline finishes completely.

We are also using a feature called entrypoint, which allows us to leverage the Maven container from Google while also giving us the ability to run a bash script directly.

entrypoint: bash
args: ['./scripts/build.bash']

The bash script will then do a clean install and create our jar file under the target/ folder.

# Install/Create the .jar
mvn clean install -DskipTests --quiet

# Confirm the .jar was created
ls target/    # prints: MyApp-1.0.jar

This is perfect; exactly what we want. However, we are missing a critical piece: we don’t have a way to pass that jar file to Step #2, the deploy step. Therefore, we need to attach a volume to Step #1.

volumes:
  - name: 'jar'
    path: /jar

Now we have a volume attached called jar. Not the most creative name, but it serves its purpose (passing a jar file between steps). With the volume now set up, we can add a line to our build.bash file which will copy the jar file onto the volume.

# Copy the .jar to our volume
cp target/MyApp-1.0.jar /jar/
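Putting the pieces together, build.bash might look something like this minimal sketch. It assumes Maven is on PATH (the mvn builder provides it), that cloudbuild.yaml mounts the 'jar' volume at /jar, and the jar name is taken from the article's example:

```shell
#!/usr/bin/env bash
set -euo pipefail

# build.bash (sketch) -- build the jar, then hand it to the next step.

build_jar() {
  # Install/Create the .jar under target/
  mvn clean install -DskipTests --quiet
}

publish_jar() {
  # Copy the .jar onto the shared volume so Step 2 can read it
  local volume="${1:-/jar}"
  cp target/MyApp-1.0.jar "${volume}/"
}

# In Cloud Build this would simply run:
#   build_jar && publish_jar /jar
```

Keeping the build and the copy in separate functions makes each piece easy to rerun by hand when debugging a failed build.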

Perfect, let’s now take a look at Step #2.

steps:
  # Step 2:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: bash
    args: ['./scripts/deploy.bash', '$_APP_NAME', '$_ENV_NAME', '$_REGION']
    volumes:
      - name: 'jar'
        path: /jar

Notice that the name of this step is different from Step #1. Here we are leveraging a preconfigured Docker container from Google called gcloud. It automatically syncs with our Google Cloud project and authenticates, without us needing to pass any credentials, so we can immediately execute commands against our GCP resources without any setup. For instance, we can decrypt files using Google KMS, something we commonly do when building CI/CD pipelines. One less thing to worry about.
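As a sketch, a KMS decryption from inside this step might look like the following; the key, keyring, and file names are all hypothetical:

```
# Hypothetical example: decrypt an encrypted secrets file with Google KMS
gcloud kms decrypt \
  --ciphertext-file=secrets.env.enc \
  --plaintext-file=secrets.env \
  --location=global \
  --keyring=my-keyring \
  --key=my-key
```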

We also don’t need to worry about installing all the dependencies that gcloud needs to run, which is another reason to try Google Cloud Build: we’ve now leveraged Maven and the gcloud CLI without any complex setup.

Once again, we are using the entrypoint feature to run our bash script, deploy.bash. This script is passed a few arguments.

args: ['./scripts/deploy.bash', '$_APP_NAME', '$_ENV_NAME', '$_REGION']

Here is another way to write this, which you may or may not prefer:

args:
  - './scripts/deploy.bash'
  - '$_APP_NAME'
  - '$_ENV_NAME'
  - '$_REGION'

The first argument is the path to our bash script. The 2nd, 3rd, and 4th arguments are substitution variables (user-defined substitutions in Cloud Build start with an underscore). Variables like these are commonly used to make practically any software more dynamic, and they’re especially important when trying to build reusable automation instead of one-off scripts.
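Inside deploy.bash, those values arrive as ordinary positional arguments. Here is a minimal sketch of how the script might read them; the helper function name is our own invention:

```shell
#!/usr/bin/env bash
set -euo pipefail

# deploy.bash (sketch) -- the trigger's substitutions arrive as $1, $2, $3.
describe_deploy() {
  local app_name="$1" env_name="$2" region="$3"
  echo "Deploying ${app_name} (${env_name}) to ${region}"
}

# In Cloud Build: describe_deploy "$1" "$2" "$3"
```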

We configure these substitution variables when we create a Google Cloud Build trigger. If you’re unfamiliar with Google Cloud Build triggers, check out our other article.
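As a sketch, you can also declare default values for these substitutions in cloudbuild.yaml itself; values configured on the trigger override the defaults (the values below are placeholders):

```yaml
substitutions:
  _APP_NAME: 'my-app'    # placeholder defaults; the trigger's
  _ENV_NAME: 'dev'       # substitution values override these
  _REGION: 'us-west1'
```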

Finally, we attach the volume the same way we did in Step #1, by attaching a volume called jar.

volumes:
  - name: 'jar'
    path: /jar

Now we can add a line to our deploy.bash file which copies the jar file into our target directory. In our case, the target directory is the location the rest of our deployment code points to; it expects the jar file to exist at ./target/MyApp-1.0.jar. We chose to use a local directory rather than referencing the external volume, /jar, because developers working locally will not have this volume to pull from.

cp /jar/MyApp-1.0.jar target/
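Putting it together, the retrieval side of deploy.bash might look like this minimal sketch. It assumes the 'jar' volume is mounted at /jar, the jar name is the article's example, and everything after the copy is your own deployment logic:

```shell
#!/usr/bin/env bash
set -euo pipefail

# deploy.bash (sketch) -- pull the jar off the shared volume.

fetch_jar() {
  # Copy the .jar from the shared volume into the local target/
  # directory, where the rest of the deployment code expects it.
  local volume="${1:-/jar}"
  mkdir -p target
  cp "${volume}/MyApp-1.0.jar" target/
}

# In Cloud Build this would run fetch_jar /jar, followed by the
# actual deployment commands using the script's arguments.
```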

There we go! We just learned how to pass artifacts between Google Cloud Build steps in our CI/CD pipelines, entirely leveraging Google Cloud Builders, which give us preconfigured Docker containers to execute operations on top of without the headaches involved in many of the other CI/CD solutions currently out there. And if we ever need to build our own Docker containers to run our builds, Google Cloud Build supports that functionality as well. Topic for another article.

Thanks for reading 🎉 🎉

Shameless Plug:

At Serverless Guru, we have a lot of experience building CI/CD pipelines. We work with companies and their internal teams to help create reusable patterns and build-automation practices which can be applied across the organization. We are engineers first, which is important.

Let’s keep the party going…

If you like this article and want to see more content like this, please give us a follow here on Medium or share on Twitter @serverlessgurux! Finally, if you want to learn about Serverless, then you should definitely check out our courses at training.serverlessguru.com.

What would you like to see next?

We will be releasing new articles about various technologies like Serverless, Docker, Kubernetes, GCP, AWS, Go, Python, and NodeJS to name a few.

If you have any topics you want to see, comment below so we can improve and bring more value.

https://serverlessguru.com

Ryan Jones

Founder — Serverless Guru

LinkedIn — @ryanjonesirl

Twitter — @ryanjonesirl

Thanks for reading 😃

If you would like to learn more about Serverless Guru, please follow us on Medium, Twitter, Instagram, Facebook, or LinkedIn!