Implement DevOps — Google Cloud Challenge Lab Walkthrough

Dazbo (Darren Lester)
Google Cloud - Community
Feb 12, 2024 · 10 min read

Google provides an online learning platform called Google Cloud Skills Boost, formerly known as QwikLabs. On this platform, you can follow training courses aligned to learning paths, particular products, or particular solutions.

One type of learning experience on this platform is called a quest. This is where you work through a number of guided hands-on labs, and then finally complete a Challenge Lab. The challenge lab differs from the other labs in that goals are specified, but very little guidance is given on how to achieve them.

I occasionally create walkthroughs of these challenge labs. The goal is not to help you cheat your way through the challenge labs! But rather:

  • To show you what I believe to be an ideal route through the lab.
  • To help you with particular gotchas or blockers that are preventing you from completing the lab on your own.

If you’re looking for help with this challenge lab, then you’ve come to the right place. But I strongly urge you to work your way through the quest first, and to try the lab on your own, before reading further!

With all these labs, there are always many ways to go about solving the problem. I generally like to solve them using the Cloud Shell, since I can then document a more repeatable and programmatic approach. But of course, you can use the Cloud Console too.

The “Implement DevOps” Challenge Lab

This lab predominantly tests your knowledge of:

  • Google Source Repos — i.e. managed private Git repos
  • Cloud Build, including triggers — Google’s continuous integration platform
  • Git
  • Google Kubernetes Engine (GKE)

The walkthrough below is mainly based on using Cloud Shell.

Scenario

We are being asked to build a CI/CD pipeline for the fictitious Cymbal Superstore. Specifically, we’re given these tasks:

  • Create a GKE cluster based on a set of configurations provided.
  • Create a Google Source Repository to host your Go application code.
  • Create Cloud Build Triggers that deploy a production and development application.
  • Push updates to the app and create new builds.
  • Roll back the production application to a previous version.

Tasks

Task 1 — Create the lab resources

Start by launching Cloud Shell from the console. Then, follow these steps:

# Authenticate
gcloud auth list

# Some useful initial setup
export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')
export REGION=<enter your supplied region>
export ZONE=<enter your supplied zone>
gcloud config set compute/region $REGION

# enable APIs
gcloud services enable container.googleapis.com \
cloudbuild.googleapis.com \
sourcerepo.googleapis.com

# Add Kubernetes Developer role to the Cloud Build service account
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
--role="roles/container.developer"

# Initial git config
git config --global user.email <your-lab-email>
git config --global user.name <name>
Setting up your git email and username
# Create a Google Artifact Registry called my-repository in the specified region
gcloud artifacts repositories create my-repository \
--repository-format=docker \
--location=$REGION

# Create a GKE Standard cluster called hello-cluster
# It must be zonal, with 3 nodes
# Min nodes=2, max nodes=6, and cluster autoscaler enabled
# Release channel=Regular, and with cluster version specified
gcloud container clusters create hello-cluster \
--zone $ZONE \
--num-nodes=3 --min-nodes=2 --max-nodes=6 --enable-autoscaling \
--release-channel=regular

# We can verify it
gcloud container clusters list

We’re asked to create a GKE Standard cluster. Of course, the default for GKE is now to create an Autopilot cluster. However, if we create a zonal cluster, it will be deployed as GKE Standard. Also, we don’t have to specify the cluster version, since it will automatically be deployed with a sufficiently recent version.

The GKE cluster creation will take several minutes to complete.

GKE cluster has been created

Now we need to obtain the credentials for our cluster and create two namespaces.

Namespaces are intended for use in environments with many users spread across multiple teams or projects. They allow us to create resources with the same name in different namespaces. For example, we can create separate namespaces for different environments. And here, that’s exactly what we’re doing: we’re creating a namespace for dev, and another for prod.

gcloud container clusters get-credentials hello-cluster --zone $ZONE
kubectl create namespace prod
kubectl create namespace dev
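For reference, the same namespaces can be created declaratively. This manifest is my own sketch, not one of the lab’s supplied files:

```yaml
# Declarative equivalent of the two kubectl create namespace commands above
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Saved as, say, namespaces.yaml, it would be applied with kubectl apply -f namespaces.yaml.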

This task is now complete!

Task 2 — Create a repository in Cloud Source Repositories

Here we’re creating a git repo in Cloud Source Repos. We create the remote repo, clone it to our Cloud Shell instance, and then we update the local repo.

gcloud source repos create sample-app

# clone the empty repo, and then copy in sample code
cd ~
gcloud source repos clone sample-app
gsutil cp -r gs://spls/gsp330/sample-app/* sample-app
Cloning the repo and copying in sample code from a bucket
# update your region and zone placeholders
for file in sample-app/cloudbuild-dev.yaml sample-app/cloudbuild.yaml; do
sed -i "s/<your-region>/${REGION}/g" "$file"
sed -i "s/<your-zone>/${ZONE}/g" "$file"
done

# Perform an initial commit and push to master
cd sample-app
git add .
git commit -m "Initial commit"
git push origin master

# Create dev branch and commit to dev
git checkout -b dev
echo "Some dev work" > README.md
git add .
git commit -m "Dev work"
git push origin dev
Commit and push to master
Commit and push to dev

Let’s take a look in Cloud Source Repos, using the Cloud Console:

Branches in Google Cloud Source Repos

You can see there are two branches.
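You can also sanity-check the branch state from Cloud Shell instead of the Console, since git itself can list branches. A self-contained illustration, using a throwaway repo rather than the lab’s sample-app:

```shell
# Create a throwaway repo with a master and a dev branch, then list them
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -qb master
git config user.email demo@example.com
git config user.name demo
echo hello > README.md
git add . && git commit -qm "Initial commit"
git checkout -qb dev

# Shows both branches, with * marking the current one (dev)
git branch
```

In the lab repo, git branch -a additionally shows the remote-tracking branches pushed to Cloud Source Repositories.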

Task 3 — Create the Cloud Build Triggers

We will now create two triggers:

  • The first trigger — sample-app-prod-deploy — listens for changes on the master branch, builds a Docker image of your application, pushes it to Google Artifact Registry, and deploys the latest version of the image to the prod namespace in your GKE cluster.
  • The second trigger — sample-app-dev-deploy — listens for changes on the dev branch, builds a Docker image of your application, pushes it to Google Artifact Registry, and deploys the latest version of the image to the dev namespace in your GKE cluster.
gcloud builds triggers create cloud-source-repositories \
--name="sample-app-prod-deploy" \
--repo="sample-app" \
--branch-pattern="^master$" \
--build-config="cloudbuild.yaml"

gcloud builds triggers create cloud-source-repositories \
--name="sample-app-dev-deploy" \
--repo="sample-app" \
--branch-pattern="^dev$" \
--build-config="cloudbuild-dev.yaml"
Creating Cloud Build triggers

After setting up the triggers, any change pushed to a matching branch triggers the corresponding Cloud Build pipeline, which builds and deploys the application as specified in the cloudbuild.yaml files.
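The --branch-pattern values are regular expressions, which is why they are anchored with ^ and $. A quick local illustration of how the anchoring behaves (grep -E stands in here for Cloud Build’s matcher; these simple patterns behave identically in both):

```shell
# ^dev$ matches only the branch literally named "dev", not e.g. "dev-feature"
matches() { printf '%s' "$1" | grep -Eq "$2" && echo match || echo no-match; }

matches dev '^dev$'           # match
matches dev-feature '^dev$'   # no-match
matches master '^master$'     # match
```

Without the anchors, a pattern like dev would also fire for branches such as dev-feature or my-dev-branch.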

We can verify the triggers have been created, by looking at Cloud Build → Triggers, in the Console:

Verifying the triggers

Task 4 — Deploy the first versions of the application

First we’ll build the first development deployment. Open up Cloud Shell Editor, and take a look at cloudbuild-dev.yaml .

Update all instances of <version> to “v1.0” (without the quotes), and copy the container image name:

Editing source files in Cloud Shell Editor

Edit dev/deployment.yaml and update <todo> with the correct container image name. We need the container image name in dev/deployment.yaml and cloudbuild-dev.yaml to match. Also update the PROJECT_ID variable.

Update the image path
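If you prefer to make these substitutions from the shell rather than the Editor, sed works here too. A sketch against a stand-in file — the real dev/deployment.yaml has more content than this, and the image name hello-cloudbuild-dev below is a placeholder, so copy the real one from cloudbuild-dev.yaml as described above:

```shell
# Stand-in for the placeholder line found in the lab file (an assumption;
# check the real file before running sed against it)
cat > /tmp/deployment-snippet.yaml <<'EOF'
image: <todo>
EOF

REGION=us-east1        # example value; use your supplied region
PROJECT_ID=my-project  # example value; use your lab project ID

# Replace the <todo> placeholder with the full container image path
sed -i "s|<todo>|${REGION}-docker.pkg.dev/${PROJECT_ID}/my-repository/hello-cloudbuild-dev:v1.0|" \
  /tmp/deployment-snippet.yaml

cat /tmp/deployment-snippet.yaml
```

Using | as the sed delimiter avoids having to escape the slashes in the image path.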

Now commit the changes to the dev branch.

git add .
git commit -m "Updated dev version and container"
git push origin dev

This will automatically trigger the build job. In the Cloud Build dashboard, you’ll see the trigger has started the build:

Triggering the build in Cloud Build

Once the pipeline has completed, we can verify that the development-deployment application was deployed into the dev namespace.

kubectl get deployments -n dev

Note how we have to add -n dev when we want to retrieve an object in a particular namespace.

Viewing the deployments

We’re now asked to expose the development-deployment as a LoadBalancer service named dev-deployment-service on port 8080, with the container’s target port set to the one specified in the Dockerfile. So we need to create a service to expose the deployment. There’s no existing service YAML file. We could create one, but the easiest way is just to run this imperative command:

# Expose the deployment as a LoadBalancer service on 8080
kubectl expose deployment development-deployment -n dev \
--name=dev-deployment-service --port 8080 --type LoadBalancer

# We can get the service external IP address like this
kubectl get svc dev-deployment-service -n dev

Again, note the use of the -n dev namespace parameter.

The service, including its external IP

It takes a couple of minutes to provision the LB with the external IP address. Once done, we can navigate to the URL, e.g. http://35.221.28.200:8080/blue. It looks like this:

Opening the page from the browser
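The external IP shows as pending until the load balancer is provisioned. Once you have the IP, the URL is just IP, port, and path. The kubectl jsonpath line below requires the live cluster, so it is shown commented out, and svc_url is my own hypothetical helper, not a lab command:

```shell
# On the live cluster you would extract just the IP like this:
#   IP=$(kubectl get svc dev-deployment-service -n dev \
#          -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Hypothetical helper: assemble the URL to open in the browser (or curl)
svc_url() { printf 'http://%s:%s%s\n' "$1" "$2" "$3"; }

svc_url 35.221.28.200 8080 /blue   # http://35.221.28.200:8080/blue
```

You could then smoke-test the endpoint with curl "$(svc_url "$IP" 8080 /blue)".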

Now we’ll build the first production deployment.

# switch to master branch
git checkout master

Back in the Editor, update sample-app/cloudbuild.yaml and replace <version> with “v1.0” (without the quotes). Again, save the full container image path.

Update container image name and PROJECT_ID in prod/deployment.yaml .

git add .
git commit -m "Updated prod version and container"
git push origin master

Now check Cloud Build and verify that the deploy pipeline has run.

Verifying the build has run in Cloud Build

Then verify the production-deployment application was deployed into the prod namespace.

# Verify the deployment was deployed into the namespace
kubectl get deployments -n prod
Viewing the production deployment

Now we create another service to expose the deployment:

# Expose the deployment as a LoadBalancer service on 8080
kubectl expose deployment production-deployment -n prod \
--name=prod-deployment-service --port 8080 --type LoadBalancer

# We can get the service external IP address like this
kubectl get svc prod-deployment-service -n prod

Task 5 — Deploy the second versions of the application

Switch back to the dev branch:

# switch to dev branch
git checkout dev

Now we update main.go as described in the instructions. I.e. we add a handler for the /red URL.

Then update cloudbuild-dev.yaml and update the Docker image version to v2.0. Then also update the version in /dev/deployment.yaml .
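I won’t reproduce the lab’s main.go here, but the change is mechanical: duplicate the existing /blue handler and rename it for /red. As a hedged illustration of that kind of edit — the handler line below is invented for this sketch, not the lab’s actual code:

```shell
# Hypothetical Go line standing in for the real /blue handler registration
blue_line='http.HandleFunc("/blue", blueHandler)'

# Derive the /red equivalent by substitution; hand-check the result in main.go
red_line=$(printf '%s\n' "$blue_line" | sed 's/blue/red/g; s/Blue/Red/g')
printf '%s\n' "$red_line"   # http.HandleFunc("/red", redHandler)
```

In practice it’s quickest to copy the blue handler function and its registration in the Editor, then rename both by hand.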

Now we can commit and push again:

git add .
git commit -m "v2"
git push origin dev

We can verify that the build has run in the Cloud Build History page. You should see a second invocation of the sample-app-dev-deploy trigger:

Viewing the build history in Cloud Build

Check the /red entry point is working:

Viewing the /red end point in our browser

Switch back to the master branch:

# switch to master branch
git checkout master

Now make the requested changes to main.go, just as before. And update cloudbuild.yaml and prod/deployment.yaml to reflect v2.0.

Now commit and push:

git add .
git commit -m "v2 prod"
git push origin master

Check the build has run in the Cloud Build History page, and verify the deployment has been deployed to the prod namespace with the v2.0 image.

# check the image version
kubectl describe deployment production-deployment -n prod
Verifying the image version used by the deployment

Now we can test the service. If you’ve forgotten the IP address, here’s how you can retrieve it:

kubectl get svc prod-deployment-service -n prod

Task 6 — Roll back the production deployment

We’re asked to roll back our production deployment to the previous version. Here I’m going to do it with the Cloud Console.

Open Cloud Build → Dashboard. Click on Build History: View all for the sample-app-prod-deploy:

Viewing Build History in Cloud Build

Select the earlier build, then click Rebuild:

Rebuilding a previous build

The build takes a minute or so. If we now go to the /red entry point, we should find that the page returns a 404.

The endpoint is now gone
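As an aside, Kubernetes has its own rollback mechanism, kubectl rollout undo, which reverts a Deployment to its previous ReplicaSet without rebuilding anything. The lab instructions use the Cloud Build Rebuild route, so treat this as reference only; the helper below just prints the command, to keep the sketch runnable without a cluster:

```shell
# Print (rather than run) the kubectl-native rollback for a deployment
rollback_cmd() { printf 'kubectl rollout undo deployment/%s -n %s\n' "$1" "$2"; }

rollback_cmd production-deployment prod
# kubectl rollout undo deployment/production-deployment -n prod
```

The difference matters here: rollout undo changes only the live Deployment, whereas the Rebuild route re-runs the pipeline so Cloud Build’s history reflects the rollback.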

Conclusion

And we’re done! Not too tricky.

I hope this has been a helpful walkthrough, guiding you through some basic CI/CD pipeline development in Google Cloud.

Before You Go

  • Please share this with anyone that you think will be interested. It might help them, and it really helps me!
  • Feel free to leave a comment 💬.
  • Follow and subscribe, so you don’t miss my content. Go to my Profile Page, and click on these icons:
Follow and Subscribe
