End-to-end Continuous Delivery on Google Cloud using native services

Giovanni Galloro
Google Cloud - Community
9 min read · May 12, 2023

This article guides you through running an example Continuous Delivery pipeline on GCP from development to production, using GCP-native services such as Cloud Code, Cloud Build, Cloud Deploy, and Anthos Service Mesh, together with open-source tools such as Skaffold, Kustomize, and the Kubernetes Gateway API.

The assets used in this article are available in this GitHub repository: https://github.com/ggalloro/sw-delivery-on-gcp

Following the instructions below, you can walk through an example flow in which:

  1. A developer forks the application repo into their GitHub account.
  2. The developer makes a change to the code using Cloud Shell Editor and Cloud Code; the change is immediately deployed to their dev cluster running on Minikube in Cloud Shell.
  3. When they are happy with the change, they open a pull request against the main repo.
  4. The QA team adds a specific comment to the PR, which automatically runs a Cloud Build trigger that builds a container with Skaffold, creates a release in Cloud Deploy, and rolls it out to a QA GKE cluster where usability tests can be run.
  5. Once the QA team is satisfied, the PR is merged, which runs another trigger that promotes the release to a prod GKE cluster. The Cloud Deploy prod target requires approval, so an approval request is raised; the App Release team checks the rollout and approves it, and the app is released to production as a canary at 50% (in the real world a canary would use a smaller percentage, but 50% is used here so you can easily observe the canary from a single client).
  6. After checking the canary release, the App Release team advances the rollout to 100%.

What you need

  • A GCP project with the GKE, Cloud Build, Cloud Deploy, and Artifact Registry APIs enabled
  • A main Google user account with the project owner role on the project, used to act as the Platform / QA / App Release teams
  • A GitHub account
  • An additional Google user account that will act as the ‘developer’; this account should also have a separate GitHub account to fork the repo into

Preparation

  1. Create 2 GKE Clusters with Anthos Service Mesh enabled: one for the QA environment and the other for the prod environment, both in the same location (zone or region).
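
If you need a starting point, the commands below are a minimal sketch of one way to do this with managed Anthos Service Mesh; the cluster names, zone, machine type, and project ID are placeholders, not values taken from the tutorial repo:

# Create the QA and prod clusters (ASM generally needs nodes with at least 4 vCPUs)
gcloud container clusters create qa-cluster --zone us-central1-b --machine-type e2-standard-4 --workload-pool=YOUR_PROJECT.svc.id.goog
gcloud container clusters create prod-cluster --zone us-central1-b --machine-type e2-standard-4 --workload-pool=YOUR_PROJECT.svc.id.goog
# Register both clusters to the project fleet and enable managed Anthos Service Mesh on them
gcloud container fleet memberships register qa-cluster --gke-cluster=us-central1-b/qa-cluster --enable-workload-identity
gcloud container fleet memberships register prod-cluster --gke-cluster=us-central1-b/prod-cluster --enable-workload-identity
gcloud container fleet mesh enable --project YOUR_PROJECT
gcloud container fleet mesh update --management automatic --memberships qa-cluster,prod-cluster --project YOUR_PROJECT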

2. Apply K8s Gateway API CRDs to both clusters:

kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
{ kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.1" | kubectl apply -f -; }

3. Create an Artifact Registry Repository to store your images
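
A hedged one-liner for this step; the repository name and location are examples, so use your own values:

# Docker-format Artifact Registry repository to hold the tutorial images
gcloud artifacts repositories create cd-on-gcp-repo --repository-format=docker --location=us-central1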

4. Fork this repo to your GitHub account and clone it locally; this will be used as the application repo for the tutorial.

5. Run setup.sh from the local repo clone and follow the prompts to insert your GCP project, cluster names and location, Artifact Registry repository, and Cloud Deploy delivery pipeline region. Then commit and push to your fork.

6. Apply gateway.yaml to both clusters to create a Gateway resource using the Istio Ingress Class
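
For example, assuming you have kubectl contexts configured for both clusters (the context names below are placeholders):

# Apply the Gateway manifest to each cluster using its own context
kubectl apply -f gateway.yaml --context YOUR_QA_CONTEXT
kubectl apply -f gateway.yaml --context YOUR_PROD_CONTEXT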

7. Create a Cloud Deploy delivery pipeline using the manifest provided (replace yourregion and yourproject with your values):

gcloud deploy apply --file=delivery-pipeline.yaml  --region=yourregion --project=yourproject

This creates a pipeline with two stages, qa and prod, each using a Skaffold profile with the same name, plus two targets mapping the clusters above to the pipeline stages.
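
To double-check what was created, you can describe the pipeline and list its targets with standard gcloud commands (using your region and project):

gcloud deploy delivery-pipelines describe cd-on-gcp-pipeline --region=yourregion --project=yourproject
gcloud deploy targets list --region=yourregion --project=yourproject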

8. Run createrelease.sh to build your image and create your first Cloud Deploy release, named first-release.
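
A script like this typically chains a Skaffold build with a Cloud Deploy release. The sketch below shows the rough equivalent; the artifacts file name and the Artifact Registry path are assumptions, not necessarily what createrelease.sh does:

# Build the image with Skaffold and record the built artifact
skaffold build --default-repo=yourregion-docker.pkg.dev/yourproject/yourrepo --file-output=artifacts.json
# Create the release from the recorded artifact; this rolls it out to the first stage
gcloud deploy releases create first-release --delivery-pipeline=cd-on-gcp-pipeline --region=yourregion --build-artifacts=artifacts.json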

9. Promote your first release to the stable phase of the prod stage in Cloud Deploy from the GCP Console, and approve the promotion if needed.
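
If you prefer the CLI, the same promotion and approval can be done with standard gcloud Cloud Deploy commands; the rollout name below is a placeholder, since Cloud Deploy generates it:

gcloud deploy releases promote --release=first-release --delivery-pipeline=cd-on-gcp-pipeline --region=yourregion
# List the rollouts to find the generated rollout name, then approve it
gcloud deploy rollouts list --release=first-release --delivery-pipeline=cd-on-gcp-pipeline --region=yourregion
gcloud deploy rollouts approve ROLLOUT_NAME --release=first-release --delivery-pipeline=cd-on-gcp-pipeline --region=yourregion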

10. Create two Cloud Build triggers linked to your fork of the GitHub repo (a hedged CLI sketch follows the list):

  • The first trigger must be invoked by a pull request, with Comment control enabled, and use the build-qa.yaml build config
  • The second trigger must be invoked by a push to the main branch and use the release-prod.yaml build config
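
If you prefer to create the triggers from the CLI, something along these lines should work for a repo connected through the GitHub app; the trigger names are invented, and yourgithubuser/yourfork are placeholders for your fork's owner and name:

# PR trigger with comment control, running build-qa.yaml
gcloud beta builds triggers create github --name=pr-qa-trigger --repo-owner=yourgithubuser --repo-name=yourfork --pull-request-pattern="^main$" --comment-control=COMMENTS_ENABLED --build-config=build-qa.yaml
# Push-to-main trigger, running release-prod.yaml
gcloud beta builds triggers create github --name=merge-prod-trigger --repo-owner=yourgithubuser --repo-name=yourfork --branch-pattern="^main$" --build-config=release-prod.yaml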

11. Create one additional Chrome profile (or use a Chrome Incognito window); this will be used for the developer tasks. From this Chrome profile or window:

  • Log in to the additional GitHub account
  • Create another fork of the repo (this will be the developer fork in the flow) from the one forked by the main account
  • Log in to Google Cloud Shell
  • Configure a personal access token for the GitHub account
  • Clone the fork of the repo locally
  • Move into the local repo folder and launch Cloud Shell Editor with the repo folder added to the workspace using the command cloudshell workspace .

Execution

  1. Open Cloud Deploy in the GCP console and explore the cd-on-gcp-pipeline delivery pipeline: a release named first-release has been rolled out to the qa and prod stages

2. Explore the targets: click on the link to your prod cluster under Deployment targets; the production GKE cluster's GCP console page will open

3. Get the Gateway resource IP of your prod cluster with kubectl get gtw, put it into a browser, and you will see that your application is deployed in production:

4. From the developer Cloud Shell Editor in the developer Chrome window, click Cloud Code in the lower right corner, then select Control minikube from the panel that opens at the top, and then select Start

5. If asked, click AUTHORIZE on the Authorize Cloud Shell prompt

6. After minikube startup completes, click on the Cloud Code status bar (in the lower left corner) and then select Run on Kubernetes

7. When asked for the Skaffold profile to use, choose [default]

8. In the Output pane you will see the build start for the cdongcp-app application image

9. When deployment is complete, Skaffold/Cloud Code will print the exposed URL where the services have been forwarded; click the link and then Open web preview

10. You see the app front page displaying this message:

11. Now, let’s update the application to see the change reflected immediately in the deployment on the cluster: open the app.go file in the cdongcp-app folder in Cloud Shell Editor

12. Change the message on line 25 to “cd-on-gcp app updated in target: …”; you should see the build and deployment process start immediately

13. At the end of the deployment, click the forwarded URL again or refresh the browser window with the application to see your change deployed

14. Once the developer is happy with the change and wants to commit it, execute:

git add cdongcp-app/app.go
git commit -m "new feature"
git push

15. Go to the developer GitHub page containing the repository and create a pull request

16. You will see that some checks fail because the Cloud Build trigger requires a comment from the central repo owner (the QA team)

17. From the main browser window (the one with your main account logged in), go to the repository on GitHub, click on the new feature PR, and examine the code changes; you are acting as the QA team at this point

18. In the conversation, write /gcbrun in a new comment; this will make the Cloud Build trigger run build-qa.yaml, and you will see checks running on GitHub. As you can see from the build config file, this build will:

  • Build a container image with your updated code using skaffold build
  • Store the image in your Artifact Registry repository
  • Create a Cloud Deploy release (this will automatically roll out the release to the first stage of the pipeline, the QA cluster)

19. Go to Cloud Build -> History; you will see a build running. Click on it to see the logs

20. After the build completes you should see your container image uploaded to your Artifact Registry repository; the image tag will be the repository commit ID

21. From the GCP Console, go to Cloud Deploy; you should see your rollout to the QA stage of the pipeline completed (or in progress)

22. Get the Gateway resource IP of your QA cluster with kubectl get gtw, put it into a browser, and you will see the updated version of your application deployed in the QA environment:

23. Let’s pretend that the QA team now performs some usability tests; when they are happy, go back to the GitHub page from your main account and merge the PR

24. This will run the trigger linked to the release-prod.yaml build config, promoting the previously created release to the prod environment. If you go back to Cloud Build -> History you should see a new build running

25. Let’s look for a moment at how Cloud Deploy interacts with Skaffold profiles to manage manifest rendering for the different environments. Look at the skaffold.yaml file inside the repository: it has a main manifests: section, including the default Kubernetes resource manifests used for deployment, and a profiles: section with two profiles, qa and prod, pointing to two different folders. Depending on the profile used, Skaffold uses the manifests in the corresponding section for rendering and deployment. If you look at the delivery-pipeline.yaml manifest, you will see that the Cloud Deploy delivery pipeline has a profile with the same name associated with each stage; this is how Cloud Deploy manages manifest rendering of your application for different deployment environments.
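
If you want to see what each stage will actually receive, you can render the manifests locally with Skaffold. This is a rough sketch that assumes the profile names match the pipeline stages and that you run it from the repo root; --digest-source=tag just keeps Skaffold from trying to resolve image digests against the registry:

# Render the manifests for each profile and compare them
skaffold render -p qa --digest-source=tag > /tmp/qa-render.yaml
skaffold render -p prod --digest-source=tag > /tmp/prod-render.yaml
diff /tmp/qa-render.yaml /tmp/prod-render.yaml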

26. From the skaffold.yaml file you can also see that Skaffold is configured to use Kustomize to render manifests. Kustomize is a tool for Kubernetes that lets you customize resource manifests without editing the original files: you create a kustomization file that specifies the changes you want, and Kustomize applies them while leaving the originals untouched. With Kustomize you can define a base configuration and then apply patches and transformations to generate customized configurations for different environments or use cases; you can learn more from the Kustomize project docs. As you can see by browsing the kubernetes folder structure in the repo, in our case what changes is the TARGET variable, which has a different value for each environment, and the number of replicas for the deployment, which is 1 for dev and qa and 3 for prod.
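
To inspect the patched output of an overlay without deploying anything, you can render it directly with kubectl; the folder paths below are assumptions about the repo layout, so adjust them to the directories you actually find under the kubernetes folder:

# Render the overlays to stdout and compare the environments
kubectl kustomize kubernetes/qa
kubectl kustomize kubernetes/prod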

27. After the build completes, you will see an approval request in the Cloud Deploy pipeline

28. Click on Review; you will see a rollout that needs approval. Click on Review again

29. Click on the Approve button

30. If you go back to the Delivery Pipeline visualization in Cloud Deploy, you will see the rollout deployed to the canary phase

31. Get the Gateway resource IP of your prod cluster and execute the following command from a terminal (replace x.x.x.x with your gateway IP address): while true; do curl x.x.x.x; done. You should see responses from both the old and the new (canary) version, since your canary strategy has been set to 50% in the delivery pipeline. Keep the curl command running.

32. In the Cloud Deploy delivery pipeline in the GCP console, click Advance to stable, then click Advance; your rollout will advance to the stable phase and your application will be completely replaced with the updated version, as you can see from the curl responses.
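
The same advance can be done from the CLI; the rollout and release names below are placeholders (Cloud Deploy generates the rollout name), and stable is the phase ID the canary strategy uses:

gcloud deploy rollouts advance ROLLOUT_NAME --release=RELEASE_NAME --delivery-pipeline=cd-on-gcp-pipeline --region=yourregion --phase-id=stable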

33. In Cloud Deploy you will see the rollout deployed in prod.

Summary

By following the instructions in this article, you learned how to:

  • Use Cloud Code and Skaffold to optimize your development loop
  • Automate Continuous Integration tasks with Cloud Build
  • Manage deployment to different environments with Cloud Deploy and Skaffold profiles
  • Use Kustomize to patch your Kubernetes resource manifests for different environments
  • Use a canary deployment strategy with Cloud Deploy


Giovanni Galloro
Google Cloud - Community

Customer Engineer at Google specializing in container-based runtimes, Continuous Delivery tools and practices, and application networking.