Gitlab Continuous Deployment Pipeline to GKE with Helm

Raju Dawadi
Google Cloud - Community
Mar 30, 2019

I previously shared how to use the Google Cloud Build service from Gitlab Community Edition (CE) with a shell runner. You can grab the idea from that post on building Docker images and storing them on Google Container Registry (GCR).

In this post, we will get familiar with the pipeline from Docker build to deployment on GKE (Google Kubernetes Engine), which applies to both the CE and EE (Enterprise Edition) of Gitlab.

image source: blog.pactosystems.com

So, the flow would be:

  1. Create a Kubernetes Cluster on GKE
  2. Configure a service account credential for accessing the cluster and using the Cloud Build service
  3. Create a Gitlab project and add the gitlab-ci pipeline along with cloudbuild.yaml and a Dockerfile
  4. Install a Helm chart to ease the deployment
  5. Get continuous deployment up and running
  6. Secure the GKE cluster

Let’s start by creating the GKE cluster. Head over to the cluster creation page in your Google Cloud project, choose your preferred zone, add nodes (you may prefer auto-scaling and pre-emptible nodes if this is for testing), preferably enable VPC-native networking, tune the rest of the settings as needed and hit create.

Create GKE Cluster

In a few minutes, the cluster will be ready.

Now, it is time to create a service account for accessing GKE resources from Gitlab and for triggering Cloud Build. Head over to the service account creation page under IAM (Identity and Access Management) & admin to add a new service account. Give it a suitable name, attach the permissions, create a JSON key and hit done.

Update:

Instead of granting the Storage Admin and Project Viewer permissions, we can create a separate GCS bucket for storing the Cloud Build logs and grant the objectCreator permission to the service account with the following steps:

gsutil mb gs://[BUCKET-NAME]/
gsutil iam ch serviceAccount:[SERVICE-ACCOUNT-ID]:objectCreator gs://[BUCKET-NAME]

e.g.
gsutil mb gs://gitlab-gke-cloudbuild-logs/
gsutil iam ch serviceAccount:gitlab-gke@pv-lb-test.iam.gserviceaccount.com:objectCreator gs://gitlab-gke-cloudbuild-logs

We can use Cloud Shell for the above commands or add the permission through the console.

Time to go Gitlab !!

Let’s create a new project on gitlab.com and add a few files to it: Dockerfile, cloudbuild.yaml, .gitlab-ci.yml and the project source files. Alongside, we will keep the service account credential as a Gitlab environment variable. As multi-line values are not supported, let’s base64-encode the file and use the encoded value in the environment variable, which we will decode when triggering Gcloud resources. Encode the file:

base64 /path/to/credential.json | tr -d '\n'

Add the variable

Here is our simple Dockerfile:

I am using a multi-stage build, with the resulting artifact running on a distroless base image ("language focused docker images, minus the operating system"), which is lightweight and contains only the application and its runtime dependencies.
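As a rough sketch, assuming the application is the single main.go shown further below and a Go 1.12 builder image (adjust the versions and the distroless variant to your app), the Dockerfile could look like:

# Build stage: compile a statically linked Go binary
FROM golang:1.12 AS builder
WORKDIR /app
COPY main.go .
RUN CGO_ENABLED=0 go build -o /server main.go

# Final stage: distroless base image with only the binary and runtime dependencies
FROM gcr.io/distroless/base
COPY --from=builder /server /server
EXPOSE 8080
ENTRYPOINT ["/server"]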

cloudbuild.yaml is the build configuration file with the tasks to run on the Google Cloud Build service. We are pushing the Docker image to Google Container Registry (GCR).
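A sketch of such a cloudbuild.yaml, assuming the image is named gitlab-gke and the BRANCH_NAME substitution is passed in from the pipeline (the logsBucket line pairs with the objectCreator setup above and is optional):

# Build the image from the Dockerfile in the repository root
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/gitlab-gke:$BRANCH_NAME', '.']
# Push the built image to Google Container Registry
images:
  - 'gcr.io/$PROJECT_ID/gitlab-gke:$BRANCH_NAME'
# Write build logs to the dedicated bucket created earlier
logsBucket: 'gs://gitlab-gke-cloudbuild-logs'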

And a simple Go HTTP server, main.go:
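A minimal version that matches the behaviour tested below (responding to /raju with a greeting) would be:

package main

import (
    "fmt"
    "log"
    "net/http"
)

// handler greets the caller with whatever follows the leading slash,
// e.g. /raju responds with "Hello, raju!"
func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    log.Println("listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}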

We can test the Dockerfile locally with a build and run:

$ docker build -t gitlab-gke .
$ docker run -d -p 8080:8080 --name gitlab-gke gitlab-gke

Send a few requests to the server from the browser: http://localhost:8080/raju

Response: Hello, raju!

Configuring Gitlab CI

We need two main steps in .gitlab-ci.yml, the de facto file for using the Gitlab pipeline service:

  1. Publish: This step uses the cloudbuild spec file to perform the build in the Cloud Build environment, which uses the Dockerfile to build the image and then pushes it to GCR.
  2. Deploy: For deployment of the newly built image to the GKE cluster, we configure it to run when there is a new commit on the master branch. You can create a trigger based on your needs, such as after creating a tag, a manual trigger, etc.

The following stage publishes the image:

publish-image:
  stage: publish
  image: dwdraju/alpine-gcloud
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml --substitutions BRANCH_NAME=$CI_COMMIT_REF_NAME
  only:
    - master

Here, we are using dwdraju/alpine-gcloud, a simple Alpine Linux based image which has the Google Cloud SDK installed and can access Gcloud resources once the credential file is added.

If all is good and the code is merged to the master branch, it should trigger a new job and publish a new image to Google Container Registry.

Create Helm Chart

Helm is a package manager for Kubernetes which eases the creation as well as the versioning and management of k8s resources. It has been a CNCF project for a few months now.

Next, create a new chart for our application. We are using Helm version 3, the latest release, which behaves more like the kubectl binary and no longer needs the Tiller agent.

$ helm create gitlabgke

This scaffolds the chart with a few Kubernetes manifest templates.
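Depending on your Helm version, the scaffold looks roughly like this:

gitlabgke/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml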

Get the Kubernetes cluster credentials for accessing GKE and installing the Helm chart:

$ gcloud container clusters get-credentials [cluster-name] --zone [cluster-zone] --project [project-name]

Response:

Fetching cluster endpoint and auth data.
kubeconfig entry generated for gitlab-cluster.

Now, we need to adjust a few configs on the default helm chart:

  1. Service type: NodePort
  2. Internal port: 8080 (or whatever port your application uses)
  3. A default route, if you don’t have a domain name and have to access the service via the IP address of the global load balancer
  4. Image repository and tag (we are using gcr.io/[project-name]/gitlab-gke:master for now)

Here is the commit for the changes: https://gitlab.com/dwdraju/gitlab-gke/commit/9f02cf83f39be71b62a8ed07589bfc538bc43349
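For reference, a fragment of values.yaml after those adjustments might look roughly like this (the field names follow the default chart scaffold and can differ between Helm versions):

image:
  repository: gcr.io/[project-name]/gitlab-gke
  tag: master
  pullPolicy: IfNotPresent

service:
  type: NodePort
  internalPort: 8080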

Time to install the Helm chart

helm install gitlabgke .

In a few minutes the pod will be running and the health check passing, with a new ingress IP which can be obtained via kubectl get ing. You can now access the IP to get a hello :)

http://[ingress-ip]/myname
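A quick check from the terminal, with [ingress-ip] taken from kubectl get ing:

$ curl http://[ingress-ip]/myname
Hello, myname!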

Back to Gitlab CI for Continuous Deployment

We need to add a new deploy stage, since we already have the publish stage which sends the new image to the container registry.

deploy-image:
  stage: deploy
  image: dwdraju/gke-kubectl-docker
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl set env deployment/$K8S_DEPLOYMENT CI_COMMIT_SHA=$CI_COMMIT_SHA
    - kubectl set image deployment/$K8S_DEPLOYMENT $K8S_IMAGE=gcr.io/$GCP_PROJECT_ID/$IMAGE_NAME:$CI_COMMIT_REF_NAME
  only:
    - master

So, our final gitlab-ci.yml file looks like this:
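A sketch of the combined file, which is just the two jobs shown above plus the stage declarations:

stages:
  - publish
  - deploy

publish-image:
  stage: publish
  image: dwdraju/alpine-gcloud
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml --substitutions BRANCH_NAME=$CI_COMMIT_REF_NAME
  only:
    - master

deploy-image:
  stage: deploy
  image: dwdraju/gke-kubectl-docker
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl set env deployment/$K8S_DEPLOYMENT CI_COMMIT_SHA=$CI_COMMIT_SHA
    - kubectl set image deployment/$K8S_DEPLOYMENT $K8S_IMAGE=gcr.io/$GCP_PROJECT_ID/$IMAGE_NAME:$CI_COMMIT_REF_NAME
  only:
    - master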

If everything is set up correctly, we will see the gitlab jobs succeeding.

Here we simply changed the image after setting the environment variable CI_COMMIT_SHA, but we can also use helm to upgrade the version by changing the tag value in the chart’s values.yaml file.

$ helm upgrade gitlabgke .

You can try helm upgrade in the pipeline as well. For that, you can use my helm-docker image; its github repo has a usage example.
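As a sketch, such a helm-based deploy job could look like the following; the image name and chart path here are assumptions, and any image that bundles both gcloud and helm will work:

deploy-helm:
  stage: deploy
  # assumed image name; see the helm-docker repo mentioned above
  image: dwdraju/helm-docker
  script:
    - echo $GCLOUD_SERVICE_KEY | base64 -d > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE --project $GCP_PROJECT_ID
    # upgrade the release in place, pointing the chart at the freshly built tag
    - helm upgrade gitlabgke ./gitlabgke --set image.tag=$CI_COMMIT_REF_NAME
  only:
    - master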

Securing Cluster

  • If you are using self-hosted CE gitlab, enable master authorized networks on the GKE cluster and whitelist the Gitlab IP address.
  • For giving access to the pods, it’s better to create a specific service account for that. Here is the commit for configuring that.
  • We are using a distroless base image in this example, which might not fit all cases, but it’s better to use a minimal docker image like alpine to reduce the attack surface and keep the image size small.
  • GKE has started offering Istio-enabled clusters, which provide security features with strong identity, powerful policy, transparent TLS encryption, and authentication, authorization and audit (AAA). Give it a try and you will love Istio.

That’s all for now. If you have better ways to make this more robust, feel free to drop a word in the comments. You can find me on linkedin and twitter.
