How to manually install and configure GitLab Runners on GKE (Google Kubernetes Engine)

Paul W. · OVRSEA · May 21, 2019

When you use GitLab CI/CD, you have access to GitLab Shared Runners for running your jobs. There is absolutely no configuration to do; they are ready to use.

But Shared Runners have some disadvantages: they are limited in build time (2,000 minutes/month on the Free Plan), you can’t configure them, and they can be pretty slow…

That’s why GitLab lets us host our own runners. They can run wherever you want: locally on your machine, on a server, in a Docker container…

In this tutorial, we will see how to host our runners on Kubernetes, and particularly on Google Kubernetes Engine. At Ovrsea, we chose Kubernetes for its ability to autoscale: when we have a lot of jobs running, we want the number of available runners to increase automatically, and on the other hand, when no jobs are running, we don’t want to pay for idle capacity.

The GitLab interface offers an option to automatically create a Kubernetes cluster on GKE (Google Kubernetes Engine), but this is not what this article is about: that option is somewhat limited and can be tricky to configure afterwards.

Cluster Creation

First, you have to create a cluster on GKE.

Cluster creation interface on Google Cloud Platform

In the Node Pools options, you have to choose your machine type. I personally chose n1-standard-2 (2 vCPUs and 7.5 GB memory) and only 1 node.

Number of nodes and machine type

Click on “More options” to turn “Autoscaling” on and choose the minimum and maximum number of nodes you want. This allows the cluster to add nodes when there are too many pods on one node (one job in our CI/CD is one pod in a Kubernetes node).

I set 1 node minimum and 10 nodes maximum for autoscaling

If you are not comfortable with Kubernetes autoscaling, I suggest you read this article: https://dzone.com/articles/kubernetes-autoscaling-explained.

The rest of the configuration is up to you; I personally left the default settings.
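If you prefer the command line to the console, you can create an equivalent cluster with gcloud. This is just a sketch mirroring the settings above; the cluster name and zone are placeholders you should adapt:

# Create an autoscaling cluster equivalent to the console settings above
$ gcloud container clusters create gitlab-runners \
    --zone us-central1-a \
    --machine-type n1-standard-2 \
    --num-nodes 1 \
    --enable-autoscaling --min-nodes 1 --max-nodes 10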

Helm and Tiller Installation

We will install Helm, a package manager for Kubernetes, and Tiller, the server-side component through which Helm communicates with Kubernetes.

First, connect to the newly created cluster by clicking “Connect” in the clusters view.

Click “Connect”
And click “Run in Cloud Shell”

You should now have a terminal view at the bottom of the page with a prefilled line; just hit “Enter”.
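That prefilled command fetches the cluster credentials for kubectl. It should look something like this (your cluster name, zone, and project will differ):

# Configure kubectl to talk to your new cluster
$ gcloud container clusters get-credentials <your-cluster-name> --zone <your-zone> --project <your-project>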

Download Helm:

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.6.2-linux-amd64.tar.gz

Extract it:

$ tar zxfv helm-v2.6.2-linux-amd64.tar.gz
$ cp linux-amd64/helm .

Set authorizations for Tiller by creating a dedicated service account with the cluster-admin role:

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

Add the GitLab chart repository, install Tiller, and point its deployment at the service account we just created (the tiller-deploy deployment only exists once helm init has run, so the patch comes last):

$ ./helm repo add gitlab https://charts.gitlab.io
$ ./helm init
$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
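To check that Tiller came up correctly, here is a quick sanity check (the Tiller pod may take a few seconds to start, and the exact versions printed will vary):

# The tiller-deploy pod should appear in kube-system
$ kubectl get pods --namespace kube-system
# Should print both a client and a server version once Tiller is ready
$ ./helm version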

Configure your values.yml

Now, create a values.yml file, it will contain all the configuration for the GitLab Runners.

$ touch values.yml
$ nano values.yml

You can see all the available variables at https://gitlab.com/charts/gitlab-runner/blob/master/values.yaml, but I will cover the most important ones.

# values.yml
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: <your-GitLab-Runners-Registration-Token>
concurrent: 20
rbac:
  create: true
  clusterWideAccess: false
runners:
  image: ubuntu:18.04
  privileged: false
  cache:
    cacheType: s3
    cacheShared: true
    s3ServerAddress: s3.amazonaws.com
    s3BucketName: <your-S3-bucket>
    s3BucketLocation: us-east-1
    s3CacheInsecure: false
    secretName: s3access
  builds:
    memoryRequests: 4000Mi

gitlabUrl: Don’t change this; it will always be https://gitlab.com/ (unless you host your own GitLab instance).

runnerRegistrationToken: You can get this token in your CI/CD settings on GitLab, in the Runners section.

concurrent: The maximum number of jobs that can run at the same time.

rbac: For RBAC support, don’t change it.

runners: Configuration for deploying the runners; see below.

image: Default container image.

privileged: Needed for executing Docker commands inside the containers (Docker-in-Docker); leave it set to false, as we don’t need it.

cache: As jobs will run in different pods and nodes, if you use a cache to share dependencies between jobs, you have to store it somewhere external (see the .gitlab-ci.yml sketch after this list for how a job uses it). You can use either GCS on Google Cloud Platform or S3 on AWS; I personally use S3. You will need a secret filled with your credentials, which you can create with the following commands (in the namespace where the runner will be deployed):

For S3:

$ kubectl -n <your-namespace> \
    create secret generic s3access \
    --from-literal=accesskey="YourAccessKey" \
    --from-literal=secretkey="YourSecretKey"

For GCS:

$ kubectl -n <your-namespace> \
    create secret generic gcsaccess \
    --from-literal=gcs-access-id="YourAccessID" \
    --from-literal=gcs-private-key="YourPrivateKey"

builds: Configuration for the builds; you can specify here how much memory and CPU your jobs need. If a node runs out of memory or CPU, the cluster will spin up another node for you, thanks to autoscaling.
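For context, here is a minimal, hypothetical .gitlab-ci.yml job that benefits from the shared cache configured above; the job name, stage, and paths are just examples. The point is that node_modules/ saved by one pod can be restored by the next, even on a different node:

# .gitlab-ci.yml (illustrative sketch)
install_dependencies:
  stage: build
  script:
    - npm ci
  cache:
    # One cache per branch; restored from S3 on every pod that runs this job
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/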

You can now apply this configuration to your cluster:

$ ./helm install --namespace <your-namespace> --name gitlab-runner -f values.yml gitlab/gitlab-runner

Tadaa! Your runner is now ready to use!

Success message
Now, you can see your GitLab Runner in the Workloads menu
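You can also check from Cloud Shell that the runner pod is running (the exact pod name will differ):

# The gitlab-runner pod should be in the Running state
$ kubectl get pods --namespace <your-namespace>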

If you need to change the current configuration, you just have to edit your values.yml and upgrade your runner with the following command:

$ ./helm upgrade --namespace <your-namespace> -f values.yml gitlab-runner gitlab/gitlab-runner

Link GitLab with your runner

GitLab should now be linked to your runner, provided you specified the correct registration token in your values.yml file.

Runners in GitLab’s CI/CD settings

You can now trigger your jobs and see pods in action in the GKE Workloads menu.
