GitHub Actions Self-Hosted Runner on Kubernetes

Puru Tuladhar
Jun 22 · 5 min read

Deploy a scalable GitHub Actions self-hosted runner on Kubernetes using Helm.

Why Self-Hosted Runner?

Self-hosted runners are ideal for use cases where you need to run workflows in a highly customizable environment, with more granular control over hardware, security, operating system, and software tools than GitHub-hosted runners provide.

Self-hosted runners can be physical, virtual, in a container, on-premises, or in a cloud. In this guide, we’ll deploy one as a container in a Kubernetes cluster on AWS.

Deploy Kubernetes Cluster (optional)


If you already have an existing K8s cluster, feel free to skip this step.

In this guide, we’ll deploy a managed K8s cluster on AWS using eksctl, the official CLI for Amazon EKS. It is written in Go, uses CloudFormation under the hood, and is by far the easiest way to spin up a managed Kubernetes cluster on AWS. See Installing eksctl.

Create Kubernetes Cluster

Our cluster will consist of a single worker node (c6g.large: 2 vCPU, 4 GiB RAM) in the us-east-1 region with a dedicated VPC. Feel free to modify the cluster config as per your requirements. See more example configs.

Save the following cluster config as cluster-config.yaml.
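
The config itself is missing from this page. A minimal sketch consistent with the cluster name, region, node group name, and instance type that appear elsewhere in this guide might look like this (the desired capacity follows the single-node setup described above; everything else is illustrative):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: github-actions      # cluster name seen in the eksctl output
  region: us-east-1

nodeGroups:
  - name: ng-1              # node group name seen in the eksctl output
    instanceType: c6g.large # 2 vCPU, 4 GiB RAM (Graviton)
    desiredCapacity: 1      # single worker node
```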

And run the following command using the above cluster config:

$ eksctl create cluster -f cluster-config.yaml

NOTE: The cluster creation may take up to 15–20 minutes.

2021-06-22 19:14:19 [✔]  EKS cluster "github-actions" in "us-east-1" region is ready

Once the cluster is created and ready, you will find that the cluster credentials were automatically added to your kubeconfig (~/.kube/config) by eksctl.

Now, verify the cluster connectivity, access and nodes status:

$ kubectl get nodes
$ kubectl get namespaces

Deploy Action Runner Controller using Helm

Helm is a package manager for Kubernetes that makes it easy to install and manage Kubernetes applications. See Installing Helm.


What is actions-runner-controller?

actions-runner-controller operates self-hosted runners for GitHub Actions on a Kubernetes cluster. It provides CRDs (Custom Resource Definitions) such as RunnerDeployment, which allow us to easily deploy scalable self-hosted runners on Kubernetes.

Installation of cert-manager

cert-manager is required by the actions-runner-controller for certificate management of its admission webhook.

# Add repository
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
# Install chart
$ helm install --wait --create-namespace --namespace cert-manager cert-manager jetstack/cert-manager --version v1.3.0 --set installCRDs=true
# Verify installation
$ kubectl --namespace cert-manager get all

GitHub Personal Access Token

Next, we need to create a Personal Access Token (PAT) which will be used by the controller to register self-hosted runners to GitHub Actions.

  1. Login to your GitHub account and navigate to https://github.com/settings/tokens
  2. Click the Generate new token button.
  3. Select the repo scope.
  4. Click Generate token.
Fig: Generate Personal Access Token

Now, store the access token in a YAML file called custom-values.yaml, as such:

authSecret:
  github_token: REPLACE_YOUR_TOKEN_HERE

Installation of actions-runner-controller

We’re now ready to install the controller using Helm.

# Add repository
$ helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
# Install chart
$ helm install -f custom-values.yaml --wait --namespace actions-runner-system --create-namespace actions-runner-controller actions-runner-controller/actions-runner-controller
# Verify installation
$ kubectl --namespace actions-runner-system get all

Deploy Self-Hosted Runner

We now have everything in place to deploy a self-hosted runner tied to a specific repository.

First, create a namespace to host the self-hosted runner resources.

$ kubectl create namespace self-hosted-runners

Next, save the following K8s manifest file as self-hosted-runner.yaml, and modify the following:

  • Replace the repository value with your own repository (in owner/name format).
  • Adjust the replica count and resource requests as required.
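
The manifest itself is missing from this page. A minimal sketch using the RunnerDeployment CRD provided by actions-runner-controller might look like this (the resource name, repository placeholder, and resource values are illustrative assumptions):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: self-hosted-runner
spec:
  replicas: 1
  template:
    spec:
      # Replace with your own repository (owner/name)
      repository: your-username/your-repository
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
```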

And apply the Kubernetes manifest:

$ kubectl --namespace self-hosted-runners apply -f self-hosted-runner.yaml

Verify the runner is deployed and is in the Ready state:

$ kubectl --namespace self-hosted-runners get runner

Now, navigate to your repository’s Settings → Actions → Runners page to view the registered runner.

Fig: Registered Runners

🚀 We’re now ready to give our self-hosted runner a try!

Create a workflow to test your self-hosted runner

Save and commit the following sample GitHub Actions workflow under .github/workflows/ in the repository where the self-hosted runner is registered.

NOTE: The important part of this workflow is the runs-on: self-hosted line, which tells GitHub Actions to schedule the job on our self-hosted runner instead of a GitHub-hosted one.
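
The workflow file itself is missing from this page. A minimal sketch consistent with the manual trigger and the Hello World name mentioned below might look like this (saved, for example, as .github/workflows/hello-world.yaml; the job and step contents are illustrative):

```yaml
name: Hello World

# Allow the workflow to be triggered manually via the Run workflow button
on: workflow_dispatch

jobs:
  hello:
    # The important part: target our self-hosted runner
    runs-on: self-hosted
    steps:
      - name: Say hello
        run: echo "Hello World from $(hostname)"
```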

Now, navigate to the Actions tab, where you will see the Hello World workflow listed. Let’s trigger it manually by clicking Run workflow.

Fig: Manually Trigger Workflow

and voila! 🎉 The workflow has run successfully on our self-hosted runner, and we can see all the steps and logs.

Fig: Workflow Summary

Clean-up Kubernetes Cluster (optional)

Once you’re done exploring the self-hosted runner, you can easily destroy the cluster and its associated resources (VPC, etc.).

$ eksctl delete cluster -f cluster-config.yaml

Output:

2021-06-22 20:16:02 [ℹ]  eksctl version 0.54.0
2021-06-22 20:16:02 [ℹ] using region us-east-1
2021-06-22 20:16:02 [ℹ] deleting EKS cluster "github-actions"
2021-06-22 20:16:06 [ℹ] deleted 0 Fargate profile(s)
2021-06-22 20:16:10 [✔] kubeconfig has been updated
2021-06-22 20:16:10 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-06-22 20:16:23 [ℹ] 2 sequential tasks: { delete nodegroup "ng-1", delete cluster control plane "github-actions" [async] }
2021-06-22 20:16:23 [ℹ] will delete stack "eksctl-github-actions-nodegroup-ng-1"
2021-06-22 20:16:23 [ℹ] waiting for stack "eksctl-github-actions-
2021-06-22 20:18:21 [ℹ] will delete stack "eksctl-github-actions-cluster"
2021-06-22 20:18:22 [✔] all cluster resources were deleted

Finally, remove the dangling offline runner registration from the repository as well.

Fig: Removing Registered Runners

Geek Culture
