Hosting CTF challenges on a Kubernetes cluster

Sanskar Jaiswal · Published in Techloop · 7 min read · Mar 30, 2021

In this article, we will walk through how to host CTF challenges, reachable via HTTP(S) and TCP, on a k8s cluster.

Hosting CTF challenges is tricky: you’re hosting something that is meant to be poked at, and you need to be prepared if some Russian hacker finds an unintended vulnerability and decides to exploit it :)

Using k8s to host CTF challenges makes sense because it is “self healing”: if a pod backing a challenge Deployment goes down suddenly, the control plane will automatically spin up a replacement for it. This translates into minimal downtime. The challenges also need to be scalable and respond to traffic demand in an efficient way.

Deploying the challenges on a k8s cluster using Minikube

I’m gonna assume that you’ve already dockerized all your challenges; if you haven’t, you can refer to this article in our CTF series. Furthermore, kindly make sure you have Minikube and kubectl installed as well.

Once you’re ready with your dockerized challenges, it’s time to go ahead and push them to a container registry of your choice (we went ahead with Docker Hub).

docker build -t <username>/chall:latest .   # tag with your Docker Hub username/repo
docker push <username>/chall:latest

Once your images are pushed to a container registry, we can start writing all the YAML files to configure our cluster to use these images.

YAML config file to create a Deployment for a challenge.
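A minimal sketch of what such a Deployment can look like (the name, labels, image, and container port below are placeholders for your own challenge):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chall-deployment            # placeholder name
spec:
  replicas: 2                       # how many copies of the challenge to run
  selector:
    matchLabels:
      app: chall
  template:
    metadata:
      labels:
        app: chall                  # the Service below selects pods by this label
    spec:
      containers:
        - name: chall
          image: <username>/chall:latest   # the image pushed earlier
          ports:
            - containerPort: 3000          # placeholder port the challenge listens on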
YAML config file to create a ClusterIP service for the above Deployment.
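And a matching ClusterIP Service sketch, assuming the same app: chall label and port as in the Deployment above:

apiVersion: v1
kind: Service
metadata:
  name: chall-service               # placeholder name
spec:
  type: ClusterIP
  selector:
    app: chall                      # must match the Deployment's pod labels
  ports:
    - port: 3000                    # port exposed inside the cluster
      targetPort: 3000              # containerPort of the challenge pod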

Once we have these files ready, we can go ahead and apply them using the kubectl command.

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

Now let’s verify if everything worked like we wanted it to :-) Here’s one way to go about it:

  • Run kubectl get deployments to get a list of all your deployments along with their status. Choose a Deployment you’d like to test.
  • Run kubectl get ep to see all the endpoints (internal and external) of your cluster. Grab an endpoint related to the Deployment you wanna test.
  • Run kubectl get pods and grab the name of any pod belonging to the Deployment you wanna test.
  • Run kubectl exec -t -i {pod-name} -- /bin/bash to open up a shell inside the pod. You can then use curl to directly access the ClusterIP service, since we have the related endpoint.
  • Verify that the output looks okay; if it doesn’t, you probably have an issue with your Dockerfile or the YAML files we wrote above, so kindly double check. A sample run of these checks is shown below.
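As an example, the whole sequence might look something like this (the pod name, endpoint IP and port are purely illustrative):

kubectl get deployments
kubectl get ep
# e.g. chall-service    10.0.0.42:3000
kubectl get pods
kubectl exec -t -i chall-deployment-5f7d9c-x2k4m -- /bin/bash
# inside the pod:
curl 10.0.0.42:3000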

You can create a folder with all the deployment and service YAML files, and then just run kubectl apply -f {folder}/ to apply all the files at once.

Setting up ingress-nginx for load balancing

Now that we have our Deployments and ClusterIP Services up and running, we need a way for traffic to reach these from anywhere in the world.

One way to do this would be to use NodePorts instead of ClusterIPs and have a Compute Engine instance running a load balancing tool like HAProxy or Traefik. You can refer to this nice article if that’s how you want to go about it.

You can also use a LoadBalancer service as a part of your k8s cluster to load balance traffic between the nodes. We decided to use an Ingress resource, as it’s easy to set up and pretty convenient, especially for HTTP(S). Exposing ports for TCP connections requires a bit more effort though :)

An Ingress resource needs a related Ingress controller. The controller is necessary to fulfil the Ingress resource (using a LoadBalancer internally). We decided to go ahead with the NGINX Ingress controller, as it has excellent documentation, community support and NGINX is one of the industry standards for load balancing.

Our web questions, i.e., questions that need HTTP connections, were served at different paths of our main server (the Load Balancer), while questions that needed TCP connections were served by simply exposing the relevant port.

Enable the controller by running minikube addons enable ingress and then type out a YAML file as shown below.
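Here’s a rough sketch of what that Ingress manifest can look like, assuming a cluster recent enough to use the networking.k8s.io/v1 API; the path and service name are placeholders, and your real file would list one path per web challenge:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx   # tell the NGINX controller to handle this Ingress
spec:
  rules:
    - http:
        paths:
          - path: /chall-one             # placeholder path for one web challenge
            pathType: Prefix
            backend:
              service:
                name: chall-service      # the ClusterIP service created earlier
                port:
                  number: 3000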

This YAML file sets up our Ingress resource, which the NGINX Ingress controller will serve for all HTTP connections (for now). If you want all connections to be served over HTTPS, you can refer to this doc for the same.
You can apply this file the same way as above by simply running kubectl apply -f ingress-service.yaml

Now you can verify that everything’s a-okay by talking to the cluster directly, instead of having to curl a private ClusterIP service endpoint from inside a pod:

  • Run minikube ip to get the public IP address of your Minikube k8s cluster.
  • Run curl {ip-addr}:{port-number} and verify if the output is acceptable.

Exposing ports for TCP-only connections

Ingress resources are designed for HTTP(S) connections by default, but some CTF challenges (such as jails) require TCP-only connections. Let’s see how to host such challenges!

When we first enable the Ingress addon on our Minikube k8s cluster, it creates a few ConfigMap resources in the kube-system namespace. One of these is the tcp-services ConfigMap, which stores all the information the Ingress controller needs about handling TCP connections and the corresponding ports.

Patch to expose the necessary ports for TCP communication.
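Here’s a sketch of such a patch (patch.json), assuming a hypothetical jail challenge behind a ClusterIP service called jail-service on port 1337. Each key is the port the controller should expose, and each value is <namespace>/<service-name>:<service-port>:

{
  "data": {
    "1337": "default/jail-service:1337"
  }
}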

The above JSON file can be applied as a patch to the existing tcp-services ConfigMap; it tells the Ingress controller which ports to open up and which ClusterIP services it needs to forward traffic to.

We also need to patch the definition of our Ingress controller Deployment running in the kube-system namespace, to specify the port mapping on the controller itself, as shown below.
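A sketch of that patch (ingress-nginx-controller-patch.yaml), reusing the placeholder port 1337; the container name can differ between controller versions, so check your Deployment with kubectl describe first:

spec:
  template:
    spec:
      containers:
        - name: controller          # may be ingress-nginx-controller on some versions
          ports:
            - containerPort: 1337   # placeholder TCP port
              hostPort: 1337        # expose the same port on the node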

You can apply these patches by running:

kubectl patch configmap --namespace kube-system tcp-services --patch "$(cat patch.json)"

kubectl patch deployment --namespace kube-system ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)"

You can verify if everything is working by using a tool like netcat or telnet to access the necessary port.
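For example, with the placeholder port from above:

nc $(minikube ip) 1337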

Deploying the k8s cluster on GKE

Now that we have our cluster set up perfectly on Minikube, we can go ahead and deploy it on Google Kubernetes Engine (GKE). Creating a cluster is pretty straightforward on GKE, so I am just going to leave the official docs here, which should be all you need. If you’re confused about your node pool configuration, feel free to go through this article about our CTF stats and infra and make your decision accordingly.

Since our cluster will now be running on GKE and not on Minikube, we need to do things a bit differently. First off, we need to set up Helm. Helm can be thought of as a package manager for Kubernetes, and we’ll be using it to install the NGINX Ingress controller.

Connect to the cluster using the gcloud container clusters get-credentials command. Then go ahead and install Helm following the installation instructions. Now we can install the NGINX Ingress controller using Helm, as shown here.
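For reference, those steps look roughly like this; the cluster name, zone and the Helm release name ingress-nginx are all up to you:

gcloud container clusters get-credentials <cluster-name> --zone <zone>
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx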

We used Travis CI to deploy all the images to our GKE k8s cluster. We wrote a bash script containing a bunch of commands that Travis was instructed to run every time we wanted to deploy a new image to the cluster nodes. I won’t be talking about setting up Travis CI and configuring it to work with GKE, as that’s beyond the scope of this article, but here’s another excellent article about it.

On GKE, we also need to patch a Service named ingress-nginx-controller which, again, got created when we installed the Ingress controller using Helm. As seen below, we are just specifying the selector and the related port mappings. You can apply this patch in the same way we applied the patches mentioned above.
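A sketch of that Service patch, again using the placeholder port 1337. The selector labels shown are the defaults used by the ingress-nginx Helm chart (with a release named ingress-nginx), so double check them against kubectl describe svc ingress-nginx-controller:

spec:
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx    # matches the Helm release name
    app.kubernetes.io/component: controller
  ports:
    - name: jail-tcp              # placeholder name
      port: 1337                  # placeholder TCP port
      targetPort: 1337
      protocol: TCP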

One subtle yet important difference we noticed, which isn’t really mentioned anywhere (at least not that we could find), is that if you install the NGINX Ingress controller using Helm instead of the Minikube addon, all the ConfigMaps (such as the tcp-services one) and the ingress-nginx-controller Deployment get created in the default namespace instead of the kube-system namespace. Hence, we adjusted the commands in our bash script to work accordingly. You can find the bash script here, if you’re interested :D

It’s CTF challenge time!

We can finally start playing our CTF challenges. Grab the public IP address of the Ingress service and send a request to any of your defined paths, or try connecting to any of the exposed TCP ports, and everything should be working.

Note: GCP assigns the Ingress service an ephemeral IP address, which means your IP address can potentially (and probably will) change. Fortunately, provisioning a static IP address is very easy and free if it’s being used by a GCP Load Balancer.
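For instance, promoting the Load Balancer’s ephemeral IP to a static one can be as simple as the following (the address name, IP and region are placeholders):

gcloud compute addresses create ctf-static-ip \
    --addresses <ephemeral-ip> \
    --region <region>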

Thank you for reading this article! If you wish to go through the questions of our CTF or any config files, head over to the GitHub repo and feel free to leave a ⭐️!

If you enjoyed reading this article, do check out the rest of the articles in our CTF series:
