Structuring your Repository for CTF challenges

Rohan Mukherjee
Published in csictf
8 min read · Jul 26, 2020
Structuring by category

Overview

In this article, I will go over the structure of the repository containing the challenges for csictf 2020, the process of contribution, and the importance of containerizing challenges.

TL;DR

Throughout this article, I will refer to the ctf-challenges repository under the csivitu organization on GitHub to explain the structure we followed for csictf 2020. Check out the repository if you want to explore it yourself.

Getting Started

csictf 2020 was a 4-day CTF with a little over 2700 participants. It was the first time we were holding a CTF at this scale, so we needed a proper plan. We aimed to make around 60 challenges in about a month, alongside other work like infrastructure and sponsorship (which we will cover in other articles), so it would have been really difficult had we not planned out the repository structure and the contribution process early on. We had to keep the following things in mind:

  • The folders must be structured in a way that lets us automate the deployment process, so that if there is an issue with a challenge, we can redeploy it by just pushing to the master branch of the repository.
  • There were broadly three types of challenges:
    a) Challenges that had to be deployed on a server: These would be challenges in which the user would have to connect to a netcat server or log in through ssh.
    b) Challenges consisting of files to be given to the user: An example of this kind would be forensics tasks where the user would have to find the flag from a file they download.
    c) Mixed: These would be challenges where we require a server, and we also need to give the participants some files, for example, the source code.
  • We had to think of a way to identify which challenge falls under which category, so that they get deployed accordingly.
  • We decided on the flag format early on with the help of a regex (/^csictf{[\w_!@#?$%\.'"+:->]{5,50}}$/) so that we would not have to spend time changing flags later on.
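
For instance, a challenge author can sanity-check a candidate flag against this regex before committing it. A minimal check using GNU grep (the flag below is just an example, and \x27 stands in for the single quote inside the character class):

# Prints "valid" if the flag matches the csictf flag regex
echo 'csictf{s4mpl3_fl4g}' | grep -qP '^csictf{[\w_!@#?$%\.\x27"+:->]{5,50}}$' && echo valid || echo invalid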

Directories in Detail

Keeping the aforementioned things in mind, we decided that each category would have its own folder. Inside each of these folders, every challenge would have its own folder.
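
Sketched as a directory tree, the layout looks roughly like this (global-warming is a real challenge discussed later in this article; the other names are placeholders):

ctf-challenges/
├── pwn/
│   ├── global-warming/
│   └── <another-pwn-challenge>/
├── web/
│   └── <challenge-name>/
└── <other-categories>/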

Challenges in the Web category.

Inside each of these challenge folders, there would be the source code, and a challenge.yml file which specifies how the challenge is to be deployed.
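
We will get to the exact schema in a later article; as a rough, illustrative sketch (these field names are assumptions, not necessarily the exact keys we used), a challenge.yml could look like this:

name: global-warming
author: <author-handle>
category: pwn
type: hosted            # hosted, static, or mixed
flag: csictf{...}       # must match the flag regex above
port: 9999              # container port, for hosted challenges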

A challenge folder.

Each challenge folder also has a README.md file containing the write-up for the challenge, so that the official write-ups can be published as soon as the CTF ends. In addition, challenges that need to be hosted have a Dockerfile describing how the challenge container is built.

Finally, we decided on the categories, as listed below:

Categories of challenges in csictf 2020.

My team contributed by creating a fork of the repository and sending pull requests to the main repository. These PRs were reviewed, edited, and merged. As soon as a PR was merged, one automated script would add the challenge to the website (we used CTFd), and another would deploy it on the Kubernetes cluster running on Google Cloud. We discuss CTFd and the Kubernetes cluster in separate articles.

Containerize your Challenges

Containerization involves bundling an application together with all of its related configuration files, libraries, and dependencies so that it runs efficiently and reliably across different computing environments. The most popular tools in this space are Docker, for building and running containers, and Kubernetes, for orchestrating them.

It is good practice to containerize your challenges. This ensures that if a challenge works on your computer, it will work on the server, and it lets different challenges use different versions of the same libraries. Also, if someone were to obtain remote code execution, they would be running code inside the Docker container, so they could not harm the server itself. Additionally, dockerizing your challenges makes it easier to apply resource constraints, so you can prevent a rogue CPU-hungry process from killing the server (you could also do fancy auto-scaling of containers on your cluster).
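
For example, Docker can cap a container's memory and CPU share at run time; something like the following (the values here are illustrative, and the placeholders are explained later in this article):

docker run --memory=256m --cpus=0.5 -p <external-port>:<container-port> <challenge-name>:latest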

In csictf, we had four types of hosted challenges:

  • Pwn (Compiled Binaries)
  • Web (Node.js, Flask or PHP)
  • Jail (Python)
  • Linux

Therefore, we made template Dockerfiles for each category and added them to the challenges with minor modifications as required.

As an example, I will go over how we containerized pwn challenges. In fact, any challenge that requires the user to connect through netcat and execute a binary (like our reversing challenges) can be deployed in the same manner.

We require the following three files:

  • Dockerfile
  • ctf.xinetd
  • start.sh

Dockerfile

The Dockerfile uses ubuntu:16.04 as the base image to host the challenge. Inside the container, we install lib32z1 and xinetd. We create a /home/ctf directory and a user called ctf, and copy all the required libraries from /lib* and /usr/lib* into the ctf directory. This is done so that we can later chroot into this directory.
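
The actual Dockerfile lives in each challenge folder in the repository; a hedged reconstruction for global-warming, with illustrative paths and port, looks like this:

FROM ubuntu:16.04

# xinetd serves the binary over TCP; lib32z1 provides 32-bit zlib support
RUN apt-get update && apt-get install -y lib32z1 xinetd

# Unprivileged user and the directory we will later chroot into
RUN useradd -m ctf

# Copy the shared libraries the binary needs into the chroot,
# so it can still be loaded after chroot /home/ctf
RUN mkdir -p /home/ctf/lib /home/ctf/lib64 /home/ctf/usr/lib && \
    cp -r /lib/* /home/ctf/lib/ && \
    cp -r /lib64/* /home/ctf/lib64/ && \
    cp -r /usr/lib/* /home/ctf/usr/lib/

# The challenge binary, the xinetd config, and the startup script
COPY global-warming /home/ctf/
COPY ctf.xinetd /etc/xinetd.d/ctf
COPY start.sh /start.sh
RUN chmod +x /start.sh /home/ctf/global-warming

EXPOSE 9999
CMD ["/start.sh"]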

ctf.xinetd

When you set up a netcat server using nc -lvp 8000, it sets up a listener on port 8000. However, only one user can connect to this netcat server at a time. Therefore, we use xinetd, which allows multiple simultaneous connections and kills the spawned process once a connection is closed. We need a configuration file that tells xinetd how to run our binary.
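
A representative sketch of that file (the actual ctf.xinetd is in the challenge folder; the port, uid:gid, and limit values here are illustrative):

service ctf
{
    disable     = no
    socket_type = stream
    protocol    = tcp
    wait        = no
    type        = UNLISTED
    port        = 9999
    bind        = 0.0.0.0
    # Run as root so that chroot is permitted; privileges are
    # dropped to the ctf user via --userspec below
    user        = root
    server      = /usr/sbin/chroot
    server_args = --userspec=1000:1000 /home/ctf ./global-warming
    # At most 10 connections per source IP, and a 20-second CPU limit
    per_source  = 10
    rlimit_cpu  = 20
}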

The server in the config file runs as root, as specified in the user field. It runs as root rather than as the ctf user we created because ctf would not have permission to run chroot; this is why the --userspec=uid:gid option is crucial, as it drops privileges back to the ctf user. As you can see, the server chroots into /home/ctf and executes the compiled binary called global-warming present in that directory. There are also other options such as per_source, rlimit_cpu, etc., as shown in the config file above.

start.sh

The purpose of this file is to start the xinetd process. As you can see in the Dockerfile, this is the CMD that runs when the Docker container starts.
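
start.sh is only a couple of lines; a sketch, assuming xinetd is launched through its init script:

#!/bin/sh
# Start the xinetd service, then keep the container's main
# process alive so the container does not exit
/etc/init.d/xinetd start
sleep infinity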

There are a few more hardening steps you should consider for your containers. For challenges with potential RCE (remote code execution), someone might plant a fork bomb in your container, which will crash your server. This can be prevented with the help of tools like nsjail. You could also limit the number of processes that a user on the server can spawn by configuring the /etc/security/limits.conf file, as shown below.
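
For example, a single line in /etc/security/limits.conf caps how many processes the ctf user may spawn (the limit value here is illustrative):

# domain  type  item   value
ctf       hard  nproc  50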

Testing your challenge container

To test your challenge container, you need to build the image, run a container from it, and then use netcat to connect to it. To build the image, execute the following command:

docker build -t <challenge-name> <path-to-challenge-directory>

Note that image names must be lowercase and can't contain spaces or most special characters (hyphens are fine). So, if your pwd is the challenge directory, and the challenge is called global-warming, you can run:

docker build -t global-warming .

Now that the image is built, you have to run a container from it. You can do this using docker run.

docker run -p <external-port>:<container-port> <challenge-name>:latest

Here, the <external-port> represents the port on your computer (or the port to be exposed by the server) and the <container-port> represents the port exposed from inside the docker container. For global-warming, you can run it using:

docker run -p 3000:9999 global-warming:latest
Running your docker container

P.S. You can also run it as a background process using the -d option.

Now, all you need to do is connect to port 3000 using netcat.

nc localhost 3000
Connect to your container using netcat

Now, when you are trying to stop your docker container, you might notice that Ctrl + C does not work (closing the terminal does not stop the docker container). This is because we ran sleep infinity in start.sh and did not define any --detach-keys. So, to stop your container, you can run the following command:

docker stop $(docker ps -q)
Stopping your docker container

This gets the IDs of all the running containers and runs docker stop on each of them. (docker ps -aq lists every container, including stopped ones, so docker stop $(docker ps -aq) covers those too.)

Bonus Commands

To view details about all containers running on your computer:

docker ps -a

To get a shell inside your container (if more than one container is running, replace $(docker ps -q) with the specific container ID):

docker exec -it $(docker ps -q) bash

To view logs of a running container:

docker logs -f $(docker ps -q)

More containers!

You can find sample Dockerfiles for the other categories of hosted challenges (web, jail, and Linux) in the ctf-challenges repository.

Automate container deployment using ctfup

Keeping this directory structure in mind, we also built a tool called ctfup, which our CI/CD pipeline uses to build Docker images and deploy challenges in the hosted and mixed categories onto our challenge servers. You can find details about how ctfup works and how we made it in an upcoming article.

To be continued…

This article is the first of many on the infrastructure of csictf 2020. In this introductory article, we discussed how we set up the challenges repository. We mentioned some tools we used, such as ctfup, GitHub Actions, and Kubernetes clusters, but we did not elaborate on how they work or how we used them. There are articles dedicated to the usage of these tools; they will also demonstrate how this structure of the challenge repository eased the process of automatic deployment.
