Make It Real Elite — Weeks 5 and 6: Serapis, a code evaluator API using Go, PostgreSQL, RabbitMQ, Docker, Helm and Kubernetes.

Today we’ll talk about a new project that kept me busy for the past two weeks. Without further ado, let’s dive into what Serapis is and does.

I’ll start off with an example. Think of Make It Real Camp, a platform offering online courses to enrolled students. Behind the scenes, an entire system checks every solution the students write for each challenge during the program, which is why a code evaluator API is one of the most important features of the Make It Real Camp online platform.

Before digging into this new learning, I had a lot of questions myself. For instance: how does it actually work? How can code evaluate other code? Does it all happen on the same machine? And as you probably guessed, these first questions gave birth to others, and I started to wonder: what’s the deal with dependencies and repository fetches? If the submitted code is waiting for input, how can I feed stdin into the code being evaluated? And so on.

But bear with me. I’ll walk you through each one of them.

How did it all start? It had been two weeks since our last delivery (the Search Engine project), and it was time to roll again with a new challenge from Make It Real Camp.

I took on the challenge of developing a Code Evaluator API using Go. I also received a series of requirements with the aim of building a proof of concept using a set of specific technologies.

Honestly, it’s been a rollercoaster, but here it is in one article.

Design

This new project was focused on creating an application with the ability to receive requests through an API.

Each request should contain a specific language, code snippets or a GitHub repository, stdin input, and dependencies. Once the API receives the request, it should respond to the client with the result after running the entire pack. For the beginning of this POC, our goal was to support Node and Ruby requests. You can check the complete list of requirements here. Also, for some reason, it came naturally to me to call this project “Serapis”, after the Greco-Egyptian god of learning.

Below I added an image of the system design, for a better visualization of it. Further on, we’ll touch upon each of its components:

Serapis code evaluator, system design and architecture

Let’s talk about each component on this system design:

  • HA Proxy Load Balancer: This is the entry point of the system. API requests come in to the load balancer, which is in charge of assigning each request to a Serapis API server instance. Since we deployed Serapis using Kubernetes, we won’t look much into this part of the project for now.
  • API Tier — MQ Publishers: Each machine in this layer runs the API server. It consists of a Go application that receives each request, creates a new record in the database with the initial params, and then publishes a message to the MQ with the information needed to run the process.
  • Message Queue: We use RabbitMQ for the message queue. It receives the messages from the API tier, as mentioned before, and assigns each request (or evaluation) to one instance of the Serapis Evaluator in the evaluation tier. We take advantage of RabbitMQ’s RPC (Remote Procedure Call) pattern in order to communicate between the API and the Evaluator. If you’d like to learn more about this, click here.
  • Evaluation Tier — MQ Consumers: Here we are at what I think is the most important part of the project. When a request arrives at an instance in the evaluation tier, the evaluator uses the information from the request (the language, code or repository, stdin input, and dependencies) to create a new container with those requirements and run the evaluation code inside it. Once the container finishes the evaluation process, the Serapis Evaluator instance updates the record’s status, exit code, and output in the database.
  • PostgreSQL — Database: Since we are dealing with a data model of two tables, “Users” and “Evaluations”, with relations between them, a SQL database works perfectly for this proof of concept.

Building the API:

I chose to start coding this part of the project, but it was probably a mistake: on second thought, I could have invested that time better in working out the evaluator service and the containerization using Docker, which I think was the most difficult part of the project.

Create simple Ruby evaluation requests — Request body and response

Over the past years I have been coding mostly in Ruby, with Ruby on Rails as my main framework for building web applications and APIs. That’s also why building an API with Go is different from building it with Rails. Firstly, you have to be more explicit and take care of every detail of the request path: the routes, handlers and middlewares (say you want to check headers or control authentication for each request). Further, if you want to know what is happening in your server, you have to code your own logger to track the information from each request. Additionally, you have to build the SQL statements to create the tables and to perform the CRUD operations for each model.

There is no magic, as you would probably find with Rails or other frameworks. And honestly, I missed a bit of this magic, since I didn’t feel as productive as I wanted; on the other hand, I learned a lot about all these concepts. More importantly, I now have a wider perspective on why and how we can work things out with Go, and believe me, it is worth it.

You can check the code for the API service here. Feel free to leave any comment or question in the response section; also, any PR or contribution is welcome.

Building the Evaluator:

As I mentioned before, this is the core part of the project. I had worked with Docker in containerized environments before, but there was an extra twist here: running Docker-in-Docker.

Based on the information from the given request, we had to: 1) build a container under the specific conditions, e.g. starting from the image matching the request’s language, installing dependencies or cloning the GitHub repository if needed, and 2) run either a file with the language’s extension, when no extra resources were needed, or alternatively a .sh file with a set of commands as the entrypoint of the isolated container, in order to guarantee all the resources for the code evaluation.

Evaluator chart logs from Kubernetes dashboard

Before we move forward, be aware of some of the pains I faced. Firstly, I had to share the Docker Engine from the host machine in order to run the cluster locally with Kubernetes, and to allow communication with services like Postgres and RabbitMQ. Secondly, I also had to mount volumes (a really huge pain, but we made it).
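A common way to share the host’s Docker Engine with a pod is to mount the Docker socket as a hostPath volume. The fragment below is a hypothetical piece of the evaluator’s Deployment spec (image and names are illustrative), not the exact Serapis chart:

```yaml
# Hypothetical fragment of the evaluator Deployment: share the host's
# Docker Engine with the pod by mounting the Docker socket.
spec:
  containers:
    - name: serapis-evaluator
      image: serapis/evaluator:latest   # illustrative image name
      volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-socket
      hostPath:
        path: /var/run/docker.sock
```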

Here is the solution, though. All the issues above were covered using the official Go SDK for Docker from the Moby Project. With this client you can pull images, create and remove containers, start containers, mount volumes, wait for containers, attach to containers if stdin input was given in the request, and capture logs (all of the above were used in the Evaluator service). You only need a little patience for searching on Google and full attention to the documentation on the official GoDoc page.

You can check the code for the Evaluator service here. Feel free to leave any comment or question in the response section; also, any PR or contribution is welcome.

Message Queue using RabbitMQ:

A message queue was implemented because we wanted to support synchronous and asynchronous requests. Thus, every request to the API creates a new record in the database and publishes a new message to the queue, to be consumed by the evaluators.

For the synchronous case, we use RabbitMQ RPC calls from the API to the Evaluator. RabbitMQ has interesting features like message acknowledgments, persistence, and reprocessing in case of failure.

The communication between the two services works by creating a general RPC queue where the API publishes the messages that the Evaluator picks up to run each new evaluation. In addition, a unique queue per client is created to wait until the Evaluator finishes the process and sends a response back to the API through this queue. The API uses a correlation ID to validate that it is the correct response. And here we are: that’s how we know the evaluation is done and we can send a response back to the user.

I’m sharing a code snippet from the API’s publishing process, but I recommend you review the broker package from the API service.

// Create a unique queue per client to receive the response
q, err := ch.QueueDeclare(
    "",    // name (empty: let the server generate one)
    false, // durable
    false, // delete when unused
    true,  // exclusive
    false, // noWait
    nil,   // arguments
)
failOnError(err, "Failed to declare a queue")

// Consume from the unique queue to pick up the response
msgs, err := ch.Consume(
    q.Name, // queue
    "",     // consumer
    true,   // auto-ack
    false,  // exclusive
    false,  // no-local
    false,  // no-wait
    nil,    // args
)
failOnError(err, "Failed to register a consumer")

corrID := randomString(32)

// Publish the new request
err = ch.Publish(
    "",          // exchange
    "rpc_queue", // routing key
    false,       // mandatory
    false,       // immediate
    amqp.Publishing{
        ContentType:   "application/json",
        CorrelationId: corrID,
        ReplyTo:       q.Name,
        Body:          []byte(strconv.Itoa(id)),
    })
failOnError(err, "Failed to publish a message")

// Match the correlation ID and break when the response is received
for d := range msgs {
    if corrID == d.CorrelationId {
        res = d.Body
        break
    }
}

Kubernetes deployment:

Kubernetes is one of the trending container orchestration tools today, so I decided to take a further look into it and try to run the entire Serapis project in a Kubernetes cluster locally, using Minikube.

I also used Helm, the Kubernetes package manager. First, I created two Helm charts, one for the API and a second one for the Evaluator, both with the ports, volumes, and image configuration for each of these projects. For the backing services I used the official Helm Postgres and Helm RabbitMQ packages.
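For a sense of what such a chart carries, here is a hypothetical values.yaml for the API chart; the image names, ports, and release names are illustrative guesses, not the actual Serapis configuration:

```yaml
# Hypothetical values.yaml for the API chart.
image:
  repository: serapis/api   # illustrative image name
  tag: latest
service:
  type: NodePort            # easy to reach from Minikube
  port: 8080
postgres:
  host: serapis-postgresql  # release name of the official Postgres chart
rabbitmq:
  host: serapis-rabbitmq    # release name of the official RabbitMQ chart
```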

Helm charts list — Kubernetes local deployment using Minikube
Kubernetes local deployment — Dashboard

I went through this learning by following this tutorial for deploying a Go app with Kubernetes and Helm. Finally, I was able to deploy the cluster and run it locally with all the services working successfully.


My next goal is to learn how to scale each layer, so I can come back with some benchmarks on performance and my conclusions. The past weeks have also taught me the importance of staying focused on, and dedicated to, this kind of challenge.

I truly hope you have learned something and enjoyed going through my articles. If you want to check out more about the Serapis project, you can visit the repository. Don’t forget to also check the readme file. I would appreciate any comment, feedback, or contribution. Till next time, cheers!