Deploy your Rust microservices to the world: an end-to-end approach to modern CI/CD and deployment

Arthur Depasse
Apr 4, 2020


Photo by Nikolai Justesen on Unsplash

Introduction

Lately I decided to give Rust microservices a try. I had already tinkered a lot with Flask and Express, and I am still perfectly happy to use them in most cases, but I felt I needed to learn a new way of doing things for critical microservices that need performance.

I gave Rust a try because it is a compiled language whose performance is usually on par with C++, yet it comes with an ecosystem tailored to modern development workflows and libraries that enable powerful HTTP servers. I know that most of your services won’t benefit spectacularly from using Rust over Express or Flask (in fact it will probably add friction to the development process), but it might be worth the time for several reasons:

  • Resilient and performant services that are, in my opinion, easier to maintain than services written in dynamically typed languages.
  • If you run your infrastructure on a cloud provider like GCP or AWS, you will likely need fewer instances to reach the same throughput, meaning a lower bill each month.
  • You gain knowledge of a language that is quickly gaining momentum and could become a real alternative on the full-stack technology list.

The main hurdle I came across while trying to develop such services was the lack of literature on how to handle a real DevOps workflow within the Rust ecosystem. Since you work here with compilers and binaries instead of interpreters, there are quite a few changes to account for in the usual workflow. In this article, I’ll propose a workflow to develop efficiently locally, share your work through Gitlab, and automate code checks, tests, and builds with Docker. I hope to make this into a series of articles covering how to deploy your app with Kubernetes on GCP and provision your infrastructure with tools such as Terraform.

This article describes a workflow that works for me and that I find efficient enough for my projects. Keep in mind that you might need to tailor it to your own needs, and that it can likely be improved. Feel free to message me or send a PR on my Gitlab if you think you can improve it.

Preparation

Let’s start with a tour of the technologies I plan on using. For the Rust framework, I chose Actix-web, as it is the most advanced and complete option at this point, with a powerful actor model.

We will be leveraging the power of Docker to pack our microservices, test them, and deploy them. In particular, for local development, we will be using docker-compose, as it lets you easily plug in an image for a database like MongoDB if you need one.

For code versioning and the CI/CD chain, we’ll use Gitlab. I reckon most people prefer Github, and Github Actions are available to handle Rust thanks to the awesome community (https://actions-rs.github.io/), but as of now there’s no equally easy way on Gitlab, and I want to change that. What you’ll read below is the result of bringing together all the small bits I gathered across various forums to build a useful Gitlab CI/CD workflow for Rust.

All the code is available on my Gitlab at the following link:

Create our microservice and local workflow

First we need a Rust microservice to tinker with. As mentioned earlier, we will be using Actix-web. Let’s start with the dependencies:
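The dependency snippet didn’t survive in this version of the article, so here is a minimal Cargo.toml along those lines (the crate name and version numbers are my assumptions, picked to match the rust:1.42 image used later):

[package]
name = "rust-microservice"
version = "0.1.0"
edition = "2018"

[dependencies]
# actix-rt provides the async runtime required by actix-web 2.x
actix-web = "2.0"
actix-rt = "1.0"
listenfd = "0.3"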

As you can see, we use Actix and listenfd. The latter is a useful crate for using external file descriptors as sockets. You’ll see later why we use it.

Then let’s write a basic service:
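The original snippet isn’t embedded here, so what follows is a minimal sketch matching the description below (the greet handler comes from the text; the rest is assumed actix-web 2.x boilerplate):

use actix_web::{web, App, HttpRequest, HttpServer, Responder};
use listenfd::ListenFd;

// Greets the caller, falling back to "World" when no name is given.
async fn greet(req: HttpRequest) -> impl Responder {
    let name = req.match_info().get("name").unwrap_or("World");
    format!("Hello {}!", name)
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // Grab any socket passed in by systemfd.
    let mut listenfd = ListenFd::from_env();

    let mut server = HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    });

    // Reuse the external socket when present, otherwise bind locally.
    server = match listenfd.take_tcp_listener(0)? {
        Some(listener) => server.listen(listener)?,
        None => server.bind("0.0.0.0:5000")?,
    };

    server.run().await
}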

First, we create the ListenFd from the environment and create an HttpServer with two routes that map to the greet function. Then we try to take a TCP listener from our file descriptor. If one is present, we make our server listen on it; otherwise we bind to a local address on port 5000. As we’ll be listening from inside a Docker container, it’s best practice to listen on 0.0.0.0. Then we just launch the main loop of the server.

Now we need a way to spin this up locally to test things. First we’ll create a Dockerfile describing an image that contains all the tools needed to run our code:
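A sketch of what that Dockerfile boils down to (assuming docker-compose mounts the sources at /app, as shown further down):

FROM rust:1.42

# systemfd keeps a socket open across restarts; cargo-watch rebuilds on save.
RUN cargo install systemfd cargo-watch

WORKDIR /app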

We basically use the rust:1.42 image and install two cargo binaries: systemfd, to create file descriptors, and cargo-watch, the equivalent of nodemon in the Node ecosystem (it launches a cargo command whenever a monitored file changes).

We’ll use cargo-watch to relaunch the web server each time you save a modification to your code, which makes the local development process smoother. The issue is that sometimes, when relaunching, Actix will try to bind its address:port faster than the OS releases the previous bind, and you’ll get an error. That is why we use systemfd: we’ll have a single socket that is always open, and the server will bind to it, avoiding having to create a new one each time it restarts.

Let’s take a look at the docker-compose file describing our local workflow:
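The compose file itself was embedded in the original article; reconstructed from the description below, it would look something like this (the volume layout is an assumption):

version: "3"
services:
  app:
    build: .
    image: rustservice_dev:0.1
    volumes:
      # Mount the sources so cargo-watch sees your edits.
      - .:/app
    ports:
      - "5000:5000"
    entrypoint: systemfd --no-pid -s 0.0.0.0:5000 -- cargo watch -x run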

For now we have a single service, app, that is our microservice. It is based on the Dockerfile presented earlier and will be tagged as “rustservice_dev:0.1”.

We use volumes to mount the sources inside the container so that cargo-watch can see your edits. We then use the following entrypoint:

systemfd --no-pid -s 0.0.0.0:5000 -- cargo watch -x run

We basically create a TCP file descriptor bound to 0.0.0.0:5000, then hand it to cargo watch, specifying the run command with the -x flag.

Finally, we map port 5000 to the outside of the container.

If you need a database, you can easily add another service and use the docker-compose network created by this project to make them communicate.
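For instance, adding a MongoDB service alongside app (the service name and image version are my assumptions) makes it reachable from the app at mongodb://db:27017 over the default compose network:

  db:
    image: mongo:4.2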

Now you can spin this up with

$ docker-compose up

and give it a try using curl:

$ curl localhost:5000/Pete
Hello Pete!%

Try modifying the code and saving it: you should see the service automatically recompile and run with your new code. Now that we have a local workflow, let’s start working on our Gitlab CI/CD.

The CI/CD on Gitlab

First let’s explain the workflow I chose. We have five stages: check, build, test, publish and deploy:

  • The check stage performs a quick cargo check on every untagged commit, on every branch.
  • The build stage builds the actual binaries and runs only on tagged commits.
  • The test stage runs cargo test for each commit.
  • The publish stage creates a runtime Docker image containing our binary and stores it in the image registry provided by Gitlab, for any tagged commit.
  • Finally, the deploy stage handles two jobs: deploying to the staging environment and deploying to the production environment. Both triggers are manual, only on tagged commits, and the production deploy can only be triggered from the master branch.

For now, we’ll use dummy deploy jobs, as we don’t yet have an infrastructure. I hope to tackle that part in the following parts of this series.
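As a sketch, the placeholder jobs could look like this (job names are my assumptions, and the master-only restriction on production is left as a comment, since enforcing it on tag pipelines needs extra logic):

deploy_staging:
  stage: deploy
  when: manual
  only:
    - tags
  script:
    - echo "Deploying to staging..."

deploy_production:
  stage: deploy
  when: manual
  only:
    - tags
  # A real job should additionally verify the tagged commit comes from master.
  script:
    - echo "Deploying to production..."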

Now comes the difficult part. The biggest issue here is to have a build process that can cache its results so that successive builds are quicker. Without any cache management, on a public Gitlab Runner, the project took about 20 minutes to build the code, test it, and pack the binaries into a Docker image. That is totally unacceptable for such a simple project, and we need to bring these times down.

For this we will use the Gitlab runner cache and something called sccache. Sccache is a tool developed by Mozilla to cache your builds on remote servers and unify your cache across your builders. Here we won’t use a remote cache server, but the local option to store it on the Gitlab Runner. If you have a GCS bucket or an AWS S3 bucket, you can use it to store your cache.

Let’s take a look at our .gitlab-ci.yml and explain it:

First we create a yml object called .caching_rust (starting with a dot so that the Gitlab CI/CD engine ignores it as a job) and anchor it as &caching_rust so we can inject it easily into each job that needs it. This object defines the folders that need to be cached: the folder holding the sccache, and the various folders holding the cargo caches, according to the official cargo book.
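Reconstructed from that description (the cache key and exact paths are my assumptions; the build job below points CARGO_HOME and SCCACHE_DIR at these project-local folders so the runner can cache them):

.caching_rust: &caching_rust
  cache:
    key: rust-build-cache
    paths:
      # sccache's compilation cache
      - .cache/sccache
      # cargo's caches, per the cargo book
      - .cargo/bin
      - .cargo/registry/index
      - .cargo/registry/cache
      - .cargo/git/db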

Then we define our stages and some variables. Note the variable BUILDER: it references a Docker image on Dockerhub that I created for my CI processes: https://hub.docker.com/repository/docker/adepasse/rust-ci
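Something along these lines (the image tag is an assumption):

stages:
  - check
  - build
  - test
  - publish
  - deploy

variables:
  # Custom builder image with sccache pre-installed.
  BUILDER: adepasse/rust-ci:latest
  # Speeds up Docker-in-Docker image builds.
  DOCKER_DRIVER: overlay2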

This image is simply based on the official rust:1.42 image and installs sccache with cargo. That way, it can be cached by the runners, and they’ll be able to quickly spin up a building image containing the tools we need.
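In other words, its Dockerfile boils down to this (my reconstruction):

FROM rust:1.42

# Pre-install sccache so CI jobs don't have to build it every run.
RUN cargo install sccache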

The other interesting variable is $DOCKER_DRIVER. As we’ll be doing some Docker-in-Docker (DinD), we specify the overlay2 driver, as it can speed up creating Docker images inside Docker environments.

Then let’s take a look at the build job, probably the most important one:
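Based on the explanation that follows, a sketch of the job (the binary name matches the crate name assumed earlier):

build:
  stage: build
  image: $BUILDER
  <<: *caching_rust
  only:
    - tags
  before_script:
    # Keep cargo and sccache data inside the project dir so it gets cached.
    - export CARGO_HOME="$CI_PROJECT_DIR/.cargo"
    - export SCCACHE_DIR="$CI_PROJECT_DIR/.cache/sccache"
    - export RUSTC_WRAPPER=sccache
  script:
    - cargo build --release
    - sccache --show-stats
    - cp target/release/rust-microservice .
  artifacts:
    paths:
      - rust-microservice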

We base this stage on our custom image, then inject all the caching code using the yml << and * operators. In before_script, we export environment variables to set the CARGO_HOME directory and the SCCACHE_DIR, and set sccache as our RUSTC_WRAPPER. Then, in the script, we simply invoke cargo build in release profile through its sccache wrapper, print the cache stats to monitor its use, and copy the binary to the working directory.

Finally, in the artifacts section, we declare the binary as an artifact. I used this instead of just caching it to make it available for download if needed, but if you just want to persist it for the publish job, simply caching it would be enough.

Now let’s take a look at the job publishing our runtime image:
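A sketch, using Gitlab’s predefined registry variables (the name of the runtime Dockerfile is an assumption):

publish:
  stage: publish
  image: docker:stable
  services:
    - docker:dind
  only:
    - tags
  before_script:
    # Credentials are provided by the runner for the project registry.
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker build -f Dockerfile.prod -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"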

We’ll use Docker-in-Docker, so we specify the docker:dind service, and it will automatically use overlay2 as its driver thanks to the previously defined variable.

Before the script, we log in to the Gitlab image registry using the variables defined by the runner.

Then we just have to build and tag our image using the Dockerfile present in our repo:
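Roughly this (a sketch; the binary name follows the crate name assumed earlier, and extra runtime packages depend on your features):

FROM debian:buster-slim

# The binary is dynamically linked, hence a glibc-based Debian image.
COPY rust-microservice /usr/local/bin/rust-microservice

EXPOSE 5000

CMD ["rust-microservice"]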

This image just includes the binary and starts it, on a small Debian base. Unfortunately, you cannot use an image based on Alpine Linux to reduce its size further, because Alpine only comes with the musl Rust runtime, which works only with fully statically linked executables, and Actix requires dynamic dependencies. You can still hack around this and install them on Alpine, but I find it easier, for maintenance purposes, to stay on Debian. We still manage to have an image that is ~28MB once compressed, which I find largely sufficient.

Back to the CI, once this image is created and tagged, we simply push it to the registry.

Now let’s commit a few changes, tag them, and push them, then do it again to watch our cache dramatically speed up the subsequent builds. Yay!

Screenshot of sccache stats after a cargo build

On a public Gitlab runner, with caching, a simple check-and-test pipeline takes around 6 minutes, and a complete pipeline including test, build, and publish takes around 11-12 minutes. That’s a good improvement, and I believe it is acceptable for our workflow. If you want to speed it up even more, you can either use your own dedicated Gitlab runner or merge the test and build steps to avoid the overhead of starting containers twice.

That wraps it up for this part: we now have an efficient, minimal CI/CD chain and a local workflow that we can use for our projects. In the upcoming parts, we’ll see how to actually provision an infrastructure and deploy our service onto it.


Arthur Depasse

MSc Student in Computing and Machine Learning, interested in everything that relates to IT.