Docker for C++ build pipeline

Pipe Runner · Published in Project Heuristics · Apr 17, 2020

Recently I found myself engaged in one of the many side projects that I keep doing from time to time. But this time I was looking for something challenging and had a little more enthusiasm for this particular project.

So, long story short, we were reinventing an IoT ecosystem with a hub (the central control unit), nodes (the devices being controlled) and a phone app (through which you would manipulate the ecosystem). We had a great start but were not able to pull it through, as my fellow coders got engaged with other things and the project's ideas started to outgrow the team size. As of now, it is on perpetual hold until someone comes along to help; coding alone is tiring and mentally draining. Feel free to check it out.

A silver lining

Even though my dream did not come true, I sure got to learn a couple of things:

  • Don't take on long projects if you don't have a team dedicated to them
  • A deeper insight into CMake (and boy it’s wonderful)
  • Using Docker as a build pipeline for C++ projects

So the last point is what you guys are here for, right? Let’s dive into it.

Why?

What are our other options? Installing all the libraries natively and running a build pipeline? If only we had a good package manager like NPM or pip, this could have been easy. There are a few that try hard, but to be honest, I don't think C++ library management is going to become as neat and simple as NPM or pip any time soon. Installing libraries natively is not that bad until you end up using different operating systems as your build environments. In our case, I was using Arch and Ubuntu depending on my workstation, while my friend was using macOS. This immediately got us into trouble: the libraries did not match across machines, and neither did their versions. We could always get our hands dirty and fix things manually, but would a newcomer want to get into the dirt with us? Highly unlikely.

So we thought of making use of a well-known buzzword — Docker.

The following sections assume you are a little well versed with Docker jargon, so if you are not, I would suggest skimming an introductory Docker guide first. Just get a feel for the basics and don't spend too much time on it.

Clean and Simple Images

So by now, you should know three things —

  1. Docker Images
  2. Docker Containers
  3. Docker Hub

Here’s a flowchart that we will be following closely.

Flow chart for build pipeline — Radlet

With this in mind, let's now draw a clear line between an Env Image and a Dev/Prod Image.

1. Env Image

So the idea is that you would want to have an Environment Image that will be built on top of an official base Image (probably the OS and Processor architecture you are targeting). This Environment image will contain all the libraries you want to use in your project. Logically you would only update this Image when you add or remove a library from your project. Also, building libraries from source takes a huge amount of time, so you won’t want to repeat this process every time you wish to try out a new modification in your code. In our project, we use this command to build our Env Image.

docker build -t radlet/radlet_dock.env:x64 -f ./Docker/x64/env.Dockerfile .

This will take a good deal of time depending on whatever library you are trying to install/build from source.

Let’s take a look at this command and try to understand what’s going on:

  • -t tags the image with a name, radlet/radlet_dock.env:x64, which gives us a clear idea of what this image is. radlet/ is the Docker Hub namespace (the account I created), radlet_dock.env is the actual image name, and x64 is the tag, which I use to indicate the architecture the image is meant for; it has no functional relevance, I just added it to make things clear for people using the image. Tagging the image this way lets us push it to Docker Hub and pull it back later instead of rebuilding it from scratch when it isn't available locally.
  • -f lets you specify which Dockerfile to use. We have multiple Dockerfiles in our repo, kept in a well-structured directory, so we use this flag to point to the right one. Each image type for each architecture has its own Dockerfile to keep things simple and clean.
  • Lastly, the ‘.’ sets the build context, i.e. the directory whose contents are sent to the Docker daemon and are available to COPY instructions. I run this command from the project root, so ‘.’ points at the whole project.

Now, let's peek into the Dockerfile.
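
Roughly, it looks something like the sketch below; the Ubuntu version, script path, and working directory are illustrative assumptions rather than the exact file from our repo:

# env.Dockerfile (sketch)
# Official Ubuntu base image for the target OS and architecture
FROM ubuntu:18.04

# Copy the library installation script into the image
# (the script path is an assumption; see the Docker/ directory in the repo)
COPY ./Docker/scripts/install_libs.sh /tmp/install_libs.sh

# Build and install all project dependencies, then clean up the
# temporary files so the image stays reasonably small
RUN bash /tmp/install_libs.sh && rm -rf /tmp/*

# Working directory that the Dev/Prod images will build in
WORKDIR /radlet_dock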

Trivial stuff here but a few noteworthy points —

  • The base image for our x64 environment is Ubuntu, so whatever works on Ubuntu works here.
  • We have a bash script that organises the library installation steps, so we copy that script into the image and install the libraries through it rather than writing every installation step directly in the Dockerfile (a sketch of such a script follows this list).
  • Also, pay attention to the change of working directory and the cleanup steps included in the Dockerfile.
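
For what it's worth, such an installation script is just ordinary shell. A minimal sketch, using nlohmann/json purely as a stand-in for whatever libraries your project actually needs:

#!/usr/bin/env bash
# install_libs.sh (sketch): the packages and the library below are
# placeholders, not the project's actual dependency list
set -euo pipefail

export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get install -y --no-install-recommends build-essential cmake git ca-certificates

# Libraries built from source follow the same pattern:
# clone, configure, build, install, then remove the source tree
git clone --depth 1 https://github.com/nlohmann/json.git /tmp/json
mkdir /tmp/json/build && cd /tmp/json/build
cmake .. -DJSON_BuildTests=OFF
make install
cd / && rm -rf /tmp/json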

2. Dev Image

Now that we have our Environment Image ready, you would use it as the base image for the rest of your pipeline when creating Dev/Prod Images. In our project we use this command to build our Dev Image:

docker build -t radlet_dock.dev:x64 -f ./Docker/x64/dev.Dockerfile .

This is again similar to what we did for the Env Image. Let's take a look at the Dockerfile:
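
The file itself is short; a minimal sketch, assuming the source lands in /radlet_dock and a standard out-of-source CMake build:

# dev.Dockerfile (sketch): directory names and build commands are assumptions
# The base image is the environment image that already contains all libraries
FROM radlet/radlet_dock.env:x64

# Copy the entire source tree into the image
COPY . /radlet_dock
WORKDIR /radlet_dock

# Configure and build with CMake as you normally would
RUN mkdir -p build && cd build && cmake .. && make -j"$(nproc)"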

The noticeable changes are:

  • The base image is now radlet/radlet_dock.env:x64
  • We basically copy our entire source code into the Image and run CMake as usual. Assuming you know how CMake works, this should now feel like home.

To keep things even simpler for our contributors, we have wrapped the two commands in a bash script and fire them using flags.

In the script, you would see lines 43 and 61 doing exactly what I have described above. So for a contributor to create an Env Image, they would run (from the project root):

./package.sh -e

while to create a Dev Image, they would run:

./package.sh -b
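
If you are curious what such a wrapper looks like, here is a stripped-down sketch; the real package.sh in the repo has more flags and housekeeping around these two calls:

#!/usr/bin/env bash
# package.sh (sketch): maps short flags to the docker build commands shown above
set -e

case "$1" in
  -e) # build the Env image
      docker build -t radlet/radlet_dock.env:x64 -f ./Docker/x64/env.Dockerfile . ;;
  -b) # build the Dev image
      docker build -t radlet_dock.dev:x64 -f ./Docker/x64/dev.Dockerfile . ;;
   *) echo "Usage: $0 [-e | -b]"; exit 1 ;;
esac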

Note: This step can be used in CI/CD systems as a build test. For instance, I have used it in my project with GitHub Actions. Whenever we have a pull request, GitHub Actions initiates a ./package.sh -b command. If the build exits with a non-zero status, the job fails; a zero exit status means the code compiled cleanly, and your CI/CD pipeline can proceed with deployment or whatever else you would like it to do.
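
For completeness, a minimal GitHub Actions workflow for this kind of build test could look like the sketch below; the file name and job layout are assumptions, not our exact workflow:

# .github/workflows/build.yml (sketch)
name: Build test
on: [pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # The job fails automatically if this command exits non-zero
      - name: Build the Dev image
        run: ./package.sh -b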

Running the Containers

Now that your images are ready, you would want to run them as containers so that you can test and run your code. You could again simply use docker commands to run the containers, and in fact, when we started the project, we did exactly that. But later on we had to use other images alongside our own, and the catch was that we needed them to start in a particular order and did not wish to do it in a dirty, hacky way. Enter Docker Compose. I would expect you to go through its documentation on your own, but the gist is that you write some YML files and Docker Compose runs the containers in a well-structured, orderly fashion, giving you much better control over your container orchestration.

1. Starting the Dev Container

Let’s start by taking a look at our YML file.
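
The sketch below captures its general shape; the service names match the docker-compose command shown later, but the exact image tags and InfluxDB version are approximations:

# dev.docker-compose.yml (sketch)
version: "3"
services:
  influxdb:
    image: influxdb:1.8              # the database our project talks to

  radlet_dock:
    image: radlet_dock.dev:x64       # the Dev image built earlier
    depends_on:
      - influxdb                     # start InfluxDB before the Dev container
    network_mode: host               # project-specific network requirement
    stdin_open: true                 # keep STDIN open so we can use a shell
    tty: true                        # allocate a pseudo-TTY for that shell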

The code should be self-explanatory, but we'll still go through some crucial points:

  • In my project we want to start two containers: one is InfluxDB and the other is my Dev Image.
  • The Dev Image depends on InfluxDB, which means InfluxDB will be started before the Dev container.
  • The Dev Image has some network settings (network_mode: host) that are a project-specific requirement; you would want to change that according to your own needs.
  • We would like to open a shell in the Dev container to check things manually (stdin_open: true, tty: true). This is again a project-specific requirement, so you can leave it out if you don't need it, but it is also a feature you may want later on for greater control when testing your Dev Image.

To ask Docker Compose to start the containers using this YML file, you would do something like this:

docker-compose -f ./Docker/x64/dev.docker-compose.yml run radlet_dock

But we went a step further and mapped it to a flag in our bash script. So you would just need to do this:

./package.sh -mt

A bit about Deployment

In our case, there was not much difference between our dev and deployment images, so I did not go into much detail for deployment. But in your case you may want loggers disabled, optimized code, and so on in your deployment build, so you would trim and groom the Dockerfile for your Deployment Image accordingly. The naming conventions I have used are something I came up with to keep things organized amid the clutter; feel free to follow them and you'll keep yourself out of trouble.

Also, in the YML file for Deployment, you would want to turn off your shell access.
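
In practice that just means dropping the interactive options from the service definition, something along these lines (the prod image tag is an assumption):

# prod.docker-compose.yml (sketch): same layout as dev, minus the shell access
version: "3"
services:
  influxdb:
    image: influxdb:1.8
  radlet_dock:
    image: radlet_dock.prod:x64      # illustrative tag for the deployment image
    depends_on:
      - influxdb
    network_mode: host
    # no stdin_open / tty here, so there is no interactive shell in deployment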

And again, to keep things simple, map them to flags in your bash script.

Making the best use of Docker Hub

Docker Hub is basically GitHub for Docker images. If an image that is available on Docker Hub is not present on your machine, the Docker CLI will automatically download it for you. So anything that doesn't change much can ideally be pushed to Docker Hub for safekeeping and availability.

In our case, we would push the Env Image from our machines whenever we made changes to it. We also configured our CI/CD to push the Deployment builds, generated on every successful PR merge on GitHub, to Docker Hub. This gave us a very robust and flexible pipeline for C++.
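
Pushing the Env Image is a one-liner once you are logged in to Docker Hub:

docker login                          # authenticate once per machine
docker push radlet/radlet_dock.env:x64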

Conclusion

Everything mentioned above was discovered out of necessity and has been battle-tested. I would be grateful if you carry the torch forward, refine our pipeline, and make it even more robust. If you have questions about the pipeline, feel free to drop them in the comments; I'll try my best to answer them. Good luck. 😇
