An alternative Docker workflow

Nickolai Belakovski
4 min read · Nov 28, 2019

The purpose of this article is to propose a few ideas for using Docker more seamlessly in projects that involve compiled codebases, like C or C++. I came up with these ideas as I grew frustrated with some of the overhead Docker imposes, such as not having access to all my apps inside a container, and having to spin up a container (or open a new shell into a running one) any time I wanted to do something simple, like launching my executable or examining some aspect of a running executable, such as resource usage or log files.

I certainly appreciate the reproducibility of Docker environments, and I think Docker containers are great for scaling out to many servers for CI or for running services. For local development, though, I found Docker would get in the way quite often, and so I ultimately came up with three ideas for taking advantage of Docker environments without having them get in the way.

The first idea was to use Docker to run my build, but execute the artifacts directly on my machine. The second was to craft commands that could be executed in one line in a self-destroying Docker container, so that I could avoid dropping into a Docker shell entirely. The third was to write my Dockerfile with an eye towards using it as a set of human-readable instructions for setting up an environment, as well as instructions for the Docker build engine.

To motivate these ideas, we'll consider a simple C "Hello World!" project with a single source file, main.c, and a single compile command:

gcc -o main main.c
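
For concreteness, assume main.c is the usual minimal program (this exact listing is my stand-in; any Hello World will do):

#include <stdio.h>

// The simplest possible program for exercising a build-and-run workflow.
int main(void) {
    printf("Hello World!\n");
    return 0;
}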

Use Docker to build your binaries, then run them on the host

This one insight goes a long way toward getting the benefits of Docker without some of its drawbacks. By running your binaries on your local machine, there's no need to configure X11 for any graphical aspects of your application, no need to map ports for any networking aspects of your application, and, importantly for me with C/C++ projects, I can use whichever debugger I want.

This does come with the drawback that my local machine needs to be set up to run my executables. That may appear to erase some of the benefits of Docker; however, run environments are usually simpler to set up and maintain than build environments. Taking our hello-world example: the build environment requires gcc, but the run environment does not, so we can build the executable in Docker and run it locally without installing any extra dependencies.
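
One way to see how small the gap is for our example: build the binary (the one-shot command in the next section works), then ask the host what the binary actually needs at run time. This is a sketch assuming a dynamically linked build on Linux (one caveat: the host's C library needs to be at least as new as the container's):

ldd ./main    # lists shared-library dependencies; for hello world, essentially just libc and the loader
./main        # runs natively; gcc never needs to be installed on the host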

Another benefit of this approach is that sometimes issues come up when running an executable in Docker that are purely Docker-related and have nothing to do with the executable itself. Running on the host eliminates that class of problem entirely.

I do imagine this idea may seem controversial to some. Why use Docker if you're just going to install dependencies on your system anyway? Because I want to use Docker for scaling up my CI, but I want my local development environment to be unencumbered, and I'm willing to spend the effort of maintaining a run environment on my local machine to get that.

Use one-shot Docker containers

Continuing with our hello world example, we can construct a one-line command to build the executable:

docker run --rm --tty --volume /path/to/my/code:/tmp/code --workdir /tmp/code gcc:latest gcc -o main main.c
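
Because the source directory is bind-mounted into the container, the main binary lands right next to main.c on the host, and --rm removes the container as soon as gcc exits. Combined with the first idea, running the result is then just:

cd /path/to/my/code
./main        # prints "Hello World!"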

I’m happier when I don’t have to drop into a shell within a container. It means less context for me to keep track of.

I imagine that team members who aren't very familiar with Docker would also appreciate having a command that builds the system without having to drop into a shell.
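
One way to make that even easier on teammates is to wrap the one-liner in a small script checked into the repo. A sketch (build.sh is my invention, not part of any particular project):

#!/bin/sh
# build.sh: build the project in a one-shot container so nobody needs gcc
# (or a Docker shell) on their machine. $(pwd) assumes you run this from
# the repo root.
docker run --rm --tty --volume "$(pwd)":/tmp/code --workdir /tmp/code gcc:latest gcc -o main main.c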

Write your Dockerfile like it’s a script to install dependencies

Most Dockerfiles are relatively straightforward (although I’m sure some of you are reading this like… straightforward? Sure, our Dockerfile is really straightforward), so this idea isn’t too tough to follow. Write and comment the Dockerfile such that a developer could go through it and copy-paste the commands into a terminal, perhaps skipping the ones only needed for building (since we use one-shot containers to build), and end up with an environment in which the executables run as expected.
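
Here's a sketch of what that style might look like for our hello-world project (the base image and comments are illustrative; a real project would have many more entries):

# Everything below is meant to be copy-pasteable into a terminal on a
# fresh Ubuntu machine, minus the FROM line.
FROM ubuntu:22.04

# Build-time only: the compiler. Skip this if you just need to run the
# binaries (we build in one-shot containers anyway).
RUN apt-get update && apt-get install -y gcc

# Run-time dependencies would go here, one RUN line per dependency, each
# with a comment explaining which part of the system needs it.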

Review

To recap: the most important idea is to use Docker to build your executables, but run them natively. This gives you more freedom in choosing debug tools, reduces the number of steps to get your executable running, and eliminates issues caused by running inside of Docker. The second idea was to use self-destroying containers with a single command, to avoid having to drop into a Docker shell, saving ourselves some mental context and hopefully making Docker a little easier to use by eliminating the container prompt entirely. The last idea was to take care in writing the Dockerfile, so that it can serve as a set of instructions not just for the Docker build engine, but for a developer trying to set up their machine.

These ideas work well for me, but won’t apply in all cases, particularly in webdev projects where you may need to use Docker to set up database clients and other services. But I’d love to hear your thoughts, to hear if these ideas carry any currency in your environments, and to hear if you have found other ways of using Docker to enhance your workflows!
