Mounting Volumes in Sibling Containers with GitLab CI

Adventures Running Docker in Docker for Testing

Patrick Winters
Oct 20, 2018

On the frontend team at Bronto, we use semantic-release to automate version management and publishing of our packages. For a number of reasons (including generating links to JIRA issues) we maintain our own semantic release configuration. As we begin to build a GraphQL service, we’ve opted to use lerna to manage a monorepo of packages that include schemas, resolvers, and an Express service. Our attempts at adapting both lerna and semantic-release to a monorepo have been frustrating. We pushed multiple releases of our configuration to test fix after fix within our monorepo, but each build failed for different reasons. We wasted hours. We needed a way to test and verify the configuration.

Taking cues from create-react-app’s in-docker testing [2], we’ve developed a test suite that constructs ephemeral git repositories and runs a fake npm registry inside a docker container. This approach allows us to verify our semantic-release configuration and publish fake packages (including git commits and tags) in a sandbox. Executing these tests in GitLab CI amounts to running Docker in Docker [3], and we encountered a confusing, difficult-to-debug problem with volume mounting.

Problem

In order to run our end-to-end tests, we need a Docker container to mount our project as a read-only volume. Within a GitLab CI build, this means that we’re running a docker container mounted with the build container’s working directory. When GitLab CI ran our end-to-end tests, however, we found the mounted volume to be empty. There were no errors and no logging to indicate any problems mounting the volume; it was just empty.

An example

The following script roughly demonstrates the problem we experienced. When run on a development machine, the test image would correctly mount our project’s directory as a volume and we could see our project files in the container’s directory.

#!/bin/bash
touch example.txt
CONTENTS=`ls ./`
echo "The contents of my host volume ($PWD): $CONTENTS"
read -r -d '' command <<- CMD
CONTENTS=\`ls ./\`
echo "The contents of container volume (\$PWD): \$CONTENTS"
CMD
docker run \
  --rm \
  --tty \
  --user node \
  --volume ${PWD}:/var/myVolume \
  --workdir /var/myVolume \
  node:8 \
  bash -c "${command}"

When running this on a development machine, we would see what we expect.

The contents of my host volume (/tmp/dind): example.txt
The contents of container volume (/var/myVolume): example.txt

Within a GitLab CI build, however, the mounted volume would be mysteriously empty.

The contents of my host volume (/builds/ui/lib/semantic-release-config): example.txt
The contents of container volume (/var/myVolume):
Spooky. Doot Doot.

Background

The GitLab CI documentation describes three approaches to configuring Docker for use within builds [3]. Since our CI tasks already execute within docker containers, running new containers within a build is referred to as Docker in Docker (dind). Each of the three documented configurations has subtle differences that relate to our presented problem. GitLab, for its part, recommends using the special docker-in-docker (dind) image and GitLab CI service.

Docker-in-Docker works well, and is the recommended configuration, but it is not without its own challenges. [3]

If you read further, you get to an even chunkier bit that describes mounting directories within child containers. That’s exactly what we’re trying to do!

Since the docker:dind container and the runner container don't share their root filesystem, the job's working directory can be used as a mount point for children containers. For example, if you have files you want to share with a child container, you may create a subdirectory under /builds/$CI_PROJECT_PATH and use it as your mount point (for a more thorough explanation, check issue #41227). [3]

But, “what challenges does this approach present,” you might ask. Well, it requires running docker in privileged mode, which is a non-starter for our DevOps team.

By enabling --docker-privileged, you are effectively disabling all of the security mechanisms of containers and exposing your host to privilege escalation which can lead to container breakout. [3]

Socket Binding

After a bit of conversation with the DevOps team that configures and manages GitLab and GitLab CI at Bronto, we realized that we were using the third documented approach, socket binding.

The third approach is to bind-mount /var/run/docker.sock into the container so that docker is available in the context of that image. [3]
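Outside of any GitLab-specific configuration, socket binding boils down to mounting the host daemon’s socket into a container. A minimal demonstration (not our actual CI setup):

```shell
# Minimal demonstration of socket binding: mount the host's Docker
# socket into a container so the docker CLI inside it talks to the
# host daemon. Any containers it starts are siblings, not children.
docker run --rm \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  docker:stable \
  docker ps
```

Running `docker ps` inside the container lists the host’s containers, including the container it is running in, which is exactly the sibling behavior described above.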

As it turns out, this means that docker commands execute on the host: the GitLab CI runner itself. Rather than spawning child containers, we’re spinning up siblings of our build image.

The above command will register a new Runner to use the special docker:stable image which is provided by Docker. Notice that it's using the Docker daemon of the Runner itself, and any containers spawned by docker commands will be siblings of the Runner rather than children of the runner. This may have complications and limitations that are unsuitable for your workflow. [3]

We found the problem! We can’t mount our build directories in the sibling test images because the build directory doesn’t exist on the host runner. We’re simply not running Docker on the filesystem that we expected!

Solution

Docker allows us to mount volumes from another container using the --volumes-from option [4].

The --volumes-from flag mounts all the defined volumes from the referenced containers.
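A quick illustration of the flag’s behavior, with made-up container names:

```shell
# "donor" declares an anonymous volume at /data and writes a file
# into it before exiting.
docker run --name donor --volume /data busybox touch /data/hello.txt

# --volumes-from mounts every volume donor defined (here read-only),
# so the sibling container sees /data/hello.txt.
docker run --rm --volumes-from donor:ro busybox ls /data

# Clean up the donor container and its anonymous volume.
docker rm -v donor
```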

If we can identify the id of our build job’s container, we can spawn a sibling test image that mounts the same volumes (and, therefore, our project files). With a little bit of scouring, we found the solution on Stack Overflow [5].

docker ps -q -f "label=com.gitlab.gitlab-runner.job.id=$CI_JOB_ID"

We can also allow our test script to account for both local and GitLab CI environments by checking for a CI_JOB_ID environment variable. With a few tweaks, we can adjust our example script to work both locally and in GitLab CI.

#!/bin/bash
touch example.txt
CONTENTS=`ls ./`
echo "The contents of my host volume ($PWD): $CONTENTS"
if [ -z "$CI_JOB_ID" ]; then
  # On a development machine, we'll just mount the package directory
  PACKAGE_DIR="/var/myVolume"
  VOLUME_OPTION="--volume $PWD:$PACKAGE_DIR:ro"
else
  # In a CI build, we'll use --volumes-from but we need to
  # identify the job's container id first
  JOB_CONTAINER_ID=`docker ps -q -f "label=com.gitlab.gitlab-runner.job.id=$CI_JOB_ID"`
  VOLUME_OPTION="--volumes-from ${JOB_CONTAINER_ID}:ro"
  # The package directory to jump to in the test container will be
  # the same directory/volume that this CI build mounted
  PACKAGE_DIR=$PWD
fi
read -r -d '' command <<- CMD
cd $PACKAGE_DIR
CONTENTS=\`ls ./\`
echo "The contents of container volume (\$PWD): \$CONTENTS"
CMD
docker run \
  --rm \
  --tty \
  --user node \
  ${VOLUME_OPTION} \
  --workdir /home/node \
  node:8 \
  bash -c "${command}"

That’s it! Now that this problem is solved, we intend to use this Docker-in-Docker testing approach for a number of difficult-to-test projects.

Doot Doot! Woot Woot!

References

[1] Winters, Patrick. “Discipline is the bridge between goals and accomplishment.”

[2] https://github.com/facebook/create-react-app/blob/v2.0.5/tasks/local-test.sh

[3] https://docs.gitlab.com/ee/ci/docker/using_docker_build.html

[4] https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container---volumes-from

[5] https://stackoverflow.com/a/49406631
