Using Docker Volumes for a Multi-Container Test Execution

Prajwal Gowda
TestVagrant
Jun 30, 2023


Yes! You read that right! Today, let's go through how to run tests in two containers and generate a single Allure report.

Before diving into it, let’s discuss the objectives of running tests in multiple containers.

  1. Parallelization: By running tests in multiple containers, you can parallelize the test execution. This allows you to run multiple tests simultaneously, significantly reducing the overall test execution time.
  2. Scalability: Containers are lightweight and can be easily scaled horizontally across multiple machines or cloud instances. This means you can distribute your tests across multiple containers and run them on different machines simultaneously.

Let’s discuss some concepts before diving in — Volumes

What is the need for volumes?

Containers are lightweight and isolated environments that package applications and their dependencies, allowing them to run consistently across different computing environments. Without a volume, the data inside a container is lost as soon as the container is removed.

What are volumes?

Volumes in Docker are a way to persist and share data between the host machine and containers. They provide a convenient mechanism for storing and accessing data outside of the container’s filesystem.
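For example, volumes can be managed directly from the Docker CLI. A minimal sketch (the volume name here is illustrative):

    docker volume create allure-results    # create a volume managed by Docker
    docker volume ls                       # list existing volumes
    docker volume inspect allure-results   # show where the data actually lives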

Let’s dive into the main parts of the code. You can access all the snippets and the entire code used below from this repo.

1. Dockerfile:

A Dockerfile contains a set of instructions to build a Docker image. In the Dockerfile snippet below, maven:3.8.4-openjdk-11-slim is used as the base image, and the files required to build our image, such as pom.xml, testng.xml, and the src/ folder, are copied in.
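A minimal sketch of that Dockerfile, assuming the standard Maven project layout (the dependency pre-fetch step is an assumption, not part of the original snippet):

    FROM maven:3.8.4-openjdk-11-slim

    WORKDIR /app

    # Copy the build descriptor and the TestNG suite definition
    COPY pom.xml testng.xml ./
    # Copy the test sources
    COPY src ./src

    # Pre-fetch dependencies so each container starts faster (assumed optimization)
    RUN mvn -B dependency:go-offline

    # The actual test command (mvn clean test ...) is supplied per
    # container by docker-compose at run time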

2. Docker-Compose:

In the first section, we discussed what volumes are and why we need them. Two issues were encountered while writing the docker-compose file. Let's go through them:

A. Volume Issue

Bind Mount:
A bind mount in Docker is a method of sharing files or directories between the host machine and a container. It allows you to mount a specific file or directory from the host’s filesystem into a designated location within the container.
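On the CLI, a bind mount maps a host path onto a container path. A quick sketch (the image name here is illustrative):

    # Mount the host's ./allure-results over /app/allure-results in the container
    docker run --rm -v "$(pwd)/allure-results:/app/allure-results" my-test-image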

When we tried using a bind mount, the data inside the mounted directory got overwritten whenever another container wrote to the same directory, causing the total test count displayed in the final Allure report to be incorrect.

Named Volume:
A named volume in Docker is a way to persist and manage data separately from containers. Named volumes offer advantages such as data persistence even if containers are removed or rebuilt, and the ability to manage data independently of container lifecycles.

Using a named volume, the volume was shared between the containers without the data being overridden. (The sketch below uses a named volume.)
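A minimal sketch of such a docker-compose file, with two test services sharing one named volume (the service names and the ${CONTAINER*_ARG} variables are assumptions; they are filled in by the entrypoint script shown later, and the real file lives in the repo):

    version: "3"
    services:
      tests-container1:
        build: .
        command: mvn clean test ${CONTAINER1_ARG}
        volumes:
          - allure-results:/app/allure-results
      tests-container2:
        build: .
        command: mvn clean test ${CONTAINER2_ARG}
        volumes:
          - allure-results:/app/allure-results
      # The third service, allure-docker-service, is covered in the
      # Reporting Issue section below.
    volumes:
      allure-results: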

B. Reporting Issue

Traditionally, while running the tests, the Allure results were stored in /app/allure-results inside the container. Now that multiple containers run the tests using a named volume, the following issues occurred:
i) It was not possible to run the allure serve command directly against the volume data, because Docker creates a Linux VM on macOS. Although the volume can be seen in the Docker UI, we cannot access it directly from the shell.
ii) The timing of completion of the process inside each container, specifically the test execution, was uncertain.
iii) If the tests were run again in the same container, we needed to keep track of old and new results.

Solutions:

a. Initially, we considered writing a shell script that uses the docker cp command to extract the /app/allure-results directory from both containers and save it into a specified results directory, after which the allure serve command could be run. But to solve reporting issues ii) and iii) highlighted above, polling would be required to check the directory and update the results whenever there is a change.
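For reference, that first idea would have looked roughly like this (the container names are assumptions):

    # Merge the results of both containers into one local directory
    mkdir -p allure-results
    docker cp tests-container1:/app/allure-results/. allure-results/
    docker cp tests-container2:/app/allure-results/. allure-results/
    # Serve a single combined report
    allure serve allure-results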

b. Then we got to know about allure-docker-service, to which CHECK_RESULTS_EVERY_SECONDS can be passed as configuration, and which provides other super cool features! Do check out the repo and its features. That is the third service you can see in the docker-compose file.
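Sketched as the third compose service (image and environment variables follow the allure-docker-service README; the polling interval and port mapping here are assumptions):

    allure:
      image: frankescobar/allure-docker-service
      environment:
        # Re-check the shared results directory every 3 seconds and
        # regenerate the report whenever new results appear
        CHECK_RESULTS_EVERY_SECONDS: 3
        KEEP_HISTORY: 1
      ports:
        - "5050:5050"
      volumes:
        - allure-results:/app/allure-results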

3. docker-compose-entrypoint:

This is the docker-compose entrypoint shell script.

a. In the first part, two arguments are expected. Each argument should be either the group of tests to be run or the XML suite file to be executed. A message is echoed if the number of arguments is less or more than expected.
For example:

    ./docker-compose-entrypoint.sh -Dgroups=container1 -Dgroups=container2

or

    ./docker-compose-entrypoint.sh -Dsurefire.suiteXmlFiles=container1.xml -Dsurefire.suiteXmlFiles=container2.xml

b. In the second part, a .env file is created, and the variables are appended to it. Further information about .env files can be found by referring to the provided link.

c. In the final part, docker-compose is executed in detached mode.
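Putting parts a to c together, the script might look roughly like this (the CONTAINER1_ARG/CONTAINER2_ARG variable names are assumptions matching the compose sketch above; see the repo for the real script):

    #!/bin/bash
    # Part a: expect exactly two arguments (groups or suite XML files)
    if [ "$#" -ne 2 ]; then
      echo "Expected 2 arguments, got $#"
      echo "Usage: $0 -Dgroups=<group1> -Dgroups=<group2>"
      exit 1
    fi

    # Part b: create a .env file and append the variables to it
    echo "CONTAINER1_ARG=$1" > .env
    echo "CONTAINER2_ARG=$2" >> .env

    # Part c: run docker-compose in detached mode
    docker-compose up --build -d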

Now let’s run the tests. Detailed instructions are specified in the README of the repo; do check it out and leave a ⭐️ if you like it 😄.

Future Scope

If you look at the docker-compose file, two images are being built from the same code. To avoid this, we could build a single image and spin up replicas of it as containers. But the requirement was to run different groups of tests in different containers, which implies running a different mvn clean test command in each container. This is tricky to handle in a single service, since there is no specific key that can be used to distinguish between the replicas. We could write a shell script that runs docker inspect <container-id>, greps a unique key, and passes the command based on that, but that becomes too tightly coupled to the key. If anybody knows how to handle this, do comment and share your insights.

That’s all for this blog. Thanks for reading!
