Docker: Test Environment

Over the last five years a lot has changed thanks to technologies for building isolated test and development environments. Still, setting up a stable test environment is not a simple task, and if you also need to test how networked components interact and analyze the maximum load they can handle, the task becomes even harder. Add the requirements of fast environment deployment and flexible configuration of individual components, and you get a small but interesting project.

In this article we will describe the Docker-based test environment for our client-server application. Along the way, the article serves as a practical illustration of Docker containers and the ecosystem around them.

Problem statement

So, the situation is as follows:

  • Our service is written in Go and has a client-server architecture.
  • LogPacker can write data to multiple storages, and multiple worker instances can read the data in parallel. This matters a lot when building the test environment.
  • Developers need a way to troubleshoot the test environment quickly and safely.
  • We have to test how networked components interact in a distributed environment spread over several nodes. In particular, we need to analyze the traffic flow between clients and servers.
  • We need to control resource consumption and make sure the daemon stays stable under high load.
  • And, of course, we want to see all possible metrics, both in real time and in the test results.

As a result, we decided to build the test environment on Docker and its related technologies. It meets all the requirements above and lets us use hardware resources efficiently, without buying a separate server for each component. The hardware here can be a dedicated server, a set of servers, or even a developer’s laptop.

Test environment architecture

  • An arbitrary number of server instances of our application.
  • An arbitrary number of agents.
  • Separate environments with data storages: ElasticSearch, MySQL, or PostgreSQL.
  • A load generator (we implemented a simple stress generator, but any other would do, for example Yandex.Tank or Apache Benchmark).

The test environment should be easy to scale and maintain.

We built a distributed network environment from Docker containers, which isolate internal and external services, and docker-machine, which provides an isolated test environment. The resulting architecture looks like this:

For environment visualization we use Weave Scope, a convenient and well-arranged service for monitoring Docker containers.

With this approach it is convenient to test the interaction of SOA components, for example small client-server applications like ours.

Basic environment establishment

Let’s start with docker-machine, which will provide the test virtual environment. It is easy to work with this environment directly from the host system.

Now, let’s create test machine:

$ docker-machine create -d virtualbox testenv
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env testenv

This command creates a VirtualBox VM with boot2docker and Docker installed. (If you work on Windows or macOS, it is recommended to install Docker Toolbox, which already includes everything; if you work on Linux, you have to install docker, docker-machine, docker-compose, and VirtualBox manually.) We recommend exploring the other capabilities of docker-machine, because it is a powerful tool for environment management.

As the output shows, docker-machine sets up all the components required to work with the virtual machine. Once created, the virtual machine is started and ready for work. Let’s check:

$ docker-machine ls
testenv virtualbox Running tcp://

Once the virtual machine is up, we need to activate access to it in the current session. Let’s go back to the previous step and look carefully at the last line:

To see how to connect Docker to this machine, run: docker-machine env testenv

This is the auto-setup for our session. Running the command prints the following:

$ docker-machine env testenv
export DOCKER_HOST="tcp://"
export DOCKER_CERT_PATH="/Users/logpacker/.docker/machine/machines/testenv"
export DOCKER_MACHINE_NAME="testenv"
# Run this command to configure your shell:
# eval "$(docker-machine env testenv)"

This is just a set of environment variables that tell your local docker client where to find the server. The last line contains a hint; let’s run the command and look at the ls output:

$ eval "$(docker-machine env testenv)"
$ docker-machine ls
testenv * virtualbox Running tcp://

In the ACTIVE column our active machine is marked with an asterisk. Note that the machine is active only within the current session: we can open another terminal window and activate a different machine there. This can be convenient for testing orchestration with Swarm, but that’s a topic for a separate article :)
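The eval line above works because `docker-machine env` simply prints export statements, which the shell then evaluates. A small stub makes the mechanism clear (the function and the address below are hypothetical stand-ins, not real docker-machine output):

```shell
#!/bin/bash
# machine_env is a hypothetical stand-in for `docker-machine env testenv`:
# like the real command, it prints export statements.
machine_env() {
    echo 'export DOCKER_HOST="tcp://192.168.99.100:2376"'
    echo 'export DOCKER_MACHINE_NAME="testenv"'
}

# eval-ing the output puts the variables into the current shell session
eval "$(machine_env)"
echo "$DOCKER_MACHINE_NAME"
```

Because the variables live only in the shell that ran the eval, each terminal window can point at a different machine.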

Now, let’s check our docker-server:

$ docker version
Client:
 Version:      1.8.0
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0d03096
 Built:        Tue Aug 11 17:17:40 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      linux/amd64

Note the server’s OS/Arch: it is always linux/amd64, because the docker server runs inside the VM. Don’t forget about that.

Let’s step back and look inside the VM:

$ docker-machine ssh testenv
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o           __/
             \    \           __/
              \____\_________/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.9.1, build master : cef800b - Fri Nov 20 19:33:59 UTC 2015
Docker version 1.9.1, build a34a1d5

This is boot2docker, but what interests us is something else. Let’s look at the mounted partitions:

docker@testenv:~$ mount
tmpfs on / type tmpfs (rw,relatime,size=918088k)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,mode=600,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda1 on /mnt/sda1 type ext4 (rw,relatime,data=ordered)
[... cgroup skipped ...]
none on /Users type vboxsf (rw,nodev,relatime)
/dev/sda1 on /mnt/sda1/var/lib/docker/aufs type ext4 (rw,relatime,data=ordered)
docker@testenv:~$ ls /Users/
Shared/ logpacker/

In this case we use macOS, so the /Users directory (the analog of /home on Linux) is mounted inside the machine. This lets us work transparently with host-system files from docker: we can easily attach and detach volumes without thinking about the VM layer, which is very efficient. In theory we can forget about the VM entirely; it exists only so that docker can run in its “native” environment, and using the docker client remains completely transparent. The basic environment is ready, and now we have to run the Docker containers.
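Shared-folder mounts like /Users can be spotted by filtering the mount table for the vboxsf filesystem type. The snippet below feeds a sample of the mount output shown above into awk, so it runs anywhere:

```shell
#!/bin/bash
# SAMPLE reproduces two lines of the `mount` output shown above.
SAMPLE='none on /Users type vboxsf (rw,nodev,relatime)
/dev/sda1 on /mnt/sda1 type ext4 (rw,relatime,data=ordered)'

# In `mount` output the fields are: <dev> on <dir> type <fstype> (<opts>),
# so field 5 is the filesystem type and field 3 is the mount point.
echo "$SAMPLE" | awk '$5 == "vboxsf" {print $3}'
```

Inside the real VM the same filter would simply be `mount | awk '$5 == "vboxsf" {print $3}'`.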

Setting and running containers

Docker’s ideology is “one process — one container”, and we decided to follow it. So we need to run the following configuration:

  • Three containers with the server part of the application.
  • Three containers with the client part of the application.
  • A load generator for each agent. As an example we will use Yandex.Tank and Apache Benchmark with Nginx, which will generate logs.
  • LogPacker can also work in “dual mode”, where client and server live on the same host as a single application instance acting as both. We will run it in a container under supervisord control, and in the same container we will run our own load generator as the main process.

Thanks to Go, our application ships as a single executable file, which lets us create a universal container for running the service inside the test environment. There are some nuances in the last step, running the service in “dual mode”; we will cover them a bit later.

Now let’s prepare docker-compose.yml, the file with directives for docker-compose that will let us bring up the whole test environment with a single command:

# external services
elastic:
  image: elasticsearch
ngx_1:
  image: nginx
  volumes:
    - /var/log/nginx
ngx_2:
  image: nginx
  volumes:
    - /var/log/nginx
ngx_3:
  image: nginx
  volumes:
    - /var/log/nginx
# lp servers
lp_server_1:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -s -v -devmode -p="
  links:
    - elastic
  expose:
    - "9995"
    - "9998"
    - "9999"
lp_server_2:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -s -v -devmode -p="
  links:
    - elastic
    - lp_server_1
  expose:
    - "9995"
    - "9998"
    - "9999"
lp_server_3:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -s -v -devmode -p="
  links:
    - elastic
    - lp_server_1
    - lp_server_2
  expose:
    - "9995"
    - "9998"
    - "9999"
# lp agents
lp_agent_1:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -a -v -devmode -p="
  volumes_from:
    - ngx_1
  links:
    - lp_server_1
lp_agent_2:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -a -v -devmode -p="
  volumes_from:
    - ngx_2
  links:
    - lp_server_1
lp_agent_3:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -a -v -devmode -p="
  volumes_from:
    - ngx_3
  links:
    - lp_server_1

The file is straightforward. First we start elasticsearch as the main storage, then three instances of nginx, which will produce the load (logs). After that we start the server applications. Note that each subsequent container is linked to the previous ones; in terms of the docker network this lets us address containers by name. We will come back to this when we review running the service in “dual mode”. The first container with a server-application instance is linked to the agents, which means all three agents will send their logs to that particular server.

Our application is designed so that, to join the cluster, a new node (agent or server) only needs to be told about one existing cluster node, from which it receives full information about the whole system. In the configuration of each server instance we point at the first node, so the agents automatically learn the current state of the system. After all nodes are up we can even stop that first instance; the cluster stays safe, because the system information has been distributed to all participants. Also pay attention to how the volumes are mounted: on the nginx containers we declare the volume in the manifest, making it available inside the docker network, while on the agent containers we simply attach it by naming the source container. As a result we get a volume shared between the producers and the consumers of the load.

Let’s run our environment:

$ docker-compose up -d

Let’s check that everything is working properly:

$ docker-compose ps
Name Command State Ports
assets_lp_agent_1_1 bash -c cd /opt/logpacker ... Up
assets_lp_agent_2_1 bash -c cd /opt/logpacker ... Up
assets_lp_agent_3_1 bash -c cd /opt/logpacker ... Up
assets_lp_server_1_1 bash -c cd /opt/logpacker ... Up 9995/tcp, 9998/tcp, 9999/tcp
assets_lp_server_2_1 bash -c cd /opt/logpacker ... Up 9995/tcp, 9998/tcp, 9999/tcp
assets_lp_server_3_1 bash -c cd /opt/logpacker ... Up 9995/tcp, 9998/tcp, 9999/tcp
assets_ngx_1_1 nginx -g daemon off; Up 443/tcp, 80/tcp
assets_ngx_2_1 nginx -g daemon off; Up 443/tcp, 80/tcp
assets_ngx_3_1 nginx -g daemon off; Up 443/tcp, 80/tcp
elastic / elas ... Up 9200/tcp, 9300/tcp

Great: the environment is up, everything works, and all ports are exposed. In theory we could start testing, but there are still a few loose ends.

Building the service container

The final Dockerfile looks like this:

FROM ubuntu:14.04
# Setup locale environment variables
RUN locale-gen en_US.UTF-8
# Ignore interactive
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install -y wget unzip curl python-pip
# Install supervisor via pip for latest version
RUN pip install supervisor
RUN mkdir -p /opt/logpacker
ADD final/logpacker /opt/logpacker/logpacker
ADD supervisord-logpacker-server.ini /etc/supervisor/conf.d/logpacker.conf
ADD supervisor.conf /etc/supervisor/supervisor.conf
# Load generator
ADD /opt/
# Start script
ADD /opt/

The load generator is quite simple:

#!/bin/bash
# generate random lines
while true; do
    _RND_LENGTH=`awk -v min=1 -v max=100 'BEGIN{srand(); print int(min+rand()*(max-min+1))}'`
    _RND=$(( ( RANDOM % 100 ) + 1 ))
    _A="[$RANDOM-$_RND] $(dd if=/dev/urandom bs=$_RND_LENGTH count=1 2>/dev/null | base64 | tr = d)"
    echo $_A
    echo $_A >> /tmp/logpacker/lptest.$_RND.$OUTPUT_FILE
done
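The generator loop can be checked in isolation. Below is a bounded sketch of the same idea (our own variant, not the project’s script: a fixed line count, a capped payload length, and a temp file instead of /tmp/logpacker):

```shell
#!/bin/bash
# Bounded sketch of the generator: write exactly 5 random lines.
OUT="$(mktemp)"
for i in 1 2 3 4 5; do
    # random payload length between 1 and 40 bytes (capped so the
    # base64 output stays on a single line)
    _RND_LENGTH=$(awk -v min=1 -v max=40 'BEGIN{srand(); print int(min+rand()*(max-min+1))}')
    _RND=$(( (RANDOM % 100) + 1 ))
    echo "[$RANDOM-$_RND] $(dd if=/dev/urandom bs="$_RND_LENGTH" count=1 2>/dev/null | base64 | tr = d)" >> "$OUT"
done
wc -l < "$OUT"
```

Each iteration emits one tagged line of random base64 data, the same shape of “log noise” the real generator feeds to nginx log files.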

The start script is simple, too:

#!/bin/bash
# run daemon
supervisord -c /etc/supervisor/supervisor.conf
# launch randomizer

The trick is in the supervisord configuration file and the way the docker container is started.

Let’s examine the configuration file:

command=/opt/logpacker/logpacker %(ENV_LOGPACKER_OPTS)s

Pay attention to %(ENV_LOGPACKER_OPTS)s. Supervisord can substitute environment variables into the configuration file: a variable is written as %(ENV_VAR_NAME)s, and its value is substituted into the configuration when the daemon starts.
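For reference, a minimal program section built around this substitution might look like the following (the section name and the extra options here are our assumptions, not the project’s actual file):

```ini
[program:logpacker]
; %(ENV_LOGPACKER_OPTS)s is expanded from the container's environment
command=/opt/logpacker/logpacker %(ENV_LOGPACKER_OPTS)s
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/logpacker.log
```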

$ docker run -it -d --name=dualmode --link=elastic -e 'LOGPACKER_OPTS=-s -a -v -devmode' logpacker_dualmode /opt/

The -e flag sets an environment variable globally inside the container, and supervisord substitutes that very variable into its configuration file. This way we can control the daemon’s start-up flags and run it in the required mode.

We get a universal container, even if it doesn’t fully follow the one-process-per-container ideology. Let’s look inside:

$ docker exec -it dualmode bash
$ env
LOGPACKER_OPTS=-s -a -v -devmode
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s

Besides the variable we set at container start, we see all the variables related to the linked container: its IP address, all open ports, and every variable set via the ENV directive when the elasticsearch image was built. Each variable is prefixed with the upper-cased name of the linked container plus a description of what it holds; for example, ELASTIC_PORT_9300_TCP_ADDR holds the IP address of the container named elastic, whose port 9300 is open. Although this mechanism doesn’t scale as a discovery service, it is a great way to get the IP addresses and other data of linked containers, and these variables can be used directly by applications running inside Docker containers.
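An application can pick these variables up with plain environment lookups. A sketch in shell (the variable names follow the --link convention described above; the fallback values are ours, for running outside a container):

```shell
#!/bin/bash
# Read the elastic link variables that docker injects into the container;
# fall back to local defaults when running outside a container.
ES_ADDR="${ELASTIC_PORT_9300_TCP_ADDR:-127.0.0.1}"
ES_PORT="${ELASTIC_PORT_9300_TCP_PORT:-9300}"
echo "elasticsearch at ${ES_ADDR}:${ES_PORT}"
```

Inside the test environment the injected values win; on a developer machine the defaults keep the script usable.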

Container management and monitoring

Let’s install Weave Scope, which we mentioned earlier:
$ wget -O scope
$ chmod +x scope
$ scope launch

After running these commands, open http://VM_IP:4040 in a browser. This is the container-management interface:

That’s great. Almost everything is ready; the only thing left is the monitoring system. Let’s use cAdvisor by Google:

$ docker run \
    --volume=/:/rootfs:ro \
    --volume=/var/run:/var/run:rw \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:ro \
    --publish=8080:8080 \
    --detach=true \
    --name=cadvisor \
    google/cadvisor:latest

At http://VM_IP:8080 we get a real-time resource monitoring system, where we can watch and analyze the main metrics of our environment:

  • System resource usage.
  • Network load.
  • The task list.
  • Other useful information.

The cAdvisor interface is shown in the picture below:


All the main requirements are met:

  • Full network emulation for testing network interaction.
  • Adding and removing nodes requires only a change in docker-compose.yml and a single command.
  • Every node can get full information about the network environment.
  • Storages are added and removed with a single command.
  • The system can be monitored and managed from a browser, using tools that run in containers next to our application and therefore stay isolated from the host system.

Links to all the tools mentioned in the article: