Load balancing docker containers using nginx running on docker

Antero Duarte
6 min read · Apr 4, 2018


In this story you will read about how we can easily use docker user-defined networks and nginx to load balance the same docker image running on multiple containers (which, with an overlay network, can even be running on different machines).

Using docker networks, we can run the load balancer container in the same network as the instance containers and then use each container's name as a reference.

This means we don’t need to know the internal IP address of the docker containers to communicate with them.

Use case: 4 load balanced instances of mock/docker-image

Assuming that mock/docker-image is an image available on the machine we are using, and that it exposes a service on port 80.

First up, we need to run the instances that we want to load balance.

The template for the command is:

docker run -d --network=<NETWORK> --name <INSTANCE_NAME>-<INSTANCE_NUMBER> <IMAGE>

Where:

  • NETWORK is the common docker user-defined network that all the containers share
  • INSTANCE_NAME is a meaningful name that identifies the service running
  • INSTANCE_NUMBER is a number incremented for every container instance of the load balanced image
  • IMAGE is the docker image to use

Optionally, we can add -p:

docker run -d --network=<NETWORK> -p <EXTERNAL_PORT>:<INTERNAL_PORT> --name <INSTANCE_NAME>-<INSTANCE_NUMBER> <IMAGE>

to map an internal port to an external port.

This is not needed for the load balancer to work, because the containers use a docker user-defined network to communicate with each other, but it might be useful for testing every instance separately and manually.

So for our use case, this becomes:

Assuming the docker network lb-net has been created before with:
docker network create lb-net

docker run -d --network=lb-net --name mock-1 mock/docker-image
docker run -d --network=lb-net --name mock-2 mock/docker-image
docker run -d --network=lb-net --name mock-3 mock/docker-image
docker run -d --network=lb-net --name mock-4 mock/docker-image

Neat trick:
for i in {1..<NUMBER>}; do docker run -d --network=lb-net --name mock-${i} mock/docker-image; done
Where
NUMBER is the number of containers to run

The result of docker ps should be similar to the following:

CONTAINER ID   IMAGE               COMMAND    CREATED              STATUS   PORTS   NAMES
29fdc7bc086f   mock/docker-image   "/hello"   About a minute ago   Up 1m            mock-4
b2981a5e26d9   mock/docker-image   "/hello"   About a minute ago   Up 1m            mock-3
f1fde4c3c506   mock/docker-image   "/hello"   About a minute ago   Up 1m            mock-2
f1f0a78aab5a   mock/docker-image   "/hello"   About a minute ago   Up 1m            mock-1

Creating the load balancer container

Now that all the instances are running in the same docker network, we need to define and configure an nginx instance that will act as the load balancer.

Command template:

(WARNING) Don't run this yet, there's something we need to do first.

docker run --network=<NETWORK> -p <EXTERNAL_PORT>:80 --name <CONTAINER_NAME> nginx

Where:

  • NETWORK is the network defined before for the container instances
  • CONTAINER_NAME is a meaningful name for the load balancer (usually something like {INSTANCE_NAME}-lb)
  • EXTERNAL_PORT is the port you want the service to be accessible from
  • 80 is the port nginx listens on by default; this is configurable and you could change it (you would have to change the nginx config as well), but why would you?

Nginx is super flexible and there are about a million different ways of achieving the result we want. You can tweak it to your specific use case, making it harder/better/faster/stronger. But for the purpose of this tutorial, we'll keep it simple.

Based on the configuration example in the official documentation, we can see that in a few lines we can create a config for nginx to act as our load balancer.

We’ll change it slightly to split the general server config from the actual load balancer config, just because this is the generally preferred way and it’s how the docker image is laid out.

So we need to put our general server config in /etc/nginx/nginx.conf:

This is where you would change server parameters like max upload size and the other parameters you can find in the nginx documentation.

nginx.conf
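The original article embedded this file as a gist, which is missing from this copy. A minimal sketch of what it might contain, modelled on the stock nginx.conf shipped with the official nginx image (the specific values here are assumptions):

```nginx
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    # Maximum simultaneous connections per worker process
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile           on;
    keepalive_timeout  65;

    # General server parameters (e.g. client_max_body_size) would go here

    # Pull in the server blocks from conf.d, including our load balancer
    include /etc/nginx/conf.d/*.conf;
}
```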

And we need to put our server block (the nginx equivalent to apache virtual hosts) in /etc/nginx/conf.d/default.conf:

conf.d/default.conf
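Again, the embedded gist is missing from this copy; based on the description that follows (an upstream block named servers, and a server listening on port 80 that proxies to it), a minimal conf.d/default.conf could look like:

```nginx
# The list of backend instances, referenced by docker container name;
# docker's embedded DNS resolves these names inside the lb-net network
upstream servers {
    server mock-1:80;
    server mock-2:80;
    server mock-3:80;
    server mock-4:80;
}

server {
    listen 80;

    location / {
        # Reverse proxy every request to the upstream block above
        proxy_pass http://servers;
    }
}
```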

In this config:

  • We define our list of servers as an upstream block that we call servers
  • Note how we can call our servers by their docker container name. This is because docker networks provide us with DNS resolution, which is what makes this solution super neat.
  • We create a server that listens on port 80 and reverse proxies to the upstream block.

Now we need to pass this config to the docker container, and there are several ways of doing so. The recommended way is to run the docker container with a volume where the nginx config lives. This way we don't need to go inside the docker container to change configuration, and the containers are reproducible.

Read more about docker volumes in the official docker documentation.

The obvious thing to do is to just mount /etc/nginx/ and have all the config available from the host, but there is a problem with this approach. The way docker handles volumes works a bit like when you plug in a USB stick on a linux machine. It creates a mount point where we specify it.

This means that when we create a volume at /etc/nginx/, the mount will essentially obscure the original /etc/nginx/ directory, which means we have to recreate the whole structure in the host machine, including placing all the default nginx files that we don't really need to worry about otherwise.

Output of tree on /etc/nginx (files we don't need to care about)

Thankfully, there’s a better way of doing this. (Bonus points because there are several hacky ways of doing it, but we’ll do it in the recommended way.)

Using docker volumes, you can also mount single files, which means the following are perfectly valid mounts:

# No surprise here, mount a dir
-v $(pwd)/conf.d:/etc/nginx/conf.d/
# Mount a single file
-v $(pwd)/nginx.conf:/etc/nginx/nginx.conf

So the full running command becomes:

docker run \
--network=<NETWORK> \
-p <EXTERNAL_PORT>:80 \
--name <CONTAINER_NAME> \
-v <RUNDIR>/conf.d:/etc/nginx/conf.d/ \
-v <RUNDIR>/nginx.conf:/etc/nginx/nginx.conf nginx

Where:

  • RUNDIR is the directory that contains your nginx.conf file and your conf.d directory with your default.conf file

(IMPORTANT) If you didn’t have an actual use case and were just following along with the tutorial, this is the point where you actually need containers running as the instances we defined before; otherwise (as you might have noticed if you ran the command above), nginx will throw an error and stop your container.

If you just want to test this tutorial, go back to where you defined the container instances and change mock/docker-image to nginx, which will create an nginx server with the default html response.

Done and testing

The load balanced services should now be accessible through what you defined as EXTERNAL_PORT when creating the load balancer container.

If you’re sceptical and don’t believe me, here’s a way to check for yourself:

This assumes that the server you’re load balancing logs a new line for every request.

If it doesn’t, I’m afraid you’ll have to figure out your own way of testing your servers.

docker logs -f mock-1 will show you the logs for the mock-1 container.

Run this in a terminal window

If you open a new terminal window and run curl localhost:<EXTERNAL_PORT>, you'll get a reply and you should see the log line in the first terminal window.

If you run curl localhost:<EXTERNAL_PORT> again, you should still get a reply but you will not see a new line in the logs.

This is because a different server (the next one in the round-robin rotation) answered your request.

If you keep running this command, eventually you will get another response from mock-1, which you can confirm by seeing the new line in the log.

Bonus Trick: docker-logs

Docker has a built-in command to show logs for a container, docker logs <CONTAINER_NAME>, but it can only show logs for a single container.

For this case, it would be better if we could watch several containers' logs at the same time. I found a neat answer to this problem on Stack Overflow, which I've tweaked to my preference.

The main differences in my version are:

  • Doesn’t use docker logs -f, so we don’t have to deal with forking/sub-processes
  • There’s a clear line (literally) separating every log instead of just prepending the container name
  • It only runs docker logs once, meaning you have to use something like watch to keep refreshing the logs (see Usage)
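The script itself was embedded in the original article and isn't reproduced in this copy; a sketch consistent with the description above might look like this (the separator style and the --tail count are assumptions):

```shell
#!/bin/bash
# docker-logs: print the logs of each container given as an argument,
# one after another, with a labelled separator line between them.
# It only runs `docker logs` once per container (no -f), so wrap it
# in watch(1) to keep the output refreshing.

docker_logs() {
    for container in "$@"; do
        # A clear line (literally) separating each container's log
        printf '================ %s ================\n' "${container}"
        # `|| true` keeps going even if a container doesn't exist
        docker logs --tail 20 "${container}" 2>&1 || true
    done
}

docker_logs "$@"
```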

I created a file with the above contents in /usr/bin/docker-logs and ran chmod +x /usr/bin/docker-logs

Usage

docker-logs container-1 container-2 ... container-n 
# OR
docker-logs mock-1 mock-2 mock-3 mock-4

To refresh the logs every 100ms (requires watch)

watch -n 0.1 docker-logs container-1 container-2 ... container-n
# OR
watch -n 0.1 docker-logs mock-1 mock-2 mock-3 mock-4
An example three-terminal setup for watching this in action:

  • Top terminal: watch -n 0.1 docker-logs mock-1 mock-2 mock-3 mock-4
  • Bottom-left terminal: while true; do curl -s localhost:5000 > /dev/null && sleep 1; done
  • Bottom-right terminal: docker attach <CONTAINER_NAME>

And that’s it: super simple load balanced docker containers using nginx and docker networks.
