DOCKER: LOAD BALANCING

Kelom
5 min read · Sep 25, 2020


Image credit: www.haproxy.com

What is Load Balancing?

Load balancing is the distribution of network/application traffic across multiple servers. It prevents a single server from being overwhelmed with too many requests.

A load balancer sits between client devices and servers, receiving and then distributing incoming requests to any available server.
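HAProxy's default distribution algorithm is round-robin: each incoming request is handed to the next server in the list, wrapping around after the last one. A minimal sketch of that idea in plain JavaScript (the server names match the ones we use later in this article):

```javascript
// Round-robin selection: cycle through the servers in order,
// wrapping back to the first after the last.
const servers = ["appserver1", "appserver2", "appserver3", "appserver4"];
let next = 0;

function pickServer() {
  const server = servers[next];
  next = (next + 1) % servers.length; // wrap around after the last server
  return server;
}

// Four consecutive "requests" are spread over four different servers:
console.log(pickServer()); // appserver1
console.log(pickServer()); // appserver2
console.log(pickServer()); // appserver3
console.log(pickServer()); // appserver4
```

A real load balancer also tracks server health and skips unresponsive servers, but the core rotation is this simple.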

Benefits of Load Balancing:

1. It improves application responsiveness.

2. It increases application availability.

3. It improves application security.

In this article, we will look at load balancing in Docker. This is by no means a comprehensive article on Docker or load balancing; it is just a simple introduction to load balancing in Docker using HAProxy.

Prerequisites:

· Ubuntu installed

· Docker and docker-compose installed (read here on how to install docker and docker-compose.)

You can clone the complete project from git: https://github.com/kelom-x/docker-loadbal.git

BUILDING THE IMAGE FOR THE APPSERVERS

After cloning the complete project from git, we can test it by:

Step 1: Change directory to the folder containing the files (mine is medium-post, located on the Desktop):

cd Desktop/medium-post/

Step 2: Run this command to build the image:

sudo docker build -t greatapp .

CREATING THE CONTAINERS

Still in the same directory or folder, we create the containers by running:

sudo docker-compose up

The containers (1 load balancer and 4 app servers) have been successfully created.

TESTING THE LOAD BALANCER

We created the load balancer to listen on port 8080, so let's enter the URL in the browser:

http://localhost:8080/

We get a response from the load balancer.

Each time we refresh the browser, the load balancer directs us to a different app server, cycling through all four.

We have successfully built our load balancer in docker.

OVERVIEW OF FILES

This is my folder structure:

Content of index.js:

const app = require("express")();
const svrid = process.env.SVRID;

app.get("/", (req, res) => res.send(`this is server ${svrid} speaking!`));

app.listen(svrid, () => console.log(`${svrid} is listening on ${svrid}`));

index.js is a simple Node app that reads the environment variable SVRID and uses it both as the server's id and as its listening port:

res.send(`this is server ${svrid} speaking!`)

So when a value, say 2, is passed to SVRID, we end up displaying:

this is server 2 speaking!

in the user’s browser.

And

app.listen(svrid, ()=>console.log(`${svrid} is listening on ${svrid}`))

logs

2 is listening on 2

in the console

Content of package.json:

{
  "name": "app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "app": "node index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  }
}

Content of haproxy.cfg

frontend http
  bind *:8080
  mode http
  timeout client 10s
  use_backend all

backend all
  mode http
  timeout server 10s
  timeout connect 10s
  server s1 appserver1:1
  server s2 appserver2:2
  server s3 appserver3:3
  server s4 appserver4:4

frontend http: This configures a frontend named http, which accepts all incoming HTTP traffic on port 8080 and sends that traffic to the backend named all.

backend all: A backend can contain one or many servers. Adding more servers to your backend generally increases the reliability and load capacity of the configured service by distributing the load over multiple servers. In this particular instance we have 4 servers; each appserverN listens on port N, matching the SVRID passed to it in docker-compose.yml.
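Because no balance directive appears in this backend, HAProxy falls back to its default algorithm, round-robin, which is why refreshing the browser cycles through the servers. The choice can be made explicit by adding one line to the backend (this directive is not in the project's haproxy.cfg; it is an optional addition):

```
backend all
  mode http
  balance roundrobin   # explicit, but roundrobin is also the default
  timeout server 10s
  timeout connect 10s
  server s1 appserver1:1
  server s2 appserver2:2
  server s3 appserver3:3
  server s4 appserver4:4
```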

Content of Dockerfile

FROM node:12
WORKDIR /home/node/app
COPY app /home/node/app
RUN npm install
CMD npm run app

FROM node:12 -> pulling node image from the docker repo.

WORKDIR /home/node/app -> setting our WORKDIR directory to /home/node/app

COPY app /home/node/app -> copying the app folder (which contains index.js, package-lock.json and package.json) on our machine to /home/node/app in the container

RUN npm install -> will install the packages/dependencies defined/listed in the package.json

CMD npm run app -> runs the node app
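One common refinement of this Dockerfile, not used in the project but worth knowing about, is to copy the package manifests and run npm install before copying the rest of the app. Docker caches each layer, so code-only changes no longer force a re-install of dependencies on every rebuild. A sketch of that variant:

```
FROM node:12
WORKDIR /home/node/app
# Copy only the manifests first so the npm install layer can be cached
COPY app/package*.json ./
RUN npm install
# Copy the rest of the app; code changes no longer invalidate the layer above
COPY app /home/node/app
CMD npm run app
```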

Content of docker-compose.yml (indentation is very important)

version: '3'
services:
  loadbal:
    image: haproxy
    ports:
      - "8080:8080"
    volumes:
      - ./haproxy:/usr/local/etc/haproxy
  appserver1:
    image: greatapp
    environment:
      - SVRID=1
  appserver2:
    image: greatapp
    environment:
      - SVRID=2
  appserver3:
    image: greatapp
    environment:
      - SVRID=3
  appserver4:
    image: greatapp
    environment:
      - SVRID=4

We are creating 5 services (loadbal, appserver1, appserver2, appserver3, appserver4).

We are creating loadbal from the haproxy image on the Docker repo, mapping it to port 8080 on the host machine, and attaching a volume so HAProxy can read haproxy.cfg from the ./haproxy folder on the host.

We are also creating 4 app servers using the greatapp image (which we created from our Dockerfile) and passing the values 1, 2, 3 and 4 to SVRID.
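Following the same pattern, a fifth app server could be added (hypothetically) by appending one more service to docker-compose.yml:

```
  appserver5:
    image: greatapp
    environment:
      - SVRID=5
```

along with a matching `server s5 appserver5:5` line in the backend of haproxy.cfg, so the load balancer knows about the new server.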

There are many tools and techniques for load balancing in Docker (such as Docker Swarm, NGINX, etc.); this post is just a "simplification" of load balancing in Docker.
