How to deploy a large-scale Python application with Docker Swarm?

Emmanuel Hodonou
Published in Analytics Vidhya · 10 min read · Nov 13, 2019

Docker container technology was launched in 2013 as the open source Docker Engine. It leveraged existing computing concepts around containers, specifically Linux primitives known as cgroups and namespaces. Docker’s technology is unique because it focuses on the requirements of developers and system operators to separate application dependencies from infrastructure.

Docker Swarm is a natural addition to Docker. It is designed to make it easy to manage container scheduling across multiple hosts.

  • Main point: it lets you connect multiple Docker hosts together into a single cluster.
  • It’s relatively simple. Compared with other solutions, getting started with Docker Swarm is really easy.
  • High availability: there are two node types in a cluster, manager and worker. One of the managers is the leader. If the current leader fails, another manager becomes the leader. If a worker host fails, all of its containers are rescheduled to other nodes.
  • Declarative configuration. You declare what you want, for example how many replicas, and containers are automatically scheduled with respect to the given constraints.
  • Rolling updates: Swarm stores the configuration for containers. If you update the configuration, containers are updated in batches, so by default the service stays available the whole time.
  • Built-in service discovery and load balancing, similar to the load balancing done by Docker Compose. You can reference other services by name; it doesn’t matter on which nodes their containers run, they receive requests in a round-robin fashion.
  • Overlay network: if you expose a port from a service, it will be available on any node in the cluster. This really helps with external load balancing.
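To make the round-robin behavior concrete, here is a minimal, illustrative Python sketch of how a round-robin balancer hands requests to a fixed set of backends. This is not Swarm’s actual implementation, and the backend IPs are made up:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of round-robin request distribution (illustrative only)."""

    def __init__(self, backends):
        # cycle() yields the backends over and over, in order.
        self._backends = cycle(backends)

    def pick(self):
        # Each call returns the next backend in the rotation.
        return next(self._backends)

lb = RoundRobinBalancer(["10.0.0.2", "10.0.0.3", "10.0.0.4"])
targets = [lb.pick() for _ in range(6)]
# Six requests are spread evenly: each backend receives exactly two.
```

Swarm does this for you at the service virtual IP, so clients never need to know where the individual containers run.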

What will we cover in this article?

  • We will form a cluster of three machines with Docker Swarm
  • We will show how to launch a Docker registry
  • We will build and tag a Python application image, then push it to our registry
  • We will deploy a stack and show some useful Swarm commands

Prerequisites:

— Machines with a Debian Linux distribution installed
— Docker installed on each machine
— Machines that can reach each other and send and receive packets

What should you know before starting to set up a swarm?

— In your cluster, you can have one, two, three, or more servers.
— Each machine is called a node in your cluster.
— To take advantage of swarm mode’s fault-tolerance features, Docker recommends an odd number of manager nodes, chosen according to your organization’s high-availability requirements.
— To set up a swarm, you need manager nodes/machines and worker nodes/machines.

The manager nodes handle cluster management tasks:

  • maintaining cluster state
  • scheduling services
  • serving swarm mode HTTP API endpoints

The worker nodes don’t participate in the Raft distributed state, don’t make scheduling decisions, and don’t serve the swarm mode HTTP API.

— An N-manager cluster tolerates the loss of at most (N-1)/2 managers.
Example: a three-manager swarm tolerates the loss of at most one manager.
— You can create a swarm with a single manager node, but you cannot have a worker node without at least one manager node. By default, all managers are also workers. In a single-manager cluster, you can run commands like docker service create and the scheduler places all tasks on the local Engine.
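The quorum arithmetic above is easy to encode. A small helper (the function name is mine) makes the (N-1)/2 rule explicit:

```python
def tolerated_manager_failures(n_managers: int) -> int:
    """Managers use Raft, which needs a majority (quorum) to stay up.

    An N-manager swarm therefore tolerates losing at most (N - 1) // 2
    managers: 1 manager tolerates 0 failures, 3 tolerate 1, 5 tolerate 2.
    """
    return (n_managers - 1) // 2

for n in (1, 3, 5, 7):
    print(f"{n} managers -> tolerates {tolerated_manager_failures(n)} failure(s)")
```

Note that an even manager count buys nothing: 4 managers tolerate the same single failure as 3, which is why odd numbers are recommended.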

1- Form a cluster of machines.

For this example, I am using 3 machines.

Below are my machines’ roles and IPs (all three machines will be managers):
My manager one has the IP 10.10.25.165 with the hostname serveur17
My manager two has the IP 10.10.25.166 with the hostname serveur18
My manager three has the IP 10.10.25.167 with the hostname serveur19

a- Open three terminal sessions and SSH into the three servers.

ssh root@machine_IP and enter the password.

On terminal 1

$ ssh root@10.10.25.165

On terminal 2

$ ssh root@10.10.25.166

On terminal 3

$ ssh root@10.10.25.167

b- Set up your machines’ hosts files
Repeat this operation on all 3 machines:
— Edit the hosts file on each server

$ vim /etc/hosts

— Add these 3 lines to the hosts file of each machine and save it

10.10.25.165 dockernode1 
10.10.25.166 dockernode2
10.10.25.167 dockernode3

c- Ping the machines from one another to verify that packets are sent and received
From the dockernode1 machine (10.10.25.165), ping the other managers, dockernode2 and dockernode3

$ ping dockernode2 
$ ping dockernode3

From the dockernode2 machine (10.10.25.166), ping the other managers, dockernode1 and dockernode3

$ ping dockernode1 
$ ping dockernode3

From the dockernode3 machine (10.10.25.167), ping the other managers, dockernode1 and dockernode2

$ ping dockernode1 
$ ping dockernode2

Possible issues: if packets are not received, check your firewall settings.
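Keep in mind that ping only checks ICMP. Swarm also needs specific ports open between nodes: TCP 2377 for cluster management, TCP/UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic. A small hypothetical Python helper (not part of Docker) for the TCP side of that check could look like this:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example using this article's manager IPs (run from another node,
# once the swarm is initialized so port 2377 is listening):
# for host in ("10.10.25.165", "10.10.25.166", "10.10.25.167"):
#     print(host, "2377 open:", tcp_port_open(host, 2377))
```

UDP reachability (7946 and 4789) can’t be probed this way, since UDP is connectionless; a blocked UDP port usually shows up later as overlay-network traffic silently failing.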

d- Initialize Docker Swarm and form the cluster

  • Go to the dockernode1 machine (10.10.25.165) to initialize our swarm
$ docker swarm init --advertise-addr 10.10.25.165

You will see output similar to the following:
Swarm initialized: current node (0knoujuvtkoq1pg3mhjhxsbhn) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-3g85y47bq6w7frwqq7gnpxqg85klk02escd5n5i180eo6yiwb1-e46nzdj2oh4yc62p66rogdj7h 10.10.25.165:2377

To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.

After initializing the swarm, still on the dockernode1 machine, run this command:

$ docker swarm join-token manager

You will see output similar to the following:

To add a manager to this swarm, run the following command:

docker swarm join --token SWMTKN-1-3g85y47bq6w7frwqq7gnpxqg85klk02escd5n5i180eo6yiwb1-4wes668cwnawczst9v32736g6 10.10.25.165:2377

Copy the command from this output. It will be used to add our other servers to the swarm as managers.

  • Go to the dockernode2 machine (10.10.25.166) to add it to our swarm

To add the dockernode2 machine to the swarm as a manager, run this command:

$ docker swarm join --token SWMTKN-1-3g85y47bq6w7frwqq7gnpxqg85klk02escd5n5i180eo6yiwb1-4wes668cwnawczst9v32736g6 10.10.25.165:2377

You will see output similar to the following:

This node joined a swarm as a manager.

The output tells you that dockernode2 is now a manager.

  • Go to the dockernode3 machine (10.10.25.167) to add it to our swarm

To add the dockernode3 machine to the swarm as a manager, run this command:

$ docker swarm join --token SWMTKN-1-3g85y47bq6w7frwqq7gnpxqg85klk02escd5n5i180eo6yiwb1-4wes668cwnawczst9v32736g6 10.10.25.165:2377

You will see output similar to the following:

This node joined a swarm as a manager.

The output tells you that dockernode3 is now a manager.

e- Check the nodes’ status
$ docker node ls

You will see output similar to the following:

Our cluster is ready…

2- Set up a registry

Before deploying an application on the cluster, we need to push the application’s Docker image to a registry.

The registry is the place where images are stored and tagged for later use.

The main component of a Docker-based workflow is an image, which contains everything needed to run an application. Images are often created automatically as part of continuous integration, so they are updated whenever code changes. When images are built to be shared between developers and machines, they need to be stored somewhere, and that’s where a container registry comes in.

Developers may want to maintain their own registry for private, company images, or for throw-away images used only in testing.

a- Which Docker registry can you use?

First possibility
You can build your own private registry on your server.

With this choice, you have to maintain the server hosting the registry yourself and make sure the service doesn’t go down.

Second possibility
You can use online services such as Docker Hub or the GitLab Container Registry to store your Docker images.

With these online registry services, you don’t need to maintain a server. They also let you set up either a private or a public registry.

b- How to create a private registry on your own server?

In my example, I set up the registry on one of the machines (10.10.25.165).

— Start the registry as a service:

$ docker service create --name registry --publish published=5000,target=5000 registry:2
  • Check the registry status
$ docker service ls

You will see output similar to the following:

c- How to create a registry on Docker Hub or GitLab?
In this article, I won’t cover creating a registry on Docker Hub or GitLab, but there are plenty of online tutorials on the subject.

3- Let’s build an example Python application image

We will create and build the app on the dockernode1 machine and tag the image for our registry

— Create a directory for the project:

$ mkdir stackdemo
$ cd stackdemo

— Create a file called app.py in the project directory and paste this in:

from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
    count = redis.incr('hits')
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, debug=True)

— Create a file called requirements.txt and paste these two lines in:

flask
redis

— Create a file called Dockerfile and paste this in:

FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

— Create a file called docker-compose-dist-app-registry-build.yml and paste this in:

version: '3'

services:
  web:
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"

The image for the web app is built using the Dockerfile defined above. It’s also tagged with 127.0.0.1:5000, the address of the registry created earlier. This is important when distributing the app to the swarm.

— Build the app image

$ docker-compose -f docker-compose-dist-app-registry-build.yml build

4- In the previous step, our application image was built; let’s now push it to the registry.

Still in our application folder, run this command:

$ docker-compose -f docker-compose-dist-app-registry-build.yml push

You will see output similar to the following:
Pushing web (127.0.0.1:5000/stackdemo:latest)…
The push refers to a repository [127.0.0.1:5000/stackdemo]
5b5a49501a76: Pushed
be44185ce609: Pushed
bd7330a79bcf: Pushed
c9fc143a069a: Pushed
011b303988d2: Pushed
latest: digest: sha256:a81840ebf5ac24b42c1c676cbda3b2cb144580ee347c07e1bc80e35e5ca76507 size: 1372

5- Let’s deploy the stack on our swarm

Still in our application folder,

— Create a file called docker-compose-dist-deploy.yml and paste this in:

version: '3'

services:
  web:
    image: 127.0.0.1:5000/stackdemo
    hostname: '{{.Node.Hostname}}'
    ports:
      - "8000:8000"
    deploy:
      mode: replicated
      replicas: 6
      restart_policy:
        condition: on-failure

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

  visualizer:
    image: dockersamples/visualizer
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]

— Create the stack with docker stack deploy:

$ docker stack deploy --compose-file docker-compose-dist-deploy.yml stackdemo

You will see output similar to the following:

Ignoring unsupported options: build

Creating network stackdemo_default
Creating service stackdemo_web
Creating service stackdemo_redis

— Check that it’s running with:

$ docker stack services stackdemo

You will see output similar to the following:

— Check that our app is running on our nodes:
Thanks to Docker’s built-in routing mesh, you can access port 8000 on any node in the swarm and get routed to the app:

$ curl http://10.10.25.165:8000

The output:

Hello World! I have been seen 1 times.

$ curl http://10.10.25.166:8000

The output:

Hello World! I have been seen 2 times.

$ curl http://10.10.25.167:8000

The output:

Hello World! I have been seen 3 times.
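If you prefer to check the counter programmatically rather than eyeballing curl output, a small hypothetical smoke-test helper could do it. The function names are mine, and the commented example assumes the stack is running on this article’s node IPs:

```python
import re
import urllib.request

def parse_hits(body: str) -> int:
    """Extract the hit count from the app's 'seen N times' response."""
    m = re.search(r"seen (\d+) times", body)
    if m is None:
        raise ValueError("unexpected response body: %r" % body)
    return int(m.group(1))

def fetch_hits(url: str) -> int:
    """Fetch a node's URL and return the current Redis-backed counter."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return parse_hits(resp.read().decode())

# With the stack deployed, hitting two different nodes should show the
# shared counter increasing by one (uncomment to run against your swarm):
# first = fetch_hits("http://10.10.25.165:8000")
# second = fetch_hits("http://10.10.25.166:8000")
# assert second == first + 1
```

The counter keeps increasing no matter which node you hit because every web replica talks to the single shared redis service.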

— Use the visualizer service we created to inspect the nodes and see how your application replicas are spread across the three servers.

You can shut down any machine to check that your application is still available on the other machines.

I turned off the first server (10.10.25.165): my application is still available on the other nodes, and the count keeps going.

$ curl http://10.10.25.167:8000

The output:

Hello World! I have been seen 4 times.

$ curl http://10.10.25.166:8000

The output:

Hello World! I have been seen 5 times.

6- To bring the stack down, use:

$ docker stack rm stackdemo

7- To remove the registry:

$ docker service rm registry

8- To remove a node from the swarm

$ docker swarm leave --force

Node left the swarm.

If you want to go further with the configuration

Below is my top-level architecture, which summarizes the path of a user request:

I use the following elements:

- a domain name
- at least two public IPs
- a load balancer

1- I create multiple type A DNS records:

www.example.com ==> Public IP 1

www.example.com ==> Public IP 2

www.example.com ==> Public IP n

2- I configure a load balancer:

I add my n public IP addresses to my load balancer. This configuration depends on the load balancer you use and on your web hosting provider.

To summarize, below is the journey of a request made by a user

  • The user sends a request to the domain www.example.com
  • Since we registered multiple DNS records for the same domain with multiple IP addresses, DNS uses round-robin to choose an available IP address, and the request reaches the load balancer. This already makes the solution more available at the DNS level, because multiple IP addresses serve requests for the domain.
  • The load balancer listens on these public IP addresses and redirects requests to the Docker swarm.
  • Docker Swarm chooses an available Docker node to respond. Even when one of the nodes is unavailable, for example 10.10.25.165, the application continues to function normally thanks to the other two Docker nodes.
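The DNS round-robin step can be observed from any client: resolving a name that has several A records returns several addresses, and clients can pick any of them. A sketch using only the Python standard library (the hostname in the comment is the placeholder domain from above):

```python
import socket

def resolve_all(hostname: str, port: int = 80) -> list:
    """Return every IPv4 address a hostname resolves to, without
    duplicates. With multiple A records, different clients end up on
    different addresses, spreading load before traffic reaches us."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET,
                               socket.SOCK_STREAM)
    seen, addrs = set(), []
    for *_, sockaddr in infos:
        ip = sockaddr[0]
        if ip not in seen:
            seen.add(ip)
            addrs.append(ip)
    return addrs

# Example: resolve_all("www.example.com") would list Public IP 1..n.
```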

In a web browser, the user only knows the domain name. Thus, even if some Docker nodes are lost, the application continues to function normally.

Thanks for reading. Feel free to leave comments; I will take them into consideration to improve this post.

What’s next?

In another article, I will show how to scale Redis, and how to monitor our stack and automatically trigger alerts (emails, SMS) when one of our nodes fails.
