In Detail about Docker Swarm

Tasnuva Zaman
Apr 12, 2019 · 9 min read

Docker Swarm is a popular Orchestration solution.

Container Orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. The main responsibilities of container orchestrations are:

1. Provisioning and deployment of containers
2. Redundancy and availability of containers
3. Cluster management
4. Scaling up or removing containers to spread application load evenly across the host infrastructure
5. Movement of containers from one host to another if there is a shortage of resources on a host, or if a host dies
6. Allocation of resources between containers
7. Exposure of services running in a container to the outside world
8. Load balancing and service discovery between containers
9. Health monitoring of containers and hosts
10. Configuration of an application in relation to the containers running it

Some open-source orchestration solutions are Docker Swarm, Kubernetes, Apache Mesos, OpenShift, and Nomad.

Container Ecosystem Layers

Architecture of Swarm


Node:

  1. A swarm consists of a set of nodes, with at least one manager node and several worker nodes, which can be virtual or physical machines.

Service:

  1. A service is the set of tasks, defined by the swarm administrator, that the manager or agent nodes must perform.
  2. It defines which container images the swarm should use and which commands the swarm will run in each container.

A service in this context is similar to a microservice; for example, it's where you'd define the configuration parameters for an nginx web server running in your swarm. Parameters such as the number of replicas are also defined in the service definition.
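One common way to write such a service definition down is a Compose-format stack file. The sketch below is purely illustrative (the service name `web`, the file contents, and the replica count are assumptions, not taken from this article); it declares an nginx service with three replicas:

```yaml
version: "3.7"
services:
  web:
    image: nginx:1.12      # which container image the swarm should use
    ports:
      - "80:80"            # published through the swarm's routing mesh
    deploy:
      replicas: 3          # desired number of tasks (containers)
      restart_policy:
        condition: on-failure
```

You would deploy a file like this with `docker stack deploy -c docker-compose.yml mystack`; later in this article the same kind of service is created imperatively with `docker service create`.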

Manager node:

The manager node provides several functions after you deploy an application, such as:

  1. It delivers work (in the form of tasks) to worker nodes.
  2. It manages the state of the swarm to which it belongs.

Worker nodes:

  1. Worker nodes run tasks distributed by the manager node in the swarm.
  2. Each worker node runs an agent that reports back to the manager node about the state of the tasks assigned to it, so the manager node can keep track of services and tasks running in the swarm.


Tasks:

  1. Tasks are Docker containers that execute the commands you defined in the service.
  2. Manager nodes assign tasks to worker nodes; after this assignment, a task cannot be moved to another worker. If a task in a replica set fails, the manager assigns a new instance of that task to another available node in the swarm.
Docker Swarm Architecture

Create Docker Swarm

Step 1|| Install Docker Machine

Docker Machine is a tool that lets you install Docker Engine on virtual hosts and manage those hosts with docker-machine commands.

To install Docker Machine on Ubuntu, use the following command:

$ base= && curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine && sudo install /tmp/docker-machine /usr/local/bin/docker-machine

Step 2|| Create Docker machines to act as nodes for the swarm

Create one manager machine and two worker machines.

Create manager node:

$ sudo docker-machine create --driver virtualbox manager1

Create two worker nodes:

1. $ sudo docker-machine create --driver virtualbox worker1
2. $ sudo docker-machine create --driver virtualbox worker2

Note: If you encounter the error docker-machine: Error with pre-create check: "exit status 126", you will have to install VirtualBox on your machine.

Check the list of created machines:

$ sudo docker-machine ls
# we have one 'manager' machine ----> manager1
# and two 'worker' machines ------> worker1 and worker2

To check the IP address of a specific machine, run:

$ sudo docker-machine ip manager1
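As an aside, instead of ssh-ing into each machine (as in the next step), docker-machine can also point your local Docker client at a machine's daemon. This is standard docker-machine behaviour, shown here only as an optional shortcut:

```shell
# print the environment variables needed to talk to manager1's daemon
$ sudo docker-machine env manager1

# configure the current shell so that plain `docker` commands go to manager1
$ eval $(sudo docker-machine env manager1)
```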

Step 3|| Connect to the manager and worker machines from the terminal via ssh

Open 3 terminal windows and run the following commands to connect to the manager1, worker1, and worker2 nodes:

# connect to manager1 node
1. $ sudo docker-machine ssh manager1
# connect to worker1 node
2. $ sudo docker-machine ssh worker1
# connect to worker2 node
3. $ sudo docker-machine ssh worker2

Step 4|| Initialize Docker swarm

Initialize Docker swarm on the manager node by running the following command with manager1's IP address:

Note: this will only work on the swarm manager, not on a worker machine.

# check manager1's IP address
1. $ sudo docker-machine ip manager1
# initialize docker swarm on manager1
2. $ docker swarm init --advertise-addr manager1_ip_address

The output will be something like this:

Swarm initialized: current node (vq7xx5j4dpe04rgwwm5ur63ce) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-50qba7hmo5exuapkmrj6jki8knfvinceo68xjmh322y7c8f0pj-87mjqjho30uue43oqbhhthjui
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Run docker node ls to verify the manager. It will work only on the swarm manager, not on a worker. You can check this by running the command from a worker machine in your terminal if you want.

Note: This docker swarm init command generates a join token. The token makes sure that no malicious nodes join the swarm. You need to use this token to join the other nodes to the swarm.
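If you lose this join command later, you do not have to re-initialize the swarm: the manager can reprint (or rotate) the tokens at any time with the standard join-token subcommand:

```shell
# run on the manager: print the full join command for new workers
$ docker swarm join-token worker

# print the join command for adding extra managers
$ docker swarm join-token manager

# rotate the worker token if you suspect it has leaked
$ docker swarm join-token --rotate worker
```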

Step 5|| Join the workers to the swarm

On both worker1 and worker2, copy and run the docker swarm join command that was output to your console by the previous command:

$ sudo docker swarm join --token SWMTKN-1-50qba7hmo5exuapkmrj6jki8knfvinceo68xjmh322y7c8f0pj-87mjqjho30uue43oqbhhthjui

After running the command you will see the message "This node joined a swarm as a worker."

Hooray! You now have a three-node swarm! Verify it by running the command below from your manager machine:

$ sudo docker node ls
3-node swarm

Step 6|| Check for more info

On the manager, run standard docker commands:

# check the Swarm section for the number of managers, nodes, etc.
1. $ sudo docker info

Step 7|| Run or deploy containers (services) on Docker Swarm

Now that we have our three-node Swarm cluster initialized, we’ll deploy some containers. To run containers on a Docker Swarm, we need to create a service.

Note: A service is an abstraction that represents multiple containers of the same image deployed across a distributed cluster.

Let’s do a simple example using NGINX. For now, we’ll create a service with one running container, but we will scale up later.

$ sudo docker service create --detach=true --name nginx1 --publish 80:80 --mount source=/etc/hostname,target=/usr/share/nginx/html/index.html,type=bind,ro nginx:1.12

# description of flags
1. --detach=true: runs the container in the background.
2. --name: gives a name to the service; in our case it's 'nginx1'.
3. --mount: a trick to make NGINX print out the hostname of the node it's running on.
4. --publish: uses the swarm's built-in routing mesh. In this case, port 80 is exposed on every node in the swarm. The routing mesh routes a request coming in on port 80 to one of the nodes running the container.

This command statement is declarative, and Docker Swarm will try to maintain the state declared in this command unless explicitly changed by another docker service command.
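You can read back the declared state that the swarm is maintaining at any time; docker service inspect shows the stored spec (image, replica count, published ports):

```shell
# human-readable summary of the service's declared state
$ sudo docker service inspect --pretty nginx1

# full stored spec as JSON
$ sudo docker service inspect nginx1
```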

  1. Inspect the service (only from the manager node):

$ sudo docker service ls

You can also inspect the individual nodes:

1. $ docker node inspect worker1
2. $ docker node inspect self
3. $ docker node inspect worker2

2. Check the running container of the service

Check the running tasks of the service with:

$ docker service ps nginx1

In our case the task is running on manager1 instead of node1.

A task is another abstraction in Docker Swarm that represents the running instances of a service. In this case, there is a 1–1 mapping between a task and a container.

3. Test the service

Because of the routing mesh, it is possible to send a request to any node of the swarm on port 80. This request will be automatically routed to the one node that is running the NGINX container.

Try this command on each node:

$ curl localhost:80
# manager1
Note: curl will output the hostname of the node where the container is running. In our case, it is running on manager1.

Step 8 || Scaling the service

In production, we might need to handle large amounts of traffic to our application, so we'll learn how to scale. We'll scale up our application with the following steps:

  1. Update our service with an updated number of replicas:

Use the docker service command to update the NGINX service that we created previously to include 5 replicas. This defines a new state for the service.

$ sudo docker service update --replicas=5 --detach=true nginx1

Following events occur by running this command:

  • The state of the service is updated to 5 replicas, which is stored in the swarm’s internal storage.
  • Docker Swarm recognizes that the number of replicas currently scheduled does not match the declared state of 5.
  • Docker Swarm schedules four more tasks (containers) in an attempt to meet the declared state for the service.

The swarm actively checks whether the desired state equals the actual state and will attempt to reconcile if needed.
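You can watch this reconciliation in action by killing one of the service's containers by hand: the swarm notices that the actual state no longer matches the declared state and schedules a replacement. A sketch (the container ID is a placeholder you must take from your own output):

```shell
# on a node running the service: find a container backing nginx1
$ sudo docker ps --filter name=nginx1

# simulate a failure by force-removing one container
$ sudo docker rm -f <container_id>

# a few seconds later, docker service ps shows the failed task
# and a fresh replacement task scheduled by the swarm
$ sudo docker service ps nginx1
```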

2. Check the running instances:

Within a few seconds, the swarm will finish the job. Run the following command to check the running instances:

$ sudo docker service ps nginx1

3. Send a lot of requests to http://localhost:80.

Now when you send requests on port 80, the routing mesh has multiple containers to route requests to. The routing mesh acts as a load balancer for these containers, alternating where it routes each request.

$ curl localhost:80
$ curl localhost:80
$ curl localhost:80
$ curl localhost:80
$ curl localhost:80

Note: it doesn't matter which node you send the requests to. There is no connection between the node that receives the request and the node that the request is routed to.

4. Check the aggregated logs for the service:

Another easy way to see which nodes those requests were routed to is to check the aggregated logs. You can get aggregated logs for the service by using the command docker service logs [service name].

$ sudo docker service logs nginx1
# you can see that each request was served by a different container

Step 9|| Update the service

Now that you have your service deployed, you'll roll out a release of your application: you are going to update the version of NGINX to 1.13.

  1. Run the update command:

$ sudo docker service update --image nginx:1.13 --detach=true nginx1
# This triggers a rolling update of the swarm.
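By default the rolling update replaces one task at a time. If you want, you can tune the rollout with standard docker service update flags; the values below are illustrative:

```shell
# update 2 tasks at a time, wait 10s between batches,
# and roll back automatically if the update fails
$ sudo docker service update \
    --image nginx:1.13 \
    --update-parallelism 2 \
    --update-delay 10s \
    --update-failure-action rollback \
    --detach=true nginx1
```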

2. Check the update:

$ sudo docker service ps nginx1

You have successfully updated your application!

Step 10|| Drain a node

If you want to take a node out of service, drain it by running the command below. Draining does not shut the machine down; it tells the swarm to stop scheduling new tasks on the node and to reschedule its existing tasks onto other nodes:

1. $ sudo docker node update --availability drain worker1
# worker1 stops receiving tasks; its tasks are rescheduled onto other nodes
2. $ sudo docker node ls
# verify the running nodes
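Draining is reversible. Setting the availability back to active makes the node schedulable again; note that existing tasks are not moved back automatically — only new or rescheduled tasks can land on it:

```shell
# make worker1 schedulable again
$ sudo docker node update --availability active worker1

# verify: worker1's AVAILABILITY column should read 'Active'
$ sudo docker node ls
```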

Step 11|| Remove service

You can remove a service from all the machines by running the following command:

$ sudo docker service rm nginx1
# In our case 'nginx1' is our service name

Verify the removed service:

$ sudo docker service ps nginx1
# no such service: nginx1

Step 12|| Leave the swarm

If you want a worker node to leave the swarm run the following command from the desired worker node:

$ sudo docker swarm leave
# Node left the swarm
# verify it from manager node running:
$ sudo docker node ls
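docker swarm leave only tells the worker to leave; the manager still lists the node, now with status Down. To clean up the node list, remove the stale entry from the manager:

```shell
# on the manager: the departed worker shows up with status Down
$ sudo docker node ls

# remove the stale node entry
$ sudo docker node rm worker1
```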

Step 13|| Stop and remove machines

If you want to stop a machine run:

$ sudo docker-machine stop machineName

If you want to remove the machine run:

$ sudo docker-machine rm machineName

Note: You must run the stop and rm commands from your host machine, neither from the manager nor from a worker machine.

You can check my previous article to see how to dockerize a Flask application by clicking here.

Congratulations!! By now you've got a good understanding of Docker Swarm, haven't you?

how hackers start their afternoons.

