In Detail about Docker Swarm

Tasnuva Zaman
HackerNoon.com
9 min read · Apr 12, 2019


Docker Swarm is a popular Orchestration solution.

Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. The main responsibilities of a container orchestrator include provisioning and deploying containers, scheduling them onto hosts, scaling them up and down, load balancing traffic between them, handling networking, and monitoring container health.

Some open-source orchestration solutions are Docker Swarm, Kubernetes, Apache Mesos, OpenShift, and Nomad.

Container Ecosystem Layers

Architecture of Swarm

The main architecture components of Swarm include:

Swarm:

  1. A set of nodes with at least one manager node and several worker nodes, which can be virtual or physical machines.

Service:

  1. The tasks, defined by a swarm administrator, that manager or worker nodes must perform.
  2. It defines which container images the swarm should use and which commands the swarm will run in each container.

A service in this context is similar to a microservice; for example, it’s where you’d define configuration parameters for an nginx web server running in your swarm. Parameters such as the number of replicas are also defined in the service definition.

Manager node:

The manager node provides several functions after you deploy an application:

  1. It delivers work (in the form of tasks) to worker nodes.
  2. It manages the state of the swarm to which it belongs.

Worker nodes:

  1. Worker nodes run tasks distributed by the manager node in the swarm.
  2. Each worker node runs an agent that reports back to the manager node about the state of the tasks assigned to it, so the manager node can keep track of the services and tasks running in the swarm.

Task:

  1. Tasks are Docker containers that execute the commands you defined in the service.
  2. Manager nodes assign tasks to worker nodes, and after this assignment, the task cannot be moved to another worker. If the task fails in a replica set, the manager will assign a new version of that task to another available node in the swarm.
Docker Swarm Architecture

Create Docker Swarm

Step 1|| Install Docker Machine

Docker Machine is a tool that lets you install Docker Engine on virtual hosts and manage those hosts with docker-machine commands.

To install Docker Machine on Ubuntu, use the following command:
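A typical installation looks like this (v0.16.2 is assumed here; check the Docker Machine releases page for the current version):

```shell
# Download the docker-machine binary for this OS/architecture
# (v0.16.2 assumed -- substitute the latest release tag)
base=https://github.com/docker/machine/releases/download/v0.16.2
curl -L $base/docker-machine-$(uname -s)-$(uname -m) > /tmp/docker-machine
chmod +x /tmp/docker-machine
sudo mv /tmp/docker-machine /usr/local/bin/docker-machine
```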

Step 2|| Create Docker machine to act as nodes for docker swarm

Create one manager machine and two worker machines.

Create manager node:

Create two worker nodes:
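Assuming the VirtualBox driver, the three machines might be created like this (the names manager1, worker1, and worker2 are used for the rest of this article):

```shell
# Create one manager machine
docker-machine create --driver virtualbox manager1

# Create two worker machines
docker-machine create --driver virtualbox worker1
docker-machine create --driver virtualbox worker2
```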

Note: If you encounter the docker-machine error “Error with pre-create check: exit status 126”, you will have to install VirtualBox on your machine.

Check the list of created machines:
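For example:

```shell
# List all machines with their driver, state, and URL
docker-machine ls
```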

To check the IP address of a specific machine, run:
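For example, for the manager machine:

```shell
# Print the IP address of the manager1 machine
docker-machine ip manager1
```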

Step 3|| Connect manager and worker machine from terminal by ssh

Open three terminal windows and run the following commands to connect to the manager1, worker1, and worker2 nodes:
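One command per terminal window:

```shell
# Run each of these in its own terminal
docker-machine ssh manager1
docker-machine ssh worker1
docker-machine ssh worker2
```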

Step 4|| Initialize Docker swarm

Initialize Docker swarm on the manager node, advertising the manager1 IP address, by running the following command:
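A sketch of the init command; replace the placeholder with the address printed by docker-machine ip manager1:

```shell
# Run on manager1; <MANAGER1-IP> is a placeholder for its IP address
docker swarm init --advertise-addr <MANAGER1-IP>
```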

Note: this will only work on the swarm manager, not on a worker machine.

Run docker node ls to verify the manager. It will only work on the swarm manager, not on a worker. You can confirm this by running the command from a worker machine in your terminal if you want.

Note: This docker swarm init command generates a join token. The token makes sure that no malicious nodes join the swarm. You need to use this token to join the other nodes to the swarm.

Step 5 || Join worker in the swarm

On both worker1 and worker2, copy and run the docker swarm join command that was output to your console by the previous command:
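The join command has the following general shape; the token and address below are placeholders that come from your own docker swarm init output:

```shell
# Run on worker1 and worker2, using the values printed by `docker swarm init`
docker swarm join --token <TOKEN> <MANAGER1-IP>:2377
```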

Hooray! You now have a three-node swarm! Verify it by running the command below from your manager machine:
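```shell
# Lists all nodes in the swarm; only works on a manager
docker node ls
```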

3-node swarm

Step 6|| Check for more info

On the manager, run standard docker commands to get more information about the swarm.
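For example, docker info on the manager now includes a Swarm section showing the node's role and the swarm's state:

```shell
# General daemon information, including a Swarm: active section
docker info
```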

Step 7 || Run or deploy containers(service) on docker swarm

Now that we have our three-node Swarm cluster initialized, we’ll deploy some containers. To run containers on a Docker Swarm, we need to create a service.

Note: A service is an abstraction that represents multiple containers of the same image deployed across a distributed cluster.

Let’s do a simple example using NGINX. For now, we’ll create a service with one running container, but we will scale up later.
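A minimal sketch, assuming we name the service web and publish port 80 (both names are choices for this article, not requirements):

```shell
# Run on the manager: one NGINX replica, port 80 published on every node
docker service create --name web --publish 80:80 --replicas 1 nginx
```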

This command statement is declarative, and Docker Swarm will try to maintain the state declared in this command unless explicitly changed by another docker service command.

  1. Inspect the service (only from the manager node):
Inspect the service
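Assuming the service name web from the create step:

```shell
# Summary of all services in the swarm
docker service ls

# Detailed, human-readable view of one service
docker service inspect --pretty web
```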

2. Check the running container of the service with the following command:
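Assuming the service name web:

```shell
# Lists the service's tasks and the node each one is running on
docker service ps web
```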

In our case the node column will show manager1 instead of node1.

A task is another abstraction in Docker Swarm that represents the running instances of a service. In this case, there is a 1–1 mapping between a task and a container.

3. Test the service

Because of the routing mesh, it is possible to send a request to any node of the swarm on port 80. This request will be automatically routed to the one node that is running the NGINX container.

Try this command on each node:
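For example, from your host, using docker-machine ip to resolve each node's address (machine names assumed from earlier):

```shell
# All three should return the NGINX welcome page,
# regardless of which node actually runs the container
curl http://$(docker-machine ip manager1)
curl http://$(docker-machine ip worker1)
curl http://$(docker-machine ip worker2)
```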

Step 8 || Scaling the service

In production, we might need to handle large amounts of traffic to our application, so we’ll learn how to scale. We’ll scale up our application with the following steps:

  1. Update our service with an updated number of replicas:

Use the docker service update command to update the NGINX service that we created previously to include 5 replicas. This defines a new state for the service.
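Assuming the service name web:

```shell
# Declare a new desired state of 5 replicas
docker service update --replicas=5 web
```

docker service scale web=5 is an equivalent shorthand for the same change.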

The following events occur when you run this command:

  • The state of the service is updated to 5 replicas, which is stored in the swarm’s internal storage.
  • Docker Swarm recognizes that the number of replicas that is scheduled now does not match the declared state of 5.
  • Docker Swarm schedules 5 more tasks (containers) in an attempt to meet the declared state for the service.

The swarm actively checks whether the desired state equals the actual state and attempts to reconcile if needed.

2. Check the running instances:

Within a few seconds, the swarm will finish scaling. Run the following command to check the running instances:
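Assuming the service name web, all 5 tasks should now show as Running:

```shell
# One row per task, spread across the three nodes
docker service ps web
```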

3. Send a lot of requests to port 80 of any node.

Now when you send requests on port 80, the routing mesh has multiple containers to route requests to. The routing mesh acts as a load balancer for these containers, alternating which container each request is routed to.

Note: it doesn’t matter which node you send the requests to. There is no connection between the node that receives the request and the node that the request is routed to.
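A quick way to generate a batch of requests from the host (service name web and machine name manager1 assumed from earlier):

```shell
# Fire 20 requests at the swarm through the manager's IP;
# the routing mesh spreads them across the replicas
for i in $(seq 1 20); do
  curl -s -o /dev/null http://$(docker-machine ip manager1)
done
```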

4. Check the aggregated logs for the service:

Another easy way to see which nodes those requests were routed to is to check the aggregated logs. You can get aggregated logs for the service by using the command docker service logs [service name].
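In our case (service name web assumed):

```shell
# Interleaved logs from every replica, prefixed with task and node names
docker service logs web
```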

Step 9|| Update the service

Now that you have your service deployed, you’ll roll out a new release of your application. You are going to update NGINX to version 1.13.

  1. Run the update command:
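Assuming the service name web, the image can be rolled forward like this:

```shell
# Rolling update of every task to the nginx:1.13 image
docker service update --image nginx:1.13 web
```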

2. Check the update:
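The task list shows the old containers shut down and new ones running the updated image (service name web assumed):

```shell
# The IMAGE column should now read nginx:1.13 for the running tasks
docker service ps web
```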

You have successfully updated your application!

Step 10 || Shut down a node

If you want to shut down a node, you can do it by running the command below:
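One common way is to drain the node, so the scheduler moves its tasks onto the remaining nodes (run from the manager; worker1 is an example node name):

```shell
# Take worker1 out of service; its tasks are rescheduled elsewhere
docker node update --availability drain worker1
```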

Step 11 || Remove a service

You can remove a service from all the machines by running the following command:
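Assuming the service name web:

```shell
# Run on the manager: removes the service and stops all its tasks
docker service rm web
```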

Verify the removed service:
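The service should no longer appear in the list:

```shell
# An empty list confirms the removal
docker service ls
```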

Step 12 || Leave the swarm

If you want a worker node to leave the swarm, run the following command from the desired worker node:
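```shell
# Run on the worker that should leave
# (a manager would additionally need the --force flag)
docker swarm leave
```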

Step 13 || Stop and remove a machine

If you want to stop a machine run:

If you want to remove the machine run:
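Taking worker2 as an example (run these from your host, per the note below):

```shell
# Power off the VM
docker-machine stop worker2

# Delete the VM entirely
docker-machine rm worker2
```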

Note: You must run the stop and rm commands from your host terminal, not from inside the manager or worker machines.

You can check my previous article to see how to dockerize a Flask application by clicking here: https://medium.com/@tasnuva2606/dockerize-flask-app-4998a378a6aa?source=friends_link&sk=93569352f150bf5e6141abc152654734.

Congratulations! By now you should have a solid understanding of Docker Swarm.
