Hello Swarm mode

Jetsada Machom
Product & Engineering @ HotelQuickly
6 min read · Sep 23, 2016

Docker Swarm was a separate project alongside Docker for a while. It was finally merged into Docker Engine starting from version 1.12, under the slightly new name Swarm mode. This is very similar to its sibling Docker Compose (which has not been merged yet, but is packaged together in Docker Toolbox). It took both of them a while to prove that they had enough value to become part of Docker Engine.

But what is it?

Imagine we want to run a container on multiple hosts without any tooling. We would have to do it one by one, like this:

(node01)$ docker run -d nginx
(node02)$ docker run -d nginx
(node03)$ docker run -d nginx
...

But with Docker Swarm mode, you can do it in a single command:

(manager)$ docker service scale nginx=40

And now 40 containers are running across your servers, without any manual work on your part.

So, in summary, Swarm mode is cluster management for Docker.

Visualization of a general application cluster.

What does it look like?

(Each whale icon represents one Docker machine.)

The diagram above shows the high-level structure of a cluster.

Before we go beyond this point: in my view, the best way to learn how it works is simply to build one. Let's quickly do that now.

My current setup:
- OS X El Capitan
- Docker Toolbox for Mac (Docker Machine + VirtualBox)
- Docker Engine version 1.12.1 (requires 1.12+)

Let's start by creating three Docker Machines:

docker-machine create --driver virtualbox node1
docker-machine create --driver virtualbox node2
docker-machine create --driver virtualbox node3
docker-machine ls
Three new Docker Machines

These commands will create three new Docker Machines named node1, node2, and node3 in our VirtualBox. This basically simulates having three servers in the real world.

The last command should list all of the Docker Machines we just created.

Now we need to understand a bit about the Docker CLI in our terminal. Docker is designed to be interacted with via a REST API. For example, when we execute the command

docker ps

in our terminal, what the CLI basically does is make an HTTP request to the Docker Engine, get a JSON result back, then parse it and show the result in the terminal.
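If you're curious, you can make roughly that same request yourself. A minimal sketch, assuming the default daemon socket path on Linux/macOS; it just prints a note when no daemon is present:

```shell
# The Docker CLI talks to the Engine's REST API. `docker ps` is roughly
# this HTTP GET against the daemon's Unix socket.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  RESP=$(curl -s --unix-socket "$SOCK" http://localhost/containers/json)
else
  RESP="no Docker daemon socket at $SOCK"
fi
echo "$RESP"
```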

OK, so far so good, right?
Next we'll start to create a cluster, which basically means we have to create a cluster manager. Our rough plan is to make
node1 ~ manager
node2 ~ worker
node3 ~ worker

So let’s make node1 a manager.

eval $(docker-machine env node1)
docker cli attached to node1

This command makes sure that from now on the Docker CLI will call the REST API on node1. That means whatever we run with the docker command will take effect on node1.
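Under the hood, `docker-machine env node1` just prints a few export statements that point the CLI at node1's daemon. A sketch with illustrative values (docker-machine prints the real ones for your VM):

```shell
# Roughly what `eval $(docker-machine env node1)` puts into your shell.
# The IP, port, and cert path below are illustrative placeholders.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.103:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/node1"
export DOCKER_MACHINE_NAME="node1"
# Every later `docker ...` call now goes to this host:
echo "$DOCKER_HOST"
```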

Next we have to get the IP address of node1. We can get that information from

docker-machine ls

OK, now we're ready to create the cluster manager:

docker swarm init --advertise-addr 192.168.99.103
cluster manager created

node1 is now our cluster manager, and by default it's also assigned as a worker in the cluster (don't worry, we can adjust this later). After you run the command, it'll return another command with a token. Please copy that command, because we'll use it to join the cluster.
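If you lose that output, you don't need to re-init the swarm: the manager can reprint the join command at any time. A small sketch; it degrades to a short note when no local daemon or swarm is available:

```shell
# Reprint the worker join command (token included) from the manager.
# Falls back to a note when docker or a daemon isn't available here.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  MSG=$(docker swarm join-token worker 2>&1)
else
  MSG="docker daemon not available on this machine"
fi
echo "$MSG"
```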

Next we'll make node2 join our cluster. But first we have to point the Docker CLI at node2:

eval $(docker-machine env node2)
docker cli attached to node2

Next, we just run the command that we got from node1:

docker swarm join \
--token SWMTKN-1-2rhyq6dn1dmcok31k763eccl6yg11x9w49kjihe956cu162v6u-a1ehtog2sbkg6qm80yud9q4om \
192.168.99.103:2377
node2 joined cluster

Now node2 has joined the cluster. Easy, right? Let's do the same with node3.

eval $(docker-machine env node3)
docker swarm join \
--token SWMTKN-1-2rhyq6dn1dmcok31k763eccl6yg11x9w49kjihe956cu162v6u-a1ehtog2sbkg6qm80yud9q4om \
192.168.99.103:2377
node3 joined cluster

Now we have to go back to node1 and check that everything connected correctly:

eval $(docker-machine env node1)
docker node ls

At this point we have created a cluster of 3 Docker Engine. Yay!

To demonstrate the power of the cluster manager, I quickly made a microservice application that contains 3 parts: worker, redis, and webui. The flow: the worker generates some hash data and stores it in redis, then we can access the webui to see how the progress goes.

OK, first you'll have to create a network in the cluster:

docker network create --driver overlay mynet
network created within cluster

This command creates an overlay network named “mynet” within the cluster, to isolate our services and enable automatic DNS discovery, so containers can be resolved by service name (instead of IP).

Next we'll have to create the services (aka Auto Scaling groups in AWS):

docker service create --name redis --network mynet redis
docker service create --name worker --network mynet zinuzoid/docker-swarm-tutorial-worker
docker service create --name webui -p 80:80 --network mynet zinuzoid/docker-swarm-tutorial-webui
docker service ls

This will create a service for each of the images I prepared (my microservice application). By default, each service starts with 1 container, and the manager decides which node should run it. The last command shows the list of services we just created. It'll look something like this:

6tylre1bkfrp  webui   1/1  zinuzoid/docker-swarm-tutorial-webui
6uravw1n4y3d  worker  1/1  zinuzoid/docker-swarm-tutorial-worker
7dn95f4xdjy4  redis   1/1  redis

Notice the “1/1” numbers: they mean “active/target” containers for each service. You may get a “0/1” result, which means no container is active yet; they are probably still initializing. If you wait a few minutes it should finally reach 1/1.
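A quick way to check readiness from a script is to split that REPLICAS column on “/” and compare the two halves. A sketch run against sample output pasted inline (you can pipe real `docker service ls` output through the same awk):

```shell
# Flag services whose "active/target" replicas haven't converged yet.
# Sample `docker service ls`-style lines are pasted inline for illustration.
OUT=$(printf '%s\n' \
  '6tylre1bkfrp  webui   1/1  zinuzoid/docker-swarm-tutorial-webui' \
  '6uravw1n4y3d  worker  0/1  zinuzoid/docker-swarm-tutorial-worker' \
  '7dn95f4xdjy4  redis   1/1  redis' |
awk '{ split($3, r, "/"); if (r[1] != r[2]) print $2 " not ready (" $3 ")" }')
echo "$OUT"
```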

http://192.168.99.103/

Now we can access the webui via the IP of any Docker Engine in the cluster. This is one of the magic parts of Swarm mode: the routing mesh makes this possible. We don't have to know which node the webui container is actually running on; Docker Engine will forward our request to the right container, hassle-free from our side. It should look something like this. (P.S. you can get the IPs from docker-machine ls.)
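To see the routing mesh in action, you can hit port 80 on every node; each one answers regardless of where the webui task runs. A sketch, where 192.168.99.103 is node1 from this tutorial and the other two IPs are illustrative guesses (substitute what `docker-machine ls` shows), with a short timeout so it fails fast when the VMs aren't up:

```shell
# Port 80 is published on every node in the swarm, not only the one
# actually running webui. "000" means the host didn't answer.
OUT=$(for ip in 192.168.99.103 192.168.99.104 192.168.99.105; do
  curl -s -m 2 -o /dev/null -w "%{http_code} $ip\n" "http://$ip/" || true
done)
echo "$OUT"
```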

The webui should show something like this. Now the fun part: we're about to scale our workers to speed up the mining. And here's the magic command:

docker service scale worker=3

Now Swarm will start 2 more containers to reach the target (3) that we just set, and in the webui you should see the mining speed increase.

If you want to see where our worker containers are running, just run this:

docker service ps worker

It'll show you the list of tasks within the service and where each one is running.

3se7lcj2hkwayq4zbdfjo78a8  worker.1  zinuzoid/docker-swarm-tutorial-worker  node3  Running  Running 36 seconds ago
9gexguhu1pi29wqke5ei8obly  worker.2  zinuzoid/docker-swarm-tutorial-worker  node1  Running  Running 2 seconds ago
597gcaeudwq6ct5dpjwncyegw  worker.3  zinuzoid/docker-swarm-tutorial-worker  node2  Running  Running 5 minutes ago
Containers are spread across each node automatically

Congratulations, you just created a Docker cluster using Swarm mode. I hope you now have an idea of what it is and how it works. Swarm mode still has lots of potential, far more advanced than what we just did. Here are some highlights:

- Decentralized design
- Declarative service model
- Scaling
- Desired state reconciliation
- Multi-host networking
- Service discovery
- Load balancing
- Secure by default
- Rolling updates

You can find out more at https://docs.docker.com/engine/swarm/
