Deploy the Voting App on a Docker Swarm using Compose version 3

With Docker 1.13, it’s now possible to deploy a stack from a docker-compose file. Let’s test that and deploy the Voting App on a 3-node swarm.

Creation of the 3 nodes

Using Docker Machine, we will start by creating the nodes that will be part of our cluster. Those nodes will be named node1, node2 and node3 (not very original, I have to admit). We will use the virtualbox driver of Docker Machine to run everything locally.

$ docker-machine create --driver virtualbox node1
$ docker-machine create --driver virtualbox node2
$ docker-machine create --driver virtualbox node3
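Equivalently, the three machines can be created in one go with a small shell loop (a minimal sketch, assuming a Bash-compatible shell):

# same three docker-machine create calls as above, just wrapped in a loop
$ for node in node1 node2 node3; do docker-machine create --driver virtualbox $node; done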

Let’s check that everything is fine and get the IP addresses of the nodes.

$ docker-machine ls
NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
node1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.1
node2   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.13.1
node3   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.13.1
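The IP of a single node can also be retrieved directly with docker-machine ip, which will come in handy later to point a browser or curl at a given node:

$ docker-machine ip node1
192.168.99.100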

Swarm creation

1. Initialization

From node1 initialize the Swarm with the following command.

docker@node1:~$ docker swarm init --advertise-addr eth0
Swarm initialized: current node (lf2kfkj0442vplsxmr05lxjq) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-5g6cl3sw2k76t76o19tc45cnk2b90f9e6vl7rj9iovdj0328o8-a1d8uz0jpya9ijhw2788bqb2p \
192.168.99.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Note: if there are several network interfaces, the --advertise-addr option needs to be provided to specify which one the nodes should use to communicate with each other.

2. Adding a worker node

In the initialization step, the command to add a worker node was provided. Run this command from node2 to add it to the cluster. The command should look like the following:

docker@node2:~$ docker swarm join --token SWMTKN-1-5g6cl3sw2k76t76o19tc45cnk2b90f9e6vl7rj9iovdj0328o8-a1d8uz0jpya9ijhw2788bqb2p 192.168.99.100:2377
This node joined a swarm as a worker.
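Note: if the join command printed during the initialization is no longer at hand, it can be displayed again at any time from a manager node:

docker@node1:~$ docker swarm join-token worker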

3. Adding a manager node

To add a manager, we need to ask the current leader (node1) to provide a manager-specific token. We can get it by issuing the following command on node1.

docker@node1:~$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-07990ineggmmqgp2zdh7w85y26wd494wwa0f0hjqckim1jq11x-auytk3hyxqijes0mpanl5yax1 \
192.168.99.100:2377

Using the token provided, join node3 to the cluster.

docker@node3:~$ docker swarm join --token SWMTKN-1-07990ineggmmqgp2zdh7w85y26wd494wwa0f0hjqckim1jq11x-auytk3hyxqijes0mpanl5yax1 \
192.168.99.100:2377
This node joined a swarm as a manager
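Note: another way to end up with a second manager would have been to join node3 as a worker first and then promote it from an existing manager, for instance:

docker@node1:~$ docker node promote node3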

4. Checking the state of the cluster

docker@node1:~$ docker node ls
ID                         HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
1m9rdh9hlkktiwaz3vweeppp   node3      Ready    Active         Reachable
lf2kfkj0442vplsxmr05lxjq   node1      Ready    Active         Leader
pes38pa6w8xteze6dj9lxx54   node2      Ready    Active

Everything is fine, node1 is the leader, node2 is a worker, and node3 is a manager node in Reachable state (which means it’s healthy). Let’s jump to the Voting App.
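To double-check a single node, docker node inspect provides the full details; for instance, a Go template can be used to extract the manager reachability of node3 (the template below is just one way to filter the output):

docker@node1:~$ docker node inspect node3 --format '{{ .ManagerStatus.Reachability }}'
reachable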

The architecture of the Voting App

The Voting App is a micro-services application that is made of 5 services.

Docker’s voting app architecture (https://github.com/docker/example-voting-app)

- vote: front-end that enables a user to choose between a cat and a dog

- redis: database where votes are stored

- worker: service that gets votes from redis and stores the results in a Postgres database

- db: the Postgres database in which the vote results are stored and from which the result front-end retrieves them

- result: front-end displaying the results of the vote

Get the Voting App’s compose file

The Voting App has several compose files, as we can see in the GitHub repository.

The most recently added file is docker-stack.yml. It contains a lot of options that illustrate the latest features of Docker 1.13 and make it possible to deploy the application as a stack (a group of services). We will download this file and run it against our Swarm using the new docker stack deploy command.

$ curl -O https://raw.githubusercontent.com/docker/example-voting-app/master/docker-stack.yml

The content of the file is the following one:

version: "3"
services:
redis:
image: redis:alpine
ports:
- "6379"
networks:
- frontend
deploy:
replicas: 2
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
db:
image: postgres:9.4
volumes:
- db-data:/var/lib/postgresql/data
networks:
- backend
deploy:
placement:
constraints: [node.role == manager]
vote:
image: dockersamples/examplevotingapp_vote:before
ports:
- 5000:80
networks:
- frontend
depends_on:
- redis
deploy:
replicas: 2
update_config:
parallelism: 2
restart_policy:
condition: on-failure
result:
image: dockersamples/examplevotingapp_result:before
ports:
- 5001:80
networks:
- backend
depends_on:
- db
deploy:
replicas: 1
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
worker:
image: dockersamples/examplevotingapp_worker
networks:
- frontend
- backend
deploy:
mode: replicated
replicas: 1
labels: [APP=VOTING]
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 120s
placement:
constraints: [node.role == manager]
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
stop_grace_period: 1m30s
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
frontend:
backend:
volumes:
db-data:

Note: there are 6 services defined in this file, while only 5 services appear in the Voting App architecture. The additional one is the visualizer, a great tool with a clean interface that shows on which node the tasks of each service are deployed; we will see it in action soon.

Service configuration

From the previous file, we can see that each service has a deploy key. This key is new and is taken into account as of Docker 1.13. It makes it possible to specify options for the deployment of the services in the context of a Swarm.

Let’s take the worker service as an example.

deploy:
  mode: replicated
  replicas: 1
  labels: [APP=VOTING]
  restart_policy:
    condition: on-failure
    delay: 10s
    max_attempts: 3
    window: 120s
  placement:
    constraints: [node.role == manager]

This declares the worker service to be in replicated mode (as opposed to global mode) with only one replica. This means that only one instance of the worker service will be created.

The restart policy indicates that, in case of failure, the service will be restarted at most 3 times, with a delay of 10 seconds between attempts, and that each restart has a 120-second window to be considered successful.

The placement option is the one we will focus on here, as it defines where the service’s task (only one here) needs to be deployed. In the case of the worker, it should be deployed on a manager node; let’s check this.

Deployment of the stack

We will deploy the stack from a manager node by running the following commands. We first point our local Docker client at the Docker daemon of node1, and then run the deploy command.

$ eval $(docker-machine env node1)
$ docker stack deploy -c docker-stack.yml vote
Creating network vote_backend
Creating network vote_default
Creating network vote_frontend
Creating service vote_worker
Creating service vote_visualizer
Creating service vote_redis
Creating service vote_db
Creating service vote_vote
Creating service vote_result

Note: all the service names are prefixed with the name of the stack (vote in this example).
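Note also that docker stack deploy is idempotent: if we modify docker-stack.yml (for instance to change the number of replicas of a service) and run the very same command again, the existing services are updated in place instead of the stack being recreated.

$ docker stack deploy -c docker-stack.yml vote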

Check the existing stack

Let’s check that our vote stack is there and that all the services are listed.

docker@node1:~$ docker stack ls
NAME  SERVICES
vote  6
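For a task-level view, docker stack ps lists every task of the stack together with the node it has been scheduled on (output omitted here):

docker@node1:~$ docker stack ps vote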

Check the existing services

Let’s now get some more details regarding the services deployed.

docker@node1:~$ docker service ls
ID NAME MODE REPLICAS IMAGE
0b7wliblok8l vote_vote replicated 2/2 dockersamples/examplevotingapp_vote:before
nwl97sw2xci5 vote_redis replicated 2/2 redis:alpine
oa3k3lqhpkle vote_visualizer replicated 1/1 dockersamples/visualizer:stable
otmy5huzv333 vote_worker replicated 1/1 dockersamples/examplevotingapp_worker:latest
oub9r3pwunqd vote_db replicated 1/1 postgres:9.4
w1aek7v8f059 vote_result replicated 1/1 dockersamples/examplevotingapp_result:before
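This is also a good place to verify the placement constraint we discussed earlier: docker service ps shows on which node the single task of the worker service is running, and it should be one of the two managers (node1 or node3).

docker@node1:~$ docker service ps vote_worker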

Routing Mesh

If we check the configuration of the visualizer service below, we can see that this service publishes port 8080 on the Swarm.

visualizer:
  image: dockersamples/visualizer:stable
  ports:
    - "8080:8080"
  stop_grace_period: 1m30s
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  deploy:
    placement:
      constraints: [node.role == manager]

What does that mean? It means that the service will be reachable on port 8080 on any node of the Swarm, even on nodes where the service is not running any task. This is a feature provided by the routing mesh.

Let’s try to access the visualizer on node1 then.

From this display, we can see that the only task of the visualizer service is running on node3 (the placement constraint restricts it to a manager node), but we are able to access it from node1… routing mesh in action here!
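The routing mesh can also be checked from the command line: hitting port 8080 on the IP of any node (taken from the docker-machine ls output above) should return the visualizer’s HTML page, for instance:

$ curl -s http://192.168.99.100:8080 | head -n 5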

Let’s consider the vote service now. We can see it has 2 replicas, one deployed on node1 and the second one on node3. The number of replicas is defined under the deploy key of the compose file we retrieved above.

vote:
  image: dockersamples/examplevotingapp_vote:before
  ports:
    - 5000:80
  networks:
    - frontend
  depends_on:
    - redis
  deploy:
    replicas: 2
    update_config:
      parallelism: 2
    restart_policy:
      condition: on-failure

We can also note that this service publishes its internal port 80 as port 5000 on the Swarm. Through the routing mesh, every node exposes port 5000 and serves the interface of the vote service. As the vote service does not have any task on node2, let’s open a web browser targeting this node.

We can see that the interface of the vote service is available from node2 even if no task is running for this service on node2.
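The same observation can be made with curl against node2’s IP (192.168.99.101 in the listing above): port 5000 answers even though no vote task runs on that node.

$ curl -s http://192.168.99.101:5000 | grep "<title>"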

Let’s select dog and check the results from the result service. The visualizer tells us only one replica of result is running and that this one is on node3. Let’s try to access it from node1 to see the routing mesh in action once again.

We can see that our vote has been taken into account. Great!

Conclusion

This was a quick post to show how easy it is to create a stack from a version 3 compose file with Docker 1.13.

I highly recommend keeping an eye on the Voting App, as it is updated very frequently to demonstrate Docker’s new features.