Deploy a service with the Swarm mode of Docker Engine 1.12

Purpose

The purpose of this article is to explain how to deploy a service on a cluster created with the Swarm mode of Docker 1.12.

Reminder

In a previous article, we created a Swarm using the swarm mode feature of Docker 1.12.

Our Swarm is made up of 2 manager nodes and 2 worker nodes (each node created with the excellent Docker Machine). Let’s see all our hosts.

$ docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.0-rc3
manager2   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.12.0-rc3
worker1    -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.0-rc3
worker2    -        virtualbox   Running   tcp://192.168.99.103:2376           v1.12.0-rc3
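
As a reminder, each host was created with Docker Machine. The exact commands are in the previous article, but they probably looked something like the following (the VirtualBox driver and the node names are taken from the listing above):

$ docker-machine create -d virtualbox manager1
$ docker-machine create -d virtualbox manager2
$ docker-machine create -d virtualbox worker1
$ docker-machine create -d virtualbox worker2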

In Swarm mode, we can list the nodes of the cluster with the docker node API:

$ docker-machine ssh manager1 docker node ls
ID                           HOSTNAME   MEMBERSHIP   STATUS   AVAILABILITY   MANAGER STATUS
109a5ufy8e3ey17unqa16wbj7    manager2   Accepted     Ready    Active         Reachable
4chbn8uphm1tidr93s64zknbq *  manager1   Accepted     Ready    Active         Leader
8nw7g1q0ehwq1jrvid1axtg5n    worker2    Accepted     Ready    Active
8rrdjg4uf9jcj0ma2uy8rkw5v    worker1    Accepted     Ready    Active
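
By the way, if you want more details about a given node (its role, availability, resources, engine version, ...), the same docker node API lets us inspect it. The output is a rather verbose JSON document, so it is not reproduced here:

$ docker-machine ssh manager1 docker node inspect manager1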

What about services?

We were already familiar with the notion of services from Docker Compose (especially version 2 of the file format, which introduced a services key in docker-compose.yml files).
Docker 1.12 pushes the notion of service one step further and provides a dedicated API.

$ docker-machine ssh manager1 docker service --help
Usage:  docker service COMMAND

Manage Docker services

Options:
      --help   Print usage

Commands:
  create    Create a new service
  inspect   Display detailed information on one or more services
  tasks     List the tasks of a service
  ls        List services
  rm        Remove a service
  scale     Scale one or multiple services
  update    Update a service

Run 'docker service COMMAND --help' for more information on a command.

Let's use it to display the list of services currently available on our Swarm.

$ docker-machine ssh manager1 docker service ls
ID NAME REPLICAS IMAGE COMMAND

None... which makes sense, as we haven't run anything so far.

Let's deploy our first Node.js service

I've created a very simple Node.js application that consists of a web server returning:
- a random city of the world
- the IP of the container that handled the request

The code can be found here (do not expect anything really fancy, though :) ).
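
By the way, if you just want a feel for what the image does before involving the Swarm, you should be able to run it as a plain container on any Docker host (same image and port mapping as the service we are about to create; if the host is a Docker Machine VM, replace localhost with the machine's IP):

$ docker run -d -p 8080:80 lucj/randomcity:1.1
$ curl http://localhost:8080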

We'll use the docker service API to deploy our first service:

$ docker-machine ssh manager1 docker service create --name city --replicas 5 -p 8080:80/tcp lucj/randomcity:1.1
7e3qjrf03fkr6c8no9jyjyxcl

Let's detail the parameters of the docker service create command:

  • create: indicates we want to add a new service to our cluster
  • name: the name of the service (quite obvious)
  • replicas: the number of instances of the specified image to run
  • p: the port on which the service is exposed outside of the Swarm
  • the last parameter is the name of the image to use

An important thing to note here: by default, Swarm mode provides an IPVS (level 4) load balancer for our service.
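
To see how the service is configured (published port, number of replicas, the endpoint sitting in front of the tasks, ...), the docker service API also lets us inspect it; the output is a JSON document describing the whole service:

$ docker-machine ssh manager1 docker service inspect city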

Let's check right away if our service is listed now.

$ docker-machine ssh manager1 docker service ls
ID            NAME   REPLICAS   IMAGE                 COMMAND
7e3qjrf03fkr  city   0/5        lucj/randomcity:1.1

The service is listed, but none of its replicas (= instances of the image of our service) is running yet. If we run the same command a couple of seconds later, the output indicates that all the replicas are running.

$ docker-machine ssh manager1 docker service ls
ID            NAME   REPLICAS   IMAGE                 COMMAND
7e3qjrf03fkr  city   5/5        lucj/randomcity:1.1

And we can see how the tasks (= instances of the image) are dispatched across the Swarm.

$ docker-machine ssh manager1 docker service tasks city
ID                         NAME    SERVICE  IMAGE                LAST STATE          DESIRED STATE  NODE
2g49orjnxzyb5suka45p66aub  city.1  city     lucj/randomcity:1.1  Running 23 seconds  Running        worker2
2q8igevbr23k22we9pe28wkwv  city.2  city     lucj/randomcity:1.1  Running 23 seconds  Running        manager1
agpgjtra8xtmfn5ac0q4gurct  city.3  city     lucj/randomcity:1.1  Running 23 seconds  Running        worker1
ajpk53fanot3zj1fi7zcxoole  city.4  city     lucj/randomcity:1.1  Running 23 seconds  Running        manager2
b8sgtjdxojlo5x0eei0xcufga  city.5  city     lucj/randomcity:1.1  Running 23 seconds  Running        worker2

This output tells us that one task (= one container) is running on each of our 4 nodes, except for worker2, which runs two of them.

How to access our service

We specified a port mapping when we created the service (-p 8080:80); this makes our service available on port 8080 through the public IP of each host of the Swarm. Let's see that in action.

# On manager 1
$ curl http://192.168.99.100:8080
{"message":"10.255.0.12 suggests to visit Omantod"}
$ curl http://192.168.99.101:8080
{"message":"10.255.0.8 suggests to visit Doramug"}
# On manager2
$ curl http://192.168.99.101:8080
{"message":"10.255.0.12 suggests to visit Nuseva"}
# On worker1
$ curl http://192.168.99.102:8080
{"message":"10.255.0.8 suggests to visit Uvehentac"}
$ curl http://192.168.99.102:8080
{"message":"10.255.0.12 suggests to visit Sukaazo"}
# Let's play a little bit more on worker2
$ curl http://192.168.99.103:8080
{"message":"10.255.0.8 suggests to visit Miwlihmi"}
$ curl http://192.168.99.103:8080
{"message":"10.255.0.12 suggests to visit Ceotval"}
$ curl http://192.168.99.103:8080
{"message":"10.255.0.11 suggests to visit Viekieho"}
$ curl http://192.168.99.103:8080
{"message":"10.255.0.10 suggests to visit Bakublu"}
$ curl http://192.168.99.103:8080
{"message":"10.255.0.9 suggests to visit Roverdah"}
$ curl http://192.168.99.103:8080
{"message":"10.255.0.8 suggests to visit Pudifoni"}

We can observe here that the requests are handled in a round-robin way (each task handles a request in turn).
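
A quick way to see this round robin in action is to fire a handful of requests in a loop and watch the container IP change from one reply to the next (any node's IP will do; manager1's IP is used below):

$ for i in $(seq 1 5); do curl -s http://192.168.99.100:8080; echo; done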

Can we scale a little bit?

Of course we can; there is an API for that:

$ docker-machine ssh manager1 docker service scale city=10
city scaled to 10
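
Note: as far as I know, scale is simply a convenience on top of the more general docker service update command, so the following should be equivalent (shown for reference only):

$ docker-machine ssh manager1 docker service update --replicas 10 city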

Let's verify that everything is under control.

$ docker-machine ssh manager1 docker service ls
ID            NAME   REPLICAS   IMAGE                 COMMAND
7e3qjrf03fkr  city   10/10      lucj/randomcity:1.1
$ docker-machine ssh manager1 docker service tasks city
ID                         NAME     SERVICE  IMAGE                LAST STATE          DESIRED STATE  NODE
2g49orjnxzyb5suka45p66aub  city.1   city     lucj/randomcity:1.1  Running 29 minutes  Running        worker2
2q8igevbr23k22we9pe28wkwv  city.2   city     lucj/randomcity:1.1  Running 29 minutes  Running        manager1
agpgjtra8xtmfn5ac0q4gurct  city.3   city     lucj/randomcity:1.1  Running 29 minutes  Running        worker1
ajpk53fanot3zj1fi7zcxoole  city.4   city     lucj/randomcity:1.1  Running 29 minutes  Running        manager2
b8sgtjdxojlo5x0eei0xcufga  city.5   city     lucj/randomcity:1.1  Running 29 minutes  Running        worker2
255aj0y0gazgz6elbguaher2p  city.6   city     lucj/randomcity:1.1  Running 58 seconds  Running        manager1
9skqtrl96vyzgnyrzmm3o2pc3  city.7   city     lucj/randomcity:1.1  Running 58 seconds  Running        manager2
0zf9yoifh2jd7e0oezs8dhowo  city.8   city     lucj/randomcity:1.1  Running 58 seconds  Running        worker1
3l2xekk0wxzi9rsc5vkmejnb6  city.9   city     lucj/randomcity:1.1  Running 58 seconds  Running        worker1
0n8wqp9ngt0apc3xk0ain5yff  city.10  city     lucj/randomcity:1.1  Running 58 seconds  Running        worker2
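
Once we are done playing with it, removing the service (and all its tasks) is a one-liner thanks to the rm subcommand we saw in the help output above:

$ docker-machine ssh manager1 docker service rm city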

Summary

We saw through an example how easy it is to create and scale a service using the service API. In the next article, we'll see how to deploy an application made up of several services.
