MicroServices From Development To Production Using Docker, Docker Compose & Docker Swarm

Aymen El Amri
Jul 17, 2016 · 11 min read

Docker is a powerful tool, but learning how to use it the right way can take a long time, especially with the rapidly growing and sometimes confusing container ecosystem. That is why I had the idea to start writing Painless Docker.

Painless Docker is a complete and detailed guide (for beginner and intermediate levels) to creating, deploying, optimizing, securing, tracing, debugging, logging, orchestrating & monitoring Docker containers and Docker clusters in order to build good-quality microservices applications.

This article is detailed in this book. You can preorder it here.


The latest Docker Swarm release candidate solves many problems, which is why I decided to test it and see if I can use it in production. I tested Kubernetes, but the fact that it is not a built-in tool is an inconvenience, even if it has more features. This Docker release integrates Docker Swarm into the Docker Engine.

Let’s see in detail how I set up a cluster of 3 nodes using Docker 1.12.

I tested Docker Swarm before Docker 1.12; if you have tried it too, your infrastructure architecture for the Swarm cluster will have looked like this:

Docker Swarm — Source: docker.com

I am using DigitalOcean for this tutorial, but here is what things would look like when using Swarm Mode on AWS:

Swarm Mode on AWS — Source: docker.com

3 nodes, each in a different availability zone/subnet; each node is a Swarm manager, with one Leader (or primary node) and 2 secondary managers.

Let’s see how to implement this using 3 Digital Ocean Droplets.

Creating Swarm Nodes

We start by creating 3 virtual machines in the same private network (if you are using DigitalOcean make sure to activate the “private networking”).
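If you prefer to script this step, something like the following docker-machine sketch could create the three Droplets. The token variable, loop, and node names here are illustrative assumptions, not the exact setup I used:

```shell
# Hypothetical provisioning of the three nodes with docker-machine's
# DigitalOcean driver; --digitalocean-private-networking enables the
# private interface that the Swarm cluster will rely on.
for i in 1 2 3; do
  docker-machine create --driver digitalocean \
    --digitalocean-access-token "$DO_TOKEN" \
    --digitalocean-private-networking \
    "node$i"
done
```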

You can see that these machines have 3 public IP addresses, but they also have 3 private networking IP addresses, and this is what we are going to use:

Node1: 10.136.9.230
Node2: 10.136.12.9
Node3: 10.136.12.10

Private IP addresses will be used for private networking inside the Swarm cluster.

The public IP addresses of our 3 VMs are:

Node1: 192.241.155.28
Node2: 192.241.148.163
Node3: 192.81.216.250

We are going to use these with our sample application.

Installing Docker 1.12, Swarm & Docker Compose

sudo apt-get -y update
sudo apt-get -y install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty experimental" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get -y update
sudo apt-get purge lxc-docker
sudo apt-get install -y docker-engine

Docker Compose

curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

We will use Docker Compose later.

The Swarm Cluster

In this Swarm cluster, we made the choice that Node1 will be the leader and that the Swarm cluster leader will listen on the private network:

docker swarm init --listen-addr 10.136.9.230

Typing this command outputs a second command that you should run on the other nodes in order to join the leader:

docker swarm join --secret avekzwl42a9j8vcn8ct0bvsie --ca-hash sha256:81159d56e9ad1cd070dacdec86ee14f9c0b0fb183cac9d5de8a8ab12109aa1e 10.136.9.230:2377

The choice of using private networking is important for your security, especially since Docker Swarm is in its beta version.

Anyway, type the last generated command on the other nodes (node2 & node3).


Let’s type the generated command on the node2 and see what happens on the node1:

docker swarm join --secret avekzwl42a9j8vcn8ct0bvsie --ca-hash sha256:81159d56e9ad1cd070dacdec86ee14f9c0b0fb183cac9d5de8a8ab12109aa1e 10.136.9.230:2377This node joined a Swarm as a worker.

Well, it says that “This node joined a Swarm as a worker.” Let’s go back to the master and verify it:

docker node ls
ID HOSTNAME MEMBERSHIP STATUS AVAILABILITY MANAGER STATUS
9omeucf06tc16u3ifh817jxh2 node2 Accepted Ready Active
at14oahgfhct04mixjod9w2w1 * node1 Accepted Ready Active Leader

Using the docker node command, we can list all the nodes that have already joined the cluster.

Do the same thing on node3 and verify again on the master (node1):

docker node ls
ID HOSTNAME MEMBERSHIP STATUS AVAILABILITY MANAGER STATUS
9omeucf06tc16u3ifh817jxh2 node2 Accepted Ready Active
at14oahgfhct04mixjod9w2w1 * node1 Accepted Ready Active Leader
b7c9h10h1rrzgpotqocoykj6j node3 Accepted Ready Active

In a production environment, I would think more about security.

Apart from using private networking, you can use some other options when starting a cluster, like refusing any auto-joining request until it is approved:

--auto-accept none

Using a validity period for your node certificates could be another security enhancement:

--cert-expiry

You can also give the cluster an explicit secret string:

 --secret string
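Putting these options together, a more locked-down cluster initialisation on this release candidate might look like the following sketch (the secret value and the 720h certificate expiry are illustrative assumptions):

```shell
# Hypothetical hardened init on the leader: joins require manual
# approval, node certificates expire after 30 days, and the cluster
# secret is set explicitly instead of being generated.
docker swarm init \
  --listen-addr 10.136.9.230:2377 \
  --auto-accept none \
  --cert-expiry 720h \
  --secret mysecretstring
```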

You can use the Inspect command to get more information about your Swarm cluster:

# docker swarm inspect
[
  {
    "ID": "242c5wqi9xcuihozygj6s9wma",
    "Version": {
      "Index": 662
    },
    "CreatedAt": "2016-07-17T16:49:04.552266747Z",
    "UpdatedAt": "2016-07-17T17:04:41.0267578Z",
    "Spec": {
      "Name": "default",
      "AcceptancePolicy": {
        "Policies": [
          {
            "Role": "worker",
            "Autoaccept": true,
            "Secret": "$2a$10$PKqKQH.fBMeQgygtxqN1F.e/ySMaOn9w1BRHUz40Lt50IHSVoYt3e"
          },
          {
            "Role": "manager",
            "Autoaccept": false,
            "Secret": "$2a$10$PKqKQH.fBMeQgygtxqN1F.e/ySMaOn9w1BRHUz40Lt50IHSVoYt3e"
          }
        ]
      },
      "Orchestration": {
        "TaskHistoryRetentionLimit": 10
      },
      "Raft": {
        "SnapshotInterval": 10000,
        "LogEntriesForSlowFollowers": 500,
        "HeartbeatTick": 1,
        "ElectionTick": 3
      },
      "Dispatcher": {
        "HeartbeatPeriod": 5000000000
      },
      "CAConfig": {
        "NodeCertExpiry": 7776000000000000
      }
    }
  }
]

Using Docker Compose For Development

When I started working with Docker, I ran Docker Compose in production to create and scale services. It was, at least for me, quite stable.

I also realised that Docker Compose is the right tool for me to adopt Docker from the development phase through to production.

Developers can use the same container image that runs in production; they get it from the Docker Hub or a private Docker registry. Since any modification to the image is pushed to the registry, they can keep using exactly the same production configuration and environment.

In this example I am using a simple web application developed by Docker for demonstration purposes.

This is just a sample, but it is a reproducible template for building a node of complex microservices and a cluster of Swarm production nodes.

mkdir -p /apps/app1/
cd /apps/app1

We are going to use the following code for our Docker Compose file:

version: '2'
services:
  webapp:
    container_name: vote
    image: instavote/vote
    ports:
      - "80:80"
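With this docker-compose.yml in place, a developer can bring the app up locally; a minimal workflow would be:

```shell
# Start the vote app in the background, then check its state.
cd /apps/app1
docker-compose up -d
docker-compose ps
```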

Let’s push this to the Docker Hub (you can use your own private Docker registry instead).

docker commit -m "first commit" -a "Aymen El Amri" ea534014a41f eon01/vote:v1
docker push eon01/vote:v1

One of the best things about Docker is that images are distributable. In this example I used a web app for demonstration, but let’s suppose that we have made some modifications (after mounting the application directory and getting the code out of the container).

Swarm To Deploy Services

Now that we have our Docker Swarm cluster (Node1 + Node2 + Node3), that we have made our modifications to the vote app container & code, and that everything is ready for deployment, let’s create the service:

root@node1:~# docker service create --name vote -p 8080:80 eon01/vote:v1
21nd38tu0kpuy52g7cdu4oz23

Let’s see which services are running on our Leader machine:

root@node1:~# docker service ls
ID NAME REPLICAS IMAGE COMMAND
1asx5ueen9ii vote 10/10 eon01/vote:v1

Swarm To Scale Services

A single command scales the existing service that we called “vote”:

root@node1:~# docker service scale vote=10

We run the command on the Leader node (Node1). Notice that when running the following command on the same node:

docker ps

you will not find all ten containers on the same machine; the scaled containers are distributed across the whole cluster (3 nodes).

Node1:

root@node1:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e23f5d89997 eon01/vote:v1 "gunicorn app:app -b " 8 minutes ago Up 8 minutes 80/tcp vote.51.95jujipdhd1dm4trkn8o0kgj3
87783db001fc eon01/vote:v1 "gunicorn app:app -b " 8 minutes ago Up 8 minutes 80/tcp vote.44.aokel9gezo8p9ne8wp52ghm3c
c6e265649145 eon01/vote:v1 "gunicorn app:app -b " 8 minutes ago Up 8 minutes 80/tcp vote.55.efvkiwobwe9w1h7h7ddfsoaor
37f8418cf488 eon01/vote:v1 "gunicorn app:app -b " 8 minutes ago Up 8 minutes 80/tcp vote.12.90m26t7x4jjduwoh41nclyxi4

Node2:

root@node2:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e4a2a7968ab eon01/vote:v1 "gunicorn app:app -b " 7 minutes ago Up 7 minutes 80/tcp vote.38.adln4aqu0zbrm54kakxyfiejz
f063cf489756 eon01/vote:v1 "gunicorn app:app -b " 7 minutes ago Up 7 minutes 80/tcp vote.85.0rqxmi2h8vnn63rcu94xtv9d1

Node3:

root@node3:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb3c50833abc eon01/vote:v1 "gunicorn app:app -b " 7 minutes ago Up 7 minutes 80/tcp vote.69.4509f465ozde6t6yi8ghsa5k1
6baab4ba33d2 eon01/vote:v1 "gunicorn app:app -b " 8 minutes ago Up 8 minutes 80/tcp vote.88.bukyzg40dcgexyditlkyf58p7

You can also use the tasks command from the Leader/Master node (Node1) to list all the nodes and all the containers.

root@node1:~# docker service tasks vote
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
90m26t7x4jjduwoh41nclyxi4 vote.12 vote eon01/vote:v1 Running 11 minutes ago Running node1
70s3edthlmat8avlnftqn6g9l vote.15 vote eon01/vote:v1 Accepted 3 seconds ago Accepted node2
64devmstxti4rxjdlu3o6tkj1 vote.24 vote eon01/vote:v1 Accepted 4 seconds ago Accepted node3
adln4aqu0zbrm54kakxyfiejz vote.38 vote eon01/vote:v1 Running 10 minutes ago Running node2
aokel9gezo8p9ne8wp52ghm3c vote.44 vote eon01/vote:v1 Running 11 minutes ago Running node1
95jujipdhd1dm4trkn8o0kgj3 vote.51 vote eon01/vote:v1 Running 11 minutes ago Running node1
efvkiwobwe9w1h7h7ddfsoaor vote.55 vote eon01/vote:v1 Running 11 minutes ago Running node1
4509f465ozde6t6yi8ghsa5k1 vote.69 vote eon01/vote:v1 Running 9 minutes ago Running node3
0rqxmi2h8vnn63rcu94xtv9d1 vote.85 vote eon01/vote:v1 Running 10 minutes ago Running node2
bukyzg40dcgexyditlkyf58p7 vote.88 vote eon01/vote:v1 Running 10 minutes ago Running node3
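To see at a glance how the replicas are spread, you can tally the NODE column of that output. Here is a small awk sketch, run against a copy of the table above (fed in as a heredoc so it runs anywhere, without a live cluster):

```shell
# Count tasks per node from the `docker service tasks vote` output above.
# NR > 1 skips the header row; $NF is the last column (the node name).
counts=$(awk 'NR > 1 { per_node[$NF]++ }
              END { for (n in per_node) print n, per_node[n] }' <<'EOF' | sort
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
90m26t7x4jjduwoh41nclyxi4 vote.12 vote eon01/vote:v1 Running 11 minutes ago Running node1
70s3edthlmat8avlnftqn6g9l vote.15 vote eon01/vote:v1 Accepted 3 seconds ago Accepted node2
64devmstxti4rxjdlu3o6tkj1 vote.24 vote eon01/vote:v1 Accepted 4 seconds ago Accepted node3
adln4aqu0zbrm54kakxyfiejz vote.38 vote eon01/vote:v1 Running 10 minutes ago Running node2
aokel9gezo8p9ne8wp52ghm3c vote.44 vote eon01/vote:v1 Running 11 minutes ago Running node1
95jujipdhd1dm4trkn8o0kgj3 vote.51 vote eon01/vote:v1 Running 11 minutes ago Running node1
efvkiwobwe9w1h7h7ddfsoaor vote.55 vote eon01/vote:v1 Running 11 minutes ago Running node1
4509f465ozde6t6yi8ghsa5k1 vote.69 vote eon01/vote:v1 Running 9 minutes ago Running node3
0rqxmi2h8vnn63rcu94xtv9d1 vote.85 vote eon01/vote:v1 Running 10 minutes ago Running node2
bukyzg40dcgexyditlkyf58p7 vote.88 vote eon01/vote:v1 Running 10 minutes ago Running node3
EOF
)
echo "$counts"   # node1 has 4 tasks, node2 and node3 have 3 each
```

On a real cluster you would pipe `docker service tasks vote` straight into the same awk program instead of the heredoc.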

This is the AB test, showing the difference before and after scaling the service to 5 (note that I haven’t done any other optimisations):
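For reference, a benchmark like this can be reproduced with Apache Bench; a minimal invocation against the published port would be something like the following (the request count and concurrency level here are arbitrary assumptions):

```shell
# Hit the service through the routing mesh on any node's public IP.
ab -n 1000 -c 100 http://192.241.155.28:8080/
```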


Swarm To Inspect Services

root@node1:~# docker service inspect vote
[
  {
    "ID": "1asx5ueen9iicfokiz7rcjhmq",
    "Version": {
      "Index": 1046
    },
    "CreatedAt": "2016-07-17T16:50:28.903732962Z",
    "UpdatedAt": "2016-07-17T17:05:50.657027966Z",
    "Spec": {
      "Name": "vote",
      "TaskTemplate": {
        "ContainerSpec": {
          "Image": "eon01/vote:v1"
        },
        "Resources": {
          "Limits": {},
          "Reservations": {}
        },
        "RestartPolicy": {
          "Condition": "any",
          "MaxAttempts": 0
        },
        "Placement": {}
      },
      "Mode": {
        "Replicated": {
          "Replicas": 10
        }
      },
      "UpdateConfig": {},
      "EndpointSpec": {
        "Mode": "vip",
        "Ports": [
          {
            "Protocol": "tcp",
            "TargetPort": 80,
            "PublishedPort": 8080
          }
        ]
      }
    },
    "Endpoint": {
      "Spec": {
        "Mode": "vip",
        "Ports": [
          {
            "Protocol": "tcp",
            "TargetPort": 80,
            "PublishedPort": 8080
          }
        ]
      },
      "Ports": [
        {
          "Protocol": "tcp",
          "TargetPort": 80,
          "PublishedPort": 8080
        }
      ],
      "VirtualIPs": [
        {
          "NetworkID": "7xz4ywzhy7wti6121sfx8emoq",
          "Addr": "10.255.0.2/16"
        }
      ]
    }
  }
]

Swarm To Update Running Services

We have already deployed v1; you may have noticed that our repository tag was:

eon01/vote:v1

Now we would like to update the running service with the v2.

docker commit -m "first commit" -a "Aymen El Amri" vote eon01/vote:v2
docker push eon01/vote:v2

In this case, we should log in to the master and re-deploy while updating the image tag:

root@node1:~# docker service update --image eon01/vote:v2 vote
vote

The last command updates the containers from v1 to v2 on the Leader node and on the other slaves.
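The 1.12 engine also lets you control how such a rolling update proceeds; a sketch (the parallelism and delay values here are arbitrary):

```shell
# Update two tasks at a time, waiting 10 seconds between batches.
docker service update \
  --update-parallelism 2 \
  --update-delay 10s \
  --image eon01/vote:v2 \
  vote
```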

root@node2:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2a5d27c9b7a eon01/vote:v2 "gunicorn app:app -b " 27 seconds ago Up 23 seconds 80/tcp vote.88.75v272k1ejipsuhrx849tkhgp
6c17999781ac eon01/vote:v2 "gunicorn app:app -b " 27 seconds ago Up 23 seconds 80/tcp vote.12.ax0nh98as1xmu0q69vcsfq8sq
9549f47cb055 eon01/vote:v2 "gunicorn app:app -b " 27 seconds ago Up 25 seconds 80/tcp vote.85.1wm3ea9rfi6c8v8qlp7h2i4kv
8ce09c29b4dc eon01/vote:v2 "gunicorn app:app -b " 27 seconds ago Up 23 seconds 80/tcp vote.51.etdcuijr8vsx7eyy0ptvlbbzm

Deleting Services

In order to delete a service, use the service rm command:

docker service rm vote

Removing Nodes From Our Cluster

Just as a node can join a cluster, it can also leave it.

Let's force the Node2 to leave the cluster:

root@node2:~# docker swarm leave
Node left the swarm.

What about the 10 containers running simultaneously on the 3 nodes?

root@node1:~# docker  service tasks vote
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
evhx77i6b1575sj5oxu5b5yer vote.12 vote eon01/vote:v2 Accepted 1 seconds ago Accepted node3
03kxbhenyroxtpqdkom2mn4z6 vote.15 vote eon01/vote:v2 Running 35 minutes ago Running node3
438eruwgle9cwhwav26vrgkhk vote.24 vote eon01/vote:v2 Running 35 minutes ago Running node3
93apj9vyt7hinarvfp9geg7sv vote.38 vote eon01/vote:v2 Running 35 minutes ago Running node3
1xt7e9xm3d6yuuu87o2bpb4zp vote.44 vote eon01/vote:v2 Running 35 minutes ago Running node1
1hg5b5bhnqlpubmldnknz7qxp vote.51 vote eon01/vote:v2 Running about a minute ago Running node1
0vwormex59myle3n918s11ich vote.55 vote eon01/vote:v2 Running 35 minutes ago Running node3
2qm4v0c47qfye2izglixb0h21 vote.69 vote eon01/vote:v2 Running 35 minutes ago Running node1
10dn58my2tbq6kxa534cchcu1 vote.85 vote eon01/vote:v2 Running about a minute ago Running node1
39m8ia95h0d3gtkco6rf1ii2q vote.88 vote eon01/vote:v2 Running about a minute ago Running node1

Notice that the 10 containers are redistributed across the 2 remaining nodes (Node1 & Node3).

I also noticed that the redistribution was not instantaneous, which is not a good point for production.

Other Docker Swarm Features

Docker Swarm in its latest version is not just a separate orchestration tool, but a built-in cluster management tool with a secure and distributed design. There is no need, for example, to use Nginx, HAProxy or Traefik as a load balancer in front of your microservices.

As you can also notice, the RC version has several features:

  • Cluster management integrated with Docker Engine
  • Decentralized design
  • Declarative service model
  • Scaling
  • Desired state reconciliation
  • Multi-host networking
  • Service discovery
  • Load balancing
  • Secure by default
  • Rolling updates

What is missing?

The built-in Docker Swarm was just released and there may be more modifications or enhancements to come, but what I have noticed is that Docker Compose is not yet integrated.

When a node leaves a cluster, it should at least wait for the other nodes to take over the containers running on the disappearing node, but I have noticed that typing the following on Node2

docker swarm leave

will result in the disappearance of the node from the cluster, but a few seconds later I can still see Node2 as up (so I imagine it is still getting traffic):

root@node1:~# docker  service tasks vote
ID NAME SERVICE IMAGE LAST STATE DESIRED STATE NODE
ax0nh98as1xmu0q69vcsfq8sq vote.12 vote eon01/vote:v2 Running 33 minutes ago Running node2
03kxbhenyroxtpqdkom2mn4z6 vote.15 vote eon01/vote:v2 Running 33 minutes ago Running node3
438eruwgle9cwhwav26vrgkhk vote.24 vote eon01/vote:v2 Running 33 minutes ago Running node3
93apj9vyt7hinarvfp9geg7sv vote.38 vote eon01/vote:v2 Running 33 minutes ago Running node3
1xt7e9xm3d6yuuu87o2bpb4zp vote.44 vote eon01/vote:v2 Running 33 minutes ago Running node1
etdcuijr8vsx7eyy0ptvlbbzm vote.51 vote eon01/vote:v2 Running 33 minutes ago Running node2
0vwormex59myle3n918s11ich vote.55 vote eon01/vote:v2 Running 33 minutes ago Running node3
2qm4v0c47qfye2izglixb0h21 vote.69 vote eon01/vote:v2 Running 33 minutes ago Running node1
1wm3ea9rfi6c8v8qlp7h2i4kv vote.85 vote eon01/vote:v2 Running 33 minutes ago Running node2
75v272k1ejipsuhrx849tkhgp vote.88 vote eon01/vote:v2 Running 33 minutes ago Running node2
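A safer way to take a node out of rotation, at least on paper, is to drain it first so that the manager reschedules its tasks before the node actually leaves. I haven’t tested this here, so treat it as a sketch:

```shell
# On the manager: stop scheduling work on node2 and move its tasks away.
docker node update --availability drain node2

# Then, on node2 itself, leave the cluster.
docker swarm leave
```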

What We Have Seen In This Tutorial

In this tutorial we used a single service with three nodes (1 master and two slaves). In Swarm mode, you can easily create containers from this service and scale it to many container instances.

We have also seen how to use Docker Compose to share images with developers (images are pulled from the Docker registry to stay up to date).

We used a single microservice, but this is a template that could easily be reproduced with many other microservices.

Actually, I may use this version of Docker Swarm in production, but not for critical and high-load applications like databases, and I am looking forward to discovering the final version.

Connect Deeper

If you resonated with this article, please subscribe to DevOpsLinks: an online community of diverse & passionate DevOps, SysAdmins & developers from all over the world.

You can find me on Twitter, Clarity or my blog, and you can also check my books: SaltStack For DevOps, The Jumpstart Up & Painless Docker.

Don’t forget to recommend this article to your followers and share it.



Aymen El Amri

Written by

Aymen El Amri is the founder and CEO of www.eralabs.io and www.faun.dev community. He is a tech author, cloud-native architect, entrepreneur and startup advisor
