DOCKER SWARM — Creating & Deploying services
If you’ve worked with Docker containers, you already understand how powerful they can be. But you can multiply that power by creating a cluster of Docker hosts, called a Docker swarm. Believe it or not, the process is really simple: all you need is one machine to serve as the swarm Manager and a few Docker hosts to join the swarm as Worker nodes.
Let’s get started!
From the AWS console, navigate to EC2.
For this demo I will create 3 instances: 1 manager and 2 workers.
Make sure that your instances are in the same region, and remember when setting up security groups to allow traffic to pass between your instances. (Swarm needs TCP port 2377, TCP/UDP port 7946, and UDP port 4789 open between nodes; for a quick demo, allowing all traffic from anywhere works.)
Click Review and Launch.
Now we have our 3 instances in place. A proper naming convention can help you identify which instance will become the Manager and which will become the Workers.
Next, let’s connect to these instances and install Docker on each one of them.
First, let’s SSH into our Manager and install Docker.
The installation process is well documented in Docker’s official documentation; follow the instructions there to install the latest version of Docker Engine.
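As a rough sketch, on a fresh Ubuntu instance Docker’s convenience install script is the quickest route (assumes curl is available; for production, follow the official repository-based install instead):

```shell
# Download and run Docker's official convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```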
After installing Docker, you can check the installed version using:
docker --version
It is best to give each node a proper name so that it can be easily identified. To change the name, you need to sign in as the root user and edit /etc/hostname, the file that stores the name of the node.
Once you change the name, you need to restart your instance.
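As a sketch, renaming a node looks like this (run as root; the names here are just examples, so use manager, worker1, and worker2 on the respective machines):

```shell
# Write the new name into /etc/hostname (on the manager machine)
echo "manager" > /etc/hostname

# The new hostname takes effect after a restart
reboot
```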
After installing Docker on each machine using the same steps mentioned above, SSH into the instances using three different terminals.
On all three terminals, sign in as the root user.
So far, we have only named these machines manager and worker; Docker doesn’t yet know about any swarm roles.
Run this command on the Manager machine:
docker swarm init --advertise-addr <manager_ip>
Once the command completes, your Manager instance has actually become the Manager of the swarm, and the following output should appear on your screen:
To add the other machines as Worker nodes, copy the command
docker swarm join --token ….
onto the worker machines.
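If you lose the join command, the manager can re-print it at any time; the token and IP below are placeholders, not real values:

```shell
# On the manager: re-print the full worker join command
docker swarm join-token worker

# On each worker: run the printed command, which has this shape
docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377
```

Port 2377 is the swarm’s cluster-management port, which is why the security group needs it open.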
Once done, your screen will look like this
As a quick check, you can see that your worker machines display the message:
This node joined a swarm as a worker.
Let’s take a look at what you have created in your first cluster. Run the command docker node ls from the manager machine to view your swarm’s connected nodes.
You can see we now have 1 manager and 2 workers.
Next, let us deploy a simple service in this cluster.
With our swarm up and running, let’s get a service deployed to see how the scheduling works. To start a service on the swarm, go back to the manager machine.
Run the command:
docker service create --replicas 1 --name helloworld alpine ping docker.com
The arguments alpine ping docker.com define the service as an Alpine Linux container that executes the command ping docker.com.
To see the list of running services, use the command:
docker service ls
Now that you’ve deployed a service to the swarm, you can view the details of the service.
To view the details in a readable format, we use the --pretty flag.
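The inspect command looks like this, using the helloworld service name from the create step above:

```shell
# Show service details; --pretty renders human-readable output instead of raw JSON
docker service inspect --pretty helloworld
```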
To see which nodes are running the service, run
docker service ps <SERVICE-ID>
In this case, the one instance of the helloworld service is running on the Manager node. By default, manager nodes in a swarm can execute tasks just like worker nodes.
Let’s scale our service and then check which nodes are running its tasks. To scale, we will use:
docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>
Containers running in a service are called tasks.
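For example, to scale the helloworld service to the 3 tasks described below:

```shell
# Scale helloworld from 1 task to 3; swarm schedules the new tasks across nodes
docker service scale helloworld=3
```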
Now check which nodes are running the service
You can see that the swarm has created 2 new tasks, scaling to a total of 3 running instances of Alpine Linux. The tasks are distributed across the nodes of the swarm.
Let us SSH into one of the worker nodes and see the containers running on the node you’re connected to. Use:
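On the worker, the standard container listing command shows the locally scheduled tasks:

```shell
# On a worker node: list the containers running on this machine
docker ps
```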
Kudos! You have just witnessed the Manager scheduling which containers run where.
If you find it interesting, do give a clap!