Docker Swarm: Deploying a 3-Tier Architecture

“Swarming the Dock”

Ifeanyi Otuonye (REXTECH)
Nerd For Tech
9 min read · Mar 8, 2023


Intro

Let’s get familiar with Docker Swarm by first defining it. Docker Swarm is a container orchestration tool that allows you to manage a cluster of Docker nodes and deploy and scale your applications across them.

Let’s take an example using a house. Imagine you have a house with different rooms (nodes) and you want to control the temperature (application) throughout the entire house. With Docker Swarm, you can create a cluster of nodes and deploy your application (temperature control) to each node.

When you want to scale your application, let’s say you want to add another temperature sensor in one of the rooms, Docker Swarm makes it easy to add that node to the cluster and distribute the application to it.

Additionally, Docker Swarm provides built-in load balancing and failover mechanisms to ensure that your application is highly available and resilient.

Today, I’m going to show you how we can leverage Docker Swarm to create a cluster of nodes with one manager, deployed on Amazon EC2 Instances on the Amazon Web Services cloud platform.

Docker Swarm Info

Docker Swarm cluster

A Docker Swarm cluster is a group of Docker nodes that work together to provide a highly available and scalable platform for deploying and running applications using Docker containers.

Docker Swarm node

A node in Docker Swarm refers to a physical or virtual machine that is part of a Docker Swarm cluster. It can be either a Manager node or a Worker node.

Docker stack deploy

The docker stack deploy command is used to deploy a stack to a Docker Swarm cluster.
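As a sketch of the typical stack lifecycle (the Compose file name and the stack name “mystack” here are placeholders, not from this demo):

```shell
# Deploy (or update) a stack from a Compose file
docker stack deploy -c docker-compose.yml mystack

# List the services that belong to the stack
docker stack services mystack

# Remove the stack and everything it created
docker stack rm mystack
```

Running `docker stack deploy` again with the same stack name updates the existing services in place rather than creating duplicates.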

Manager node

A manager node is a node that manages the Swarm cluster and coordinates the tasks that run on worker nodes.

Worker node

A worker node is a node that runs tasks and services as directed by the Manager node.

Docker service

The docker service command is used to manage services in a Docker Swarm cluster.
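A few of the most common `docker service` subcommands, as a sketch (the service name “web” is a placeholder):

```shell
# List all services running in the swarm
docker service ls

# Show which node each of the service's tasks is running on
docker service ps web

# Scale the service up or down to a given replica count
docker service scale web=5

# Stream logs from all of the service's tasks
docker service logs web
```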

Prerequisites

  • Basic knowledge and understanding of containerization and Docker
  • Basic Linux command line knowledge
  • AWS Account with an IAM user

Objectives

1. Create a Docker Swarm of Amazon EC2 Instances that consists of one manager and three worker nodes.

2. Create a Docker Stack to —

— Deploy a service based on the Redis Docker image with 4 replicas across the cluster.

— Deploy a service based on the Apache Docker image with 10 replicas across the cluster.

— Deploy a service based on the Postgres Docker image with 1 replica across the cluster.

3. Ensure no stacks run on the Manager (administrative) node.

Step 0: Set up Amazon EC2 Instance environment

Launch and connect into four EC2 Instances

Head to my previous article “WebEC2tra! WebEC2Tra! Read All About It!” and follow Step 1 to learn how to launch and configure EC2 Instances for this demonstration.

When creating a Security Group for the EC2 Instances, make sure to add the following rules with source 0.0.0.0/0

  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic
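If you prefer the AWS CLI over the console, the rules above can be scripted. This sketch prints the `authorize-security-group-ingress` calls rather than running them, and the security-group ID is a hypothetical placeholder you would replace with your own:

```shell
#!/bin/sh
# Hypothetical security-group ID — replace with your own
SG_ID="sg-0123456789abcdef0"

# The Swarm ports listed above, as protocol:port pairs
for rule in tcp:2377 tcp:7946 udp:7946 udp:4789; do
  proto=${rule%%:*}   # text before the colon, e.g. "tcp"
  port=${rule##*:}    # text after the colon, e.g. "2377"
  echo "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol $proto --port $port --cidr 0.0.0.0/0"
done
```

Pipe the output to `sh` once you have substituted your real security-group ID.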

Scroll down to the “Advanced details” section, scroll down to “User data”, then paste this code below into the text box.

#!/bin/bash

#Update all yum package repositories
yum update -y

#Install Docker
yum install -y docker

#Start Docker service
systemctl start docker.service

#Enable Docker service automatically on boot up
systemctl enable docker.service

In the “Summary” pane on the left, make sure to change the “Number of instances” to 4, then click “Launch instance”.

Navigate back to your EC2 Instances and verify that all four Instances have launched. You can also rename the EC2 Instances to distinguish the manager node from the worker nodes.

Connect into the four EC2 Instances

For simplicity’s sake, we will use “EC2 Instance Connect” to connect to each EC2 Instance.

To connect to an EC2 Instance, click “Connect” on the top right, choose the “EC2 Instance Connect” tab, then click “Connect”. This should open a new tab.

Switch to the root user by running the following command —

sudo su

Now, as the root user, we can check the Docker Client and Server version to verify connectivity to the Docker server.

Run the following command to check the Docker version —

docker version

You should receive an output of the version information of both the Docker Client and Docker Server, as seen below.

Remember to repeat these steps for all the EC2 Instances.

Now that we’ve set up our environment, let’s proceed to Step 1 — initializing Docker Swarm on the manager EC2 Instance and joining the other worker node instances to the Swarm.

Step 1: Initialize Docker Swarm and join others to Swarm

We will initialize Docker Swarm on one of the EC2 Instances, which will be appointed as the Swarm manager. The other three instances will then join the Swarm as worker nodes.

To initialize Docker Swarm on the designated manager node, run the following command on that EC2 Instance —

docker swarm init

This will output a command, as shown below, that will be used to join the other EC2 Instance worker nodes to the Swarm.

docker swarm join --token <token> <manager-ip>:2377

Replace “<token>” with the token generated from the results of the “docker swarm init” command and “<manager-ip>” with the IP address of the Swarm manager, as seen below —

On the remaining three EC2 instances, join them to the Swarm by running the command provided.
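If the join command has scrolled out of your terminal, you don’t need to re-initialize the Swarm — it can be regenerated at any time on the manager:

```shell
# Re-print the full join command for worker nodes
docker swarm join-token worker

# Re-print the join command for adding more manager nodes
docker swarm join-token manager

# Print only the token itself
docker swarm join-token -q worker
```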

After running the command on each remaining EC2 Instance, you should receive a message stating “This node joined a swarm as a worker”, as seen below.

Verify that each node has successfully joined the Swarm by running the following command on the Swarm manager —

docker node ls

This will output a list of all the nodes in the Swarm, as shown below.

Great! We have now initialized Docker Swarm on a manager EC2 Instance and joined the worker nodes to the Swarm. Now, let’s proceed to creating a Docker stack using a Docker Compose file.

Step 2: Create a Docker Stack using Docker Compose file

Let’s create a Docker Compose file that defines our 3-tier architecture services, to be deployed as a stack across the Docker Swarm cluster. Open the nano text editor by running the following command —

nano docker-compose.yml

A text editor window should open. Copy and paste the code below into the text editor.

version: "3.9"

services:
  redis:
    image: redis:alpine
    deploy:
      replicas: 4
      placement:
        constraints:
          - node.role == worker

  apache:
    image: httpd:alpine
    deploy:
      replicas: 10
      placement:
        constraints:
          - node.role == worker

  postgres:
    image: postgres:alpine
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker
    environment:
      POSTGRES_PASSWORD: mysecretpassword

Let’s break down the contents of this file —

The Docker Compose file defines a set of services that can be deployed as a single unit, using the “docker stack deploy” command which we will use subsequently. The file defines three services named “redis”, “apache”, and “postgres”.

The “version” field at the top of the file specifies the version of the Docker Compose file format; in this case, the version is “3.9”.

For the “redis” service, the Compose file specifies that it should use the official Redis image of version alpine (image: redis:alpine) and that it should be deployed with 4 replicas (replicas: 4) using the deploy section.

For the “apache” service, the Compose file specifies that it should use the official Apache HTTP Server image of version alpine (image: httpd:alpine), and that it should be deployed with 10 replicas (replicas: 10) using the “deploy” section.

For the “postgres” service, the Compose file specifies that it should use the official PostgreSQL image of version alpine (image: postgres:alpine), and that it should be deployed with 1 replica (replicas: 1) using the “deploy” section. The environment variable “POSTGRES_PASSWORD” is set to “mysecretpassword”. This password will be used to initialize and access the Postgres database.

The “deploy” section is used to configure deployment-related settings for each service, such as the number of replicas, placement constraints, and update policies. When this Compose file is used to deploy the stack, the specified number of replicas for each service will be created and distributed across the Docker Swarm cluster, as specified by the deployment configuration.

The “placement” sections of each service explicitly defines the placement of the service across the Docker Swarm. The use of the “node.role” placement strategy with the value “worker”, specifies that the service should only run on the worker nodes and not the manager node.
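The same constraint isn’t limited to Compose files — it can also be applied when creating a service directly from the command line. A sketch, with “web” as a placeholder service name:

```shell
# Create a standalone service pinned to worker nodes only
docker service create \
  --name web \
  --replicas 3 \
  --constraint 'node.role == worker' \
  httpd:alpine
```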

Save the file by pressing Ctrl+O, then press Enter. Exit the text editor by pressing Ctrl+X.

Now that we have created and configured our Docker Compose file, let’s proceed to Step 3 — Deploying the stack to the Docker Swarm cluster.

Step 3: Deploy the Stack to Docker Swarm cluster

To deploy the stack to the Docker Swarm cluster, run the following command on the Docker Swarm manager EC2 Instance —

Note — The “-c” flag is used to specify the path to the Docker Compose file that defines the stack. The “3tier” argument specifies the name of the stack.

docker stack deploy -c docker-compose.yml 3tier

Run the following command to check the status of the services and verify that they are running —

docker service ls

You should be able to see all the services up and running!
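To see exactly which node each replica landed on, `docker service ps` is more informative than `docker service ls`. Stack services are named `<stack>_<service>`, so with the stack name “3tier” from the deploy command:

```shell
# Show where each replica of a given service was scheduled
docker service ps 3tier_redis
docker service ps 3tier_apache
docker service ps 3tier_postgres

# Or list every task in the stack at once
docker stack ps 3tier
```

The NODE column in the output should list only worker nodes, never the manager.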

Verify on the worker nodes that the services have been deployed successfully by running the following command, which lists all the containers running on each worker node —

docker ps

You should see the deployed services spread across the three worker nodes, as shown below.

Worker node 1 —

Worker node 2—

Worker node 3 —

Success!

You’ve deployed your first stack across a Docker Swarm cluster on EC2 Instances with a manager node and worker nodes!

Now, let’s head to the last step of our objectives, Step 4 — Ensuring that there aren’t any stacks running on the manager node.

Step 4: Ensure no stacks/services are running on the manager (administrative) node

To verify that there are no stacks running on the manager node, run the following commands to list all the services and images —

docker ps
docker images

Both commands should return empty results, confirming that no service containers are running and no service images have been pulled on the manager node.

Congratulations!

You’ve successfully completed “Swarming the Dock”. You’ve learned fundamental Docker Swarm commands and concepts by creating a cluster of nodes with one manager on Amazon EC2 Instances and deploying a stack with a Docker Compose file!

Clean up

This command removes the specified stack and all of its services, networks and volumes from the Swarm cluster.

docker stack rm <stack_name>

This command does not remove the Docker images used by the services in the stack. If you want to remove those images, run the following command on each worker node.

Note — This command will remove all images on the Docker host, not just those used by the stack.

docker image rm $(docker image ls -aq)
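Finally, if you want to dismantle the Swarm itself after removing the stack, each node can leave it. A sketch:

```shell
# On each worker node, leave the swarm
docker swarm leave

# On the manager node (the last node standing), force it to leave
docker swarm leave --force
```

Remember to also terminate the EC2 Instances in the AWS console to avoid ongoing charges.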

If you’ve made it this far, thanks for reading! I hope it was worthwhile to you.

Ifeanyi Otuonye is a Cloud/DevOps Engineer obsessed with cloud technologies and the DevOps culture. He is motivated by his eagerness to learn and develop, and thrives in collaborative environments. He has a background in Information Technology and Project Management and balances the life of being a Professional Athlete. Since the end of 2021, he has strategically embarked on the Cloud/DevOps Engineer journey through self-study and recently joined the Level Up In Tech program!


Cloud Engineer | DevOps. 5X AWS Certified. Professional Track and Field Athlete.