Utilizing AWS to Create a Docker Swarm

Donald Kish
Published in Nerd For Tech
9 min read · May 18, 2023

Howdy friends! In this article, we’ll guide you through the process of creating a Docker swarm using AWS. Docker swarm is a powerful feature of Docker that enables the orchestration and management of multiple Docker containers across a cluster of machines. By creating a Docker swarm, developers can efficiently distribute and scale their applications, ensuring high availability and fault tolerance.

Docker swarm allows you to create a cluster of Docker nodes, with one designated as the manager and others as worker nodes. The manager node acts as the control plane, coordinating the deployment and scaling of containers across the swarm. The worker nodes, on the other hand, execute the containers and distribute the workload.

In this tutorial, we’ll demonstrate how to create a Docker swarm on AWS, consisting of one manager and three worker nodes. We’ll explore the necessary steps, from setting up security groups and launching EC2 instances to installing Docker and initializing swarm mode. Furthermore, we’ll deploy a tiered architecture by creating services based on popular Docker images such as Redis, Apache, and Postgres. Let’s get started and unlock the full potential of Docker Swarm on AWS!

Some key terms:

Docker: a popular platform that allows developers to package, distribute, and run applications in containers

Docker Hub: a popular public repository where developers can share and distribute Docker images

Docker Image: the basis for a container; it includes all the necessary dependencies, configurations, and other resources required to run an application

Container: a lightweight and portable package containing software, libraries, etc., making it easy to deploy applications on different machines without worrying about the underlying system dependencies

Repositories: A repository contains your project files and their version history

Prerequisite Section

A command line interface

An AWS account

Docker Hub account

Deliverables:

1. Using AWS, create a Docker Swarm that consists of one manager and three worker nodes.

2. Verify the cluster is working by deploying the following tiered architecture:

a. a service based on the Redis docker image with 4 replicas

b. a service based on the Apache docker image with 10 replicas

c. a service based on the Postgres docker image with 1 replica

Steps

Step 1: Creating our Security Group

The first step in this journey is navigating to the EC2 dashboard. We can start by searching for EC2 in the search menu if it is not favorited or under recently visited.

We will select Security Groups from the dashboard on the left-hand side.

Next, we will select Create Security Group from the top right and enter a name for our security group > enter a description > select the default VPC or create one as needed. I’ll be using a previously created VPC.

Once those items are completed we will edit the inbound rules.

According to Bret Fisher’s Docker Guide, we will need the following ports open:

TCP port 2377 for cluster management & raft sync communications

TCP and UDP port 7946 for “control plane” gossip discovery communication between all nodes

UDP port 4789 for “data plane” VXLAN overlay network traffic

IP Protocol 50 (ESP) if you plan on using an overlay network with the encryption option

Once complete, the inbound rules should reflect the ports listed above.

Our outbound rules can remain empty and we do not need to add any tags. Once this page is complete we can select Create Security Group. With that being done our first security group is created.
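For anyone who prefers the CLI, the same security group could be sketched with the AWS CLI. The VPC ID, group ID, and CIDR below are placeholders; narrow the CIDR to your own VPC’s range.

```shell
# Create the manager security group (VPC ID is a placeholder)
aws ec2 create-security-group \
  --group-name swarm-manager-sg \
  --description "Docker Swarm manager ports" \
  --vpc-id vpc-0123456789abcdef0

# Open the swarm ports (replace the group ID with the one returned above)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 2377 --cidr 10.0.0.0/16   # cluster management
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 7946 --cidr 10.0.0.0/16   # gossip discovery (TCP)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol udp --port 7946 --cidr 10.0.0.0/16   # gossip discovery (UDP)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol udp --port 4789 --cidr 10.0.0.0/16   # VXLAN overlay traffic
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol 50 --cidr 10.0.0.0/16                # ESP, for encrypted overlays
```

These commands require live AWS credentials, so treat them as a sketch of what the console clicks are doing rather than a copy-paste recipe.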

Before we move on, we will need to run it back. We are going to repeat the previous steps to create a security group for our workers. The main difference is that the workers do not need TCP port 2377 open, since that port is only used for cluster management traffic to manager nodes; ports 7946 (TCP and UDP) and 4789 (UDP) must stay open on every node.

With the security groups created, we can move on to creating our EC2 Instances.

Step 2: Creating our instances

Per our directions, we need to create a Docker Swarm that consists of one manager and three worker nodes. Let’s circle back to the EC2 dashboard we visited earlier and locate the launch instance button.

Hit that big orange button

I’ll give the rundown of creating our instances.

Enter a name for our manager instance
Select a free tier eligible AMI (e.g., Amazon Linux)
Select a free tier eligible instance type (e.g., t2.micro)
Select a key pair or create one as needed

For our network settings, we will select our manager node security group.

The remaining items can stay at their defaults and we can launch our instance. We are going to repeat this step to launch three worker instances. The main differences are naming the instances Worker, selecting the worker security group, and updating the number of instances being launched from one to three in the top right-hand corner.

We can see that our instances are running.
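The same launches could also be scripted. The AMI ID, key name, and security-group IDs below are placeholders standing in for your own values.

```shell
# Launch the manager (AMI, key pair, and security-group IDs are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0manager123456789 \
  --count 1 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Manager}]'

# Launch the three workers with the worker security group
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0worker1234567890 \
  --count 3 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Worker}]'
```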

Step 3: Installing Docker on our Instances

Alright, team! Next up, we get to bust out the old command line and SSH into each of our instances to install Docker. With hindsight being 20/20 there was likely a faster way to do this, such as adding the commands to the instance’s user data before launching it. I will repent later.

As for now, we will open each instance individually > click Connect > select SSH client > copy the example > paste into your CLI. You will need to be in the same directory that your key pair is saved in. Additionally, this step is fastest if you open up separate tabs in your command line for each instance.

sudo yum update -y
#Performs update

sudo yum install docker -y
#Installs Docker

sudo systemctl enable docker
#Enable the Docker service to start automatically at system boot

sudo systemctl start docker
#Starts Docker

sudo docker version
#States the version of Docker

Let’s repeat this step for each of the worker nodes. Remember to keep the tabs open as we will need to input our join token once it is created by the manager node. Once that lengthy process is complete we can initiate swarm mode!
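Speaking of faster ways: the install steps above could instead be baked into EC2 user data so every node comes up Docker-ready at first boot. A sketch, assuming an Amazon Linux AMI (user data runs once as root, so no sudo is needed):

```shell
#!/bin/bash
# EC2 user-data sketch: runs once at first boot, as root
yum update -y
yum install -y docker
systemctl enable docker
systemctl start docker
```

Paste this into the “User data” field under Advanced details when launching the instances.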

Step 4: Initiate Swarm Mode

To activate swarm mode we will switch back to our manager node. Docker commands need elevated privileges. There are two options: we can add our user to the docker group, or we can run everything as root. Let’s go ahead and ignore the principle of least privilege and switch to the root account using the following command.

sudo su
#switches to root

Next, we will run the following command on the manager to create our swarm.

docker swarm init
#Creates the swarm and prints a join command containing a token

The output will include a join command similar to the one below, which we will copy.

docker swarm join --token SWMTKN-1-2lme0vp2fyvpv9pcvdvo46kcehbdy3fr67clx44sbwyzulib7f-1ghpcitzs0w164rsyx9yzf37f 10.0.1.233:2377

We will need to paste this token into each of our worker instances.

Let’s double check we can see all of our workers by checking with our manager instance.

docker node ls
Excellent

Awesome job team! We have created our swarm that consists of one manager and three workers.
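If you want a scriptable sanity check, you can count the rows that docker node ls prints. The sample output below is illustrative, not captured from a real cluster:

```shell
# Count nodes from `docker node ls`-style output
# (sample output shown for illustration; pipe in the real command on a manager)
sample_output='ID     HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
aaa111 manager1  Ready   Active        Leader
bbb222 worker1   Ready   Active
ccc333 worker2   Ready   Active
ddd444 worker3   Ready   Active'

# Subtract one for the header row
node_count=$(( $(printf '%s\n' "$sample_output" | wc -l) - 1 ))
echo "nodes: $node_count"   # → nodes: 4
```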


Step 5: Deploying Our Tiered Architecture

It was requested that we deploy a service based on the Redis Docker image with 4 replicas. So let’s head over to Docker Hub and search for the Redis Docker image. For clarity, a replica refers to an instance or copy of a service running on a node within a swarm cluster. Replicas allow you to scale your services horizontally by creating multiple copies of the same service. In this situation, we are creating four Redis replicas.

To deploy the Redis replicas we will switch to our manager instance and execute the following commands.

docker service create --name <service_name> --replicas <number> <image>:<version>

sudo docker service create --replicas 4 --name REDIS_4 redis:7.0.9
# docker service create: creates a new Docker service
# --name <service_name>: specifies the name of the service.
# --replicas: Specifies number of replicas (instances)
# <image>:<version>: specifies the Docker image and tag to use for the service

Looks like it was successful but let’s double-check.

docker service ps REDIS_4
#ps command allows you to view containers associated with a service
#In this case REDIS_4

They are all up and running!

Excellent!
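A replica count isn’t fixed at creation time, either: docker service scale adjusts a running service in place. A quick sketch, reusing our REDIS_4 service name:

```shell
# Scale the Redis service from 4 replicas up to 6
docker service scale REDIS_4=6

# Confirm the new task count, then scale back down
docker service ps REDIS_4
docker service scale REDIS_4=4
```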

Now we need a service based on the Apache Docker image with ten replicas. The Apache image (httpd) can also be found on Docker Hub. We will be using a similar command as before; we just need to change the name, the number of replicas, and the image/version.

sudo docker service create --replicas 10 --name APACHE_10 httpd:2.4.56

Let’s double check they are running.

docker service ps APACHE_10

It’s even easier the second time! Lastly, we have a service based on the Postgres Docker image with 1 replica, also found on Docker Hub. This command is unique as it requires a password.

sudo docker service create --replicas 1 --name SQL_1 -e POSTGRES_PASSWORD=password postgres:15.2
#remember to never use password as your password

Let’s double-check that all of our services are currently running.

sudo docker service ls
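As an aside, all three services could also be declared in a single Compose file and deployed as one stack, which is easier to version-control than three separate commands. A sketch, with file and stack names of my own choosing:

```shell
# Write a minimal stack file, then deploy all three services at once
cat > swarm-stack.yml <<'EOF'
version: "3.8"
services:
  redis:
    image: redis:7.0.9
    deploy:
      replicas: 4
  apache:
    image: httpd:2.4.56
    deploy:
      replicas: 10
  postgres:
    image: postgres:15.2
    environment:
      POSTGRES_PASSWORD: password   # never use "password" as your password
    deploy:
      replicas: 1
EOF

# Run on the manager node; services appear as demo-stack_redis, etc.
docker stack deploy -c swarm-stack.yml demo-stack
docker stack services demo-stack
```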

Just like that, we have created our swarm and all of our services are running at full force. This is all about learning and I’m sure there are more effective ways to complete these tasks, so I’m going to figure them out and follow up. But as for now, let’s tear down this operation to make sure we don’t get charged.

Step 6: Deconstruct

We are going to use the following command to vacate the swarm. Run it on each node; the --force flag is needed on the manager since it is the last one standing.

docker swarm leave --force
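A slightly more thorough teardown sketch: remove the services first so tasks stop cleanly, have each worker leave, then force the manager out and terminate the instances (the instance IDs below are placeholders):

```shell
# On the manager: remove the running services
docker service rm REDIS_4 APACHE_10 SQL_1

# On each worker: leave the swarm
docker swarm leave

# On the manager: the last node out needs --force
docker swarm leave --force

# From your workstation: terminate the instances (IDs are placeholders)
aws ec2 terminate-instances \
  --instance-ids i-0aaa111 i-0bbb222 i-0ccc333 i-0ddd444
```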

Let’s also go ahead and return to our EC2 dashboard > select all instances > terminate. Once that is done we are all set. All I have to say is, a job WHALE DONE! Get it, because Moby Dock is a whale. See y’all next time!

