Docker Swarm for Beginners: How to Create a Powerful Cluster on AWS EC2

Angelomarchaidez
6 min read · May 4, 2023


Welcome to this Docker Swarm tutorial, where we’ll show you how to create a swarm with one manager node and three worker nodes. In this tutorial, we’ll use Docker Swarm to host three popular services: Redis, Apache server, and a PostgreSQL database. All of this will be hosted by AWS EC2 instances. By the end of this tutorial, you’ll have a fully functional Docker Swarm setup that you can use for your own projects or to learn more about container orchestration with Docker. Let’s get started!

Here is what the tutorial will cover: we will create a Docker swarm using AWS EC2 instances as our host machines. The swarm consists of one manager and three worker nodes. We will then complete the following:

CREATE

  • a service based on the Redis docker image with 4 replicas
  • a service based on the Apache docker image with 10 replicas
  • a service based on the PostgreSQL docker image with 1 replica

Let's get started by creating the launch template for our EC2 instances. I am cutting corners here: for production, you should have two templates, one for the manager node and one for the worker nodes. We will use a single template since our EC2 instances will be torn down as soon as the project is complete.

Create our template

Let's create our template. The important part here is attaching a security group with the correct inbound rules for its ports. Also, don't forget to select "Enable" from the "Auto-assign public IP" drop-down.

Overview of our template

Here are my security group's inbound rules. This part took me a while, as I did not want to open the group to all inbound traffic. Again, in a production setting we should have two security groups: one for the worker nodes and one for the manager node.

Security group inbound rules
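For reference, the ports Docker Swarm itself needs open between nodes are: TCP 2377 for cluster management, TCP and UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic. Here is a sketch of those rules using the AWS CLI; the security group ID is a placeholder for your own, and you can just as easily add the same rules in the console:

```shell
# Inbound rules Docker Swarm needs between nodes.
# SG is a placeholder -- substitute your own security group ID.
SG=sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol tcp --port 2377 --source-group "$SG"   # cluster management
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol tcp --port 7946 --source-group "$SG"   # node communication
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol udp --port 7946 --source-group "$SG"
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol udp --port 4789 --source-group "$SG"   # overlay network traffic
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0         # SSH access
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0         # Apache test traffic
```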

And finally, here is the user data script to run on creation. The #!/bin/bash shebang runs the script in the Bash shell. Select "Create template" to complete the template. On first boot this updates our EC2 instance, installs Docker, and adds ec2-user to the docker group so it can run Docker commands without sudo.

#!/bin/bash

sudo yum update -y                  # -y so the update runs non-interactively
sudo yum install docker -y
sudo service docker start
sudo usermod -a -G docker ec2-user  # let ec2-user run docker without sudo
sudo chkconfig docker on            # start Docker automatically on boot
docker --version

Now we can create four EC2 instances, one will be the manager node and the other three will be the worker nodes.

Let’s launch the four instances from the template and we will label each one.

Launch our instances

Review, then select "Launch instance".

Overview of our template

Let’s launch and label each instance.

EC2 dashboard

Now let's SSH into the manager node and create the manager role by initializing our Docker swarm with:

docker swarm init 
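On an EC2 instance with a single private IP, the bare command above works as-is; if your instance has more than one address, you can pin the IP the swarm advertises to the other nodes (the private IP below is a placeholder for your manager's):

```shell
# Initialize the swarm, advertising the manager's private IP (placeholder)
docker swarm init --advertise-addr 172.31.0.10
```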

Let's SSH into all four EC2 instances; then we will grant the manager node its management role.

SSH into EC2

Now we can initialize our swarm manager node.

Swarm init

Now we will copy the join command so our worker nodes can join the swarm. I will only post one screenshot, but this process was repeated on each of the three worker nodes.

Join swarm from worker node
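If you lose the join command that `docker swarm init` prints, you can regenerate it on the manager at any time; the token and address below are placeholders for what your manager actually prints:

```shell
# On the manager: print the command workers must run to join the swarm
docker swarm join-token worker

# On each worker, paste the printed command, e.g.:
# docker swarm join --token SWMTKN-1-<token> <manager-private-ip>:2377
```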

Now let's check whether our nodes are up and, if so, create a service. In the screenshot: 1) we see all of the nodes listed, 2) we create a service from the Redis image, and 3) we see that the service is running with its four replicas.

create services
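The three numbered steps in the screenshot correspond to the commands below, run from the manager; the service name `redis` is my own choice, not a requirement:

```shell
# 1) Confirm all four nodes have joined the swarm
docker node ls

# 2) Create a service from the official Redis image with 4 replicas
docker service create --name redis --replicas 4 redis

# 3) Confirm the service reports 4/4 running replicas
docker service ls
```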

Now we will add a service for the Apache server with ten replicas and one for PostgreSQL with a single replica.

create services
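These two services look roughly like the commands below; the service names are my own, and publishing port 80 on the Apache service is what lets us reach it from a browser later. Note that the PostgreSQL command, run exactly like this, is the one that fails in the next step:

```shell
# Apache (httpd image), 10 replicas, port 80 published on the routing mesh
docker service create --name apache --replicas 10 --publish 80:80 httpd

# PostgreSQL, 1 replica -- this first attempt never starts (see below)
docker service create --name postgres --replicas 1 postgres
```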

I noticed, however, that our PostgreSQL service is not running. If we run the "docker inspect" command, we can see that it never initialized.

JSON of PostgreSQL

Let’s head over to the official PostgreSQL image site on docker hub.

To run PostgreSQL, we have to supply 1) the environment flag (-e) followed by 2) the actual variable, the required POSTGRES_PASSWORD.

Docker hub official PostgreSQL

Let's remove the service and recreate it.

Terminate service and rerun
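Concretely, the fix looks like this; the password value is a placeholder, so supply your own secret:

```shell
# Remove the failed service, then recreate it with the required
# POSTGRES_PASSWORD environment variable (placeholder value)
docker service rm postgres
docker service create --name postgres --replicas 1 \
  -e POSTGRES_PASSWORD=changeme postgres
```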

OK, now let's check whether our Apache server is working by visiting the manager node's public IP. Make sure you change the protocol from https to http, since we have not set up TLS.

public IP of our Manager node

You will be warned that the site is not secure; continue to the site anyway.

Apache server is up

Now we can tear everything down and stop our instances.

stop our services
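The teardown amounts to removing the services and, if you also want to dissolve the cluster, leaving the swarm; the service names here match the ones used earlier in this tutorial:

```shell
# Remove all three services (stops and deletes their containers)
docker service rm redis apache postgres

# Optional: dismantle the swarm itself.
# On each worker:
docker swarm leave
# On the manager:
docker swarm leave --force
```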

We can stop our instances and return to them for future projects.

EC2 dashboard

I also wrote a Lambda function to stop all instances in case I forget and end up with an AWS charge. We also have to allow this function to run longer than the default 3-second timeout; I changed it to a 1-minute runtime. Finally, I created a cron schedule with AWS EventBridge to run it on Fridays at 10 pm.

Script for stopping all instances in all regions
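The Lambda itself is a short Python script (shown in the screenshot). As a rough sketch of what it does, here is an equivalent loop using the AWS CLI, assuming your credentials can call EC2 in every region:

```shell
# Stop every running EC2 instance in every region
# (an AWS CLI equivalent of the Lambda's logic)
for region in $(aws ec2 describe-regions \
                  --query 'Regions[].RegionName' --output text); do
  ids=$(aws ec2 describe-instances --region "$region" \
          --filters Name=instance-state-name,Values=running \
          --query 'Reservations[].Instances[].InstanceId' --output text)
  if [ -n "$ids" ]; then
    aws ec2 stop-instances --region "$region" --instance-ids $ids
  fi
done
```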

Here is the schedule for the job. I changed the day to Wednesday (today) to test that it works; once confirmed, I will adjust the execution time.

Let's head over to CloudWatch and view the logs to see whether this worked.

CloudWatch logs of our Lambda Python script

Check back on our EC2 dashboard.

Lambda successfully triggered by cron job

Thank you for tuning in; I appreciate the read.


Angelomarchaidez

Love learning about technology. Experience with Java, Python, Bash scripting, C/C++, and FORTRAN 90 (I'm old). AWS DevOps Engineer, Terraform Associate, and Linux.