Use Docker Swarm to Relieve Worries of Data Loss

Jerry Karran
5 min read · Sep 7, 2023

Today we will help a company that uses containers but is having issues with containerized workloads failing.

Worried about potential data loss and with no way to reschedule failed containers, they want to update their infrastructure and take advantage of Docker Swarm.

With Docker Swarm, containers that land in an unhealthy state are automatically rescheduled, so the workload keeps running instead of being lost.

We will do the following:
1. Create 3 hosts with Docker pre-installed and using the same security group and key pair.
2. Verify Docker is in an active state.
3. Add the SSH key to each node.
4. Change the hostnames to identify the master node and the two worker nodes.
5. Create the swarm using one master node and 2 worker nodes.
6. Show the status of the Docker nodes.

Prerequisites:
AWS account with required EC2 privileges

Step 1: Launch Instances with Docker

Launch an Ubuntu instance and name it.

Choose the Ubuntu image.

Choose a Key Pair or create a new one.

Choose an existing security group, or create a new one.

Make sure you enable “All traffic” on all ports with the source set to your VPC CIDR.
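If you’d rather not open all traffic, Docker’s documentation lists the ports Swarm actually needs: 2377/tcp for cluster management, 7946/tcp and 7946/udp for node communication, and 4789/udp for overlay network traffic. As a rough sketch (the security group ID and CIDR below are placeholders for your own values), the rules could be added with the AWS CLI:

# placeholders: replace sg-XXXX and the CIDR with your security group and VPC CIDR
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol tcp --port 2377 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol tcp --port 7946 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol udp --port 7946 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-XXXX --protocol udp --port 4789 --cidr 172.31.0.0/16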

Under “Advanced details,” add the following “User data” to update Linux and install Docker.

#!/bin/bash
# created by Jerry Karran - LUIT-Red-Team-2023

# update packages (Ubuntu uses apt, not yum)
sudo apt-get update -y && sudo apt-get upgrade -y

# install docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Change the number of instances to 3, then click on “launch instance.”

You can rename your instances to easily differentiate them.
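The user data script runs on first boot through cloud-init, so it can take a minute or two to finish. If Docker turns out to be missing when you connect, you can check the script’s output in the standard Ubuntu cloud-init log:

# shows the tail end of the user data / cloud-init output
sudo tail -n 50 /var/log/cloud-init-output.log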

Step 2: Verify Docker is running

Connect to your instance and check to see the version of Docker installed.

docker --version
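To confirm the Docker daemon is actually running (not just installed), you can also ask systemd:

sudo systemctl is-active docker

# or, for full details
sudo systemctl status docker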

Update the hostname so this instance is identifiable as node1 (the master).

sudo hostnamectl set-hostname node1

You will need to disconnect and reconnect to see the name change.
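If you’d rather not reconnect, starting a fresh shell picks the new name up in your prompt, and hostnamectl will show it right away:

# replaces the current shell so the prompt reflects the new hostname
exec bash

# or simply confirm the static hostname
hostnamectl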

Step 3: Add SSH Key

In a new terminal window, navigate to the location of your .pem key pair file, then view it.

vi YOUR_KEY_PAIR_NAME.pem 

Copy the entire contents, then switch back to your SSH terminal and create a new file with exactly the same name as your key pair.

cd ~/.ssh
vi YOUR_KEY_PAIR_NAME.pem

Paste your key into the file, then save and exit by pressing the escape key, typing “:wq” and then pressing enter.

Get the exact location of your key file and copy it.

readlink -f YOUR_KEY_PAIR_NAME.pem

Create your config file.

vi config

Set it up. Note the “IdentityFile” is the location we just copied.

Host node2
    HostName PrivateIP
    User ubuntu
    IdentityFile /home/ubuntu/.ssh/YOUR_KEY_PAIR_NAME.pem

Host node3
    HostName PrivateIP
    User ubuntu
    IdentityFile /home/ubuntu/.ssh/YOUR_KEY_PAIR_NAME.pem

To get the “PrivateIP,” open each instance in the EC2 console and copy its private IPv4 address.

Update the “PrivateIP” for both instances with the correct values, save, and exit.

Set restrictive permissions on the key file so SSH will accept it (SSH refuses private keys that other users can read).

chmod 400 YOUR_KEY_PAIR_NAME.pem
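You can quickly confirm the config and key are working by running a one-off command over SSH; it should print node2’s current (still default) hostname:

ssh node2 hostname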

Step 4: Update Host Names

Connect to node2, change the name, and then exit.

ssh node2
sudo hostnamectl set-hostname node2
exit

Do the same for node3.

ssh node3
sudo hostnamectl set-hostname node3
exit

If you connect to either one again, you will notice the name change is in effect.

Step 5: Create a Swarm

Open 2 more terminal windows that are connected to node1.

SSH into node2 in one window and node3 in the other.

In the node1 window, start the swarm.

sudo docker swarm init
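If Docker complains that the node has multiple addresses, you can tell it which one to advertise; NODE1_PRIVATE_IP below is a placeholder for node1’s private IP.

sudo docker swarm init --advertise-addr NODE1_PRIVATE_IP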

Copy the swarm join command from the output and run it on node2, adding sudo at the front. Your command will have its own token and manager IP.

sudo docker swarm join --token YOUR_JOIN_TOKEN 172.31.44.165:2377

Repeat this for node3.
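If you lose the join command, you can print it again at any time from node1:

sudo docker swarm join-token worker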

Step 6: Check the Status of the Docker Nodes

Go back to node1 and check the status.

sudo docker node ls
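Assuming both workers joined successfully, the output should look roughly like this (IDs trimmed, your values will differ), with node1 as the Leader and all three nodes Ready:

ID             HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
xxxxxxxxxxxx * node1      Ready    Active         Leader
xxxxxxxxxxxx   node2      Ready    Active
xxxxxxxxxxxx   node3      Ready    Active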

Advanced: Create a Service and Launch Replicas

First, let’s run an alpine image.

sudo docker service create --replicas 1 alpine ping 8.8.8.8

Verify it’s created and working.

sudo docker service ls
sudo docker service ps SERVICE_ID

Now scale up to 3 replicas.

sudo docker service scale SERVICE_ID=3

Verify it’s being scaled correctly.

sudo docker service ls
sudo docker service ps SERVICE_ID

You’ve just created a 3-node Docker Swarm with 1 master node and 2 worker nodes.

You then initially deployed the Alpine service on the Swarm with 1 replica, updated it to 3 replicas, and verified it scaled up as expected!
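That rescheduling behavior is exactly what eases the data-loss worry from the start of this article. To see it in action, you can simulate a failure on one of the workers and watch Swarm start a replacement task; this is a rough sketch, and your container and service IDs will differ.

# on node2 or node3: find the running replica and stop it to simulate a failure
sudo docker ps
sudo docker stop CONTAINER_ID

# back on node1: the failed task is listed alongside the replacement Swarm started
sudo docker service ps SERVICE_ID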

Don’t forget to clean up and stop any testing instances.

sudo docker service rm SERVICE_ID

Complex Bonus: Deploy your application via a Stack to your Swarm

Let’s try deploying an application using stacks.

vi deploy_stack_jk.yml

Complete our stack file to run our app with 3 replicas.

version: "3"
services:

myapp:
image: nginx:latest
ports:
- 80:80
deploy:
replicas: 3
restart_policy:
condition: on-failure
sudo docker stack deploy -c deploy_stack_jk.yml swarm_jk

Verify it’s running properly.

sudo docker service ls
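You can also scope the view to just this stack, either per service or per task:

sudo docker stack services swarm_jk
sudo docker stack ps swarm_jk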

Try editing the stack file to change the number of replicas, then redeploy.

sudo docker stack deploy -c deploy_stack_jk.yml swarm_jk
sudo docker service ls

Like magic, it updates almost immediately.

Don’t forget to clean up when finished. Since this was deployed as a stack, remove the whole stack by name.

sudo docker stack rm swarm_jk

Thanks for following along.

Feel free to follow me at:
https://medium.com/@jerrykarran

Connect with me on LinkedIn:
https://www.linkedin.com/in/jerry-karran/
