Creating & Initializing a Docker Swarm

Donica Briggs
8 min read · Aug 9, 2023


In this project, using Visual Studio Code (VS Code), I set up a 3-node Docker Swarm consisting of 1 Manager Node and 2 Worker Nodes. After initializing the swarm, I created a service from an official Ubuntu image, then scaled the application up by creating several replicas.

Before we get started, let’s take a quick look at some key applications and features used:

  • An EC2 instance is a virtual server in Amazon’s Elastic Compute Cloud (EC2), the service for running applications on AWS infrastructure; you can launch as many instances as your workload requires, each operating as an independent virtual machine (VM).
  • Visual Studio Code is a source code editor designed for the everyday use of developers. It supports customization and offers shortcuts to save time and improve productivity.
  • Docker is an open-source platform that automates the deployment, scaling and management of applications by packaging them into lightweight containers, enabling developers to build, ship and run applications consistently across environments.
  • A Docker node is an instance of the Docker engine participating in a swarm.
  • A Docker swarm is a group of physical or virtual machines running Docker that are joined together as a cluster.

Okay, let’s get started!

Step 1: Change Manager Node Host Name

Using Visual Studio Code as the Integrated Development Environment (IDE) on my Mac, I ran the command “sudo hostnamectl set-hostname managernode” to change the hostname of a previously created node.
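sudo hostnamectl set-hostname managernode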

I then ran “exit” to close the session; upon reconnecting, my terminal reflected the new hostname “managernode.”

Step 2: Update Manager Node Security Group

In AWS, I opened the EC2 instance backing my “managernode” by clicking its “Instance ID.”

Once inside the “Instance Summary,” I was able to see details such as the “VPC ID.”

Every machine, server and end-user device that connects to the Internet has a unique number called an Internet Protocol (IP) address. Classless Inter-Domain Routing (CIDR) is an IP address allocation method used to improve the efficiency of routing on the Internet. Selecting the “VPC ID,” I was able to view the “IPv4 CIDR” block attached to the VPC.

Navigating to “Security Groups” on the left-hand side, I confirmed that the Security Group of my “managernode” VPC was “DockerSG.”

After selecting the Security Group “DockerSG,” I clicked “Actions” on the top right-hand side, then “Edit Inbound Rules.” Here, I selected “Add Rule,” chose “All TCP” as my “Type,” set “Custom” as my “Source,” and copied and pasted in the “IPv4 CIDR” of my VPC.
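For reference, the same inbound rule can be added with the AWS CLI; the security group ID and CIDR below are placeholders for your own values:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 0-65535 \
    --cidr 172.31.0.0/16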

Returning to my “managernode” instance, I scrolled down, selected the “Tags” tab and updated the “Value” of my node’s Name tag.

Step 3: Create Worker Nodes

Next, I launched 2 additional EC2 instances, naming one “workernode2” and the other “workernode3.” For both I selected “Ubuntu Server 22.04 LTS” as my Amazon Machine Image (AMI).

I then selected “t2.micro” as my free-tier instance type, and “DockerKeyPair,” the same Key Pair used by my “managernode” instance.

In the “Network Settings” section, I selected “Select Existing Security Group,” choosing “DockerSG,” the same Security Group used by my “managernode.”

Under “Advanced Details,” in the “User Data” section, I inserted a bash script to automatically install Docker inside each instance on first boot.
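The script itself was captured as a screenshot in the original post; a minimal sketch of the idea, assuming the standard docker.io package from the Ubuntu repositories, looks like this:

#!/bin/bash
# Runs as root on first boot via EC2 User Data
apt-get update -y
apt-get install -y docker.io     # install Docker from the Ubuntu repositories
systemctl enable --now docker    # start Docker and enable it at boot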

After selecting “Launch Instance,” I was able to confirm that both my “workernode2” and “workernode3” instances were successfully created.

Step 4: Create an SSH Configuration File

Returning to my VS Code terminal, I began to build the SSH (Secure Shell) configuration file. Organized into per-host stanzas, a configuration file lets you connect to servers automatically using pre-configured settings instead of retyping the full connection details each time.

From my “managernode,” I first ran “cd .ssh” to access the SSH directory. Once inside, I ran “touch config” to create a file named “config,” then “ls” to list the contents of the directory and verify the file was successfully created. Finally, I ran “vim config” to open and edit the file using the Vim text editor.
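The full sequence:

cd .ssh
touch config
ls
vim config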

My configuration file required specific data such as the “Host,” “HostName,” “User” and “IdentityFile.”

Host
HostName
User
IdentityFile

Inside the configuration file, I entered the instance names as the “Host” and “ubuntu” as the “User.” To retrieve the “HostName” values, I returned to AWS; on the “Details” tab of my “workernode2” and “workernode3” instances, I located their “Private IPv4 Addresses” and copied and pasted them into my configuration file.

For the “IdentityFile,” I created a Key Pair file by running “touch DockerKeyPair.pem.” I then navigated to the location of my downloaded Key Pair file on my hard drive and opened it in a text editor (I have a Mac, so I used TextEdit), copying its contents in full.

After running “vim DockerKeyPair.pem” to open the new file for editing, I pasted in the text.

I ran the command “chmod 400 DockerKeyPair.pem” to adjust the file permissions to read-only for the owner; SSH refuses to use private keys with looser permissions.

chmod 400 DockerKeyPair.pem

I then ran the command “readlink -f DockerKeyPair.pem” to retrieve the file path of my Key Pair file.
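readlink -f DockerKeyPair.pem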

After receiving the file path “/home/ubuntu/.ssh/DockerKeyPair.pem,” I copied and pasted it into my configuration file as the “IdentityFile” for both nodes.
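Putting it all together, each stanza in my config file looked like the sketch below; the private IP address shown is a placeholder for the one copied from AWS.

Host workernode2
    HostName 172.31.0.10
    User ubuntu
    IdentityFile /home/ubuntu/.ssh/DockerKeyPair.pem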

Step 5: Test Node Connectivity

Inside my VS Code terminal, I ran “cd ~” to return to the home directory. I then attempted to connect to the first node by running “ssh workernode2,” and received the host authenticity/fingerprint prompt, confirming connectivity.

Noting that the terminal prompt had changed from “managernode” to the IP address of “workernode2,” I ran the command “sudo hostnamectl set-hostname workernode2” and then “exit,” so that on reconnecting the prompt would clearly indicate which node I was in.

Opening an additional terminal window, I then connected to “workernode3.” Next, I ran the command “sudo docker version” on all 3 nodes to confirm Docker was properly installed and running.

Step 6: Initiate the Swarm

In order to activate the swarm, inside the “managernode” terminal, I copied and pasted the private IP address of the “managernode” into the command “sudo docker swarm init --advertise-addr <PrivateIP>.” This designated that node as the Manager Node of the Docker Swarm.
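sudo docker swarm init --advertise-addr <PrivateIP>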

After receiving a message confirming “managernode” was the Manager, the next output from the swarm initiation command was a join token; this enabled “workernode2” and “workernode3” to join the swarm. On each worker node, I simply copied, pasted and ran the join command shown below.
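The join command follows the format below; the token and IP address are placeholders, as the real values come from the init output on your own manager:

sudo docker swarm join --token SWMTKN-1-<token> <ManagerPrivateIP>:2377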

Once each node successfully joined the swarm, I received a message that read “This node joined a swarm as a worker.”

To confirm availability and the status of each node, I ran the command “sudo docker node ls.” Here I was able to see that all 3 nodes were active, and the Manager Node was assigned “Leader” status.
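The output resembles the following (the IDs and values shown here are illustrative; the asterisk marks the node you are connected to):

sudo docker node ls
ID              HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS
abc123... *     managernode    Ready    Active         Leader
def456...       workernode2    Ready    Active
ghi789...       workernode3    Ready    Active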

Step 7: Create a Service, Replicate & Scale It Upwards

Inside the Manager Node, I ran the below command to deploy a service using an official Ubuntu image, creating 3 replicas.

sudo docker service create --replicas 3 --name hello_world ubuntu ping docker.com

I then ran the command “sudo docker service ls” to verify that the service was up and running.

Next, running the command “sudo docker service inspect --pretty hello_world,” I was able to view details of the service in a human-readable format.
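sudo docker service inspect --pretty hello_world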

I then scaled the service up by running the command “sudo docker service scale hello_world=4,” which sets the total replica count to 4 (one more than the original 3).
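sudo docker service scale hello_world=4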

To see which node each replica task was running on, I ran the command “sudo docker service ps hello_world.”
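sudo docker service ps hello_world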

If you enjoyed this article or found it useful, please follow, like and share.

Thanks for joining me, and stay tuned for the next project!
