Making a Splash with Your First Docker Swarm

A Beginner’s Guide to Creating a Docker Swarm with MySQL & WordPress

Melissa (Mel) Foster
Women in Technology
11 min read · Jun 21, 2023



I come from a management background. I am used to orchestrating all the things that make a retail store come together and run efficiently. Having this business mindset is an asset when transitioning to tech: it helps me understand why a business would choose a path toward more efficient and affordable ways to operate. However, I was quite intimidated by the idea of creating my first Docker Swarm. I am hoping that by the end of this walk-through, you and I are both feeling confident in our growth and skills.

A little background //

As I discussed in a previous article, Docker is an open platform designed to build, share, and run applications. Docker eliminates repetitive configuration tasks, which allows for quicker, more efficient development. Docker achieves this by packaging applications into Docker containers.

A Docker container is the packaged software, bundled together with everything it needs to run properly.

A Docker image: think of it as a snapshot, a read-only template that captures a container's contents at a specific point and can be used to start new containers.

Docker Swarm is a container orchestration tool built into Docker. A swarm is a group of physical or virtual machines running Docker that are configured to join together into a cluster. All activities are controlled by a swarm manager, and the rest of the machines joined to the cluster are worker nodes.

A way to look at Docker vs Docker Swarm using the analogy of an Orchestra:
Docker: A musician, who has all they need (instrument, sheet music) to play their part in a symphony.
Docker Swarm: The conductor, who ensures all musicians play in harmony across multiple performances, balancing the sound, and stepping in if a musician cannot perform.

Scenario //

An online publication posts blogs about the hottest updates and trends in IT. Their reader base is rapidly growing, creating a need to adapt quickly to changing customer demands. In a business landscape that is increasingly digital, efficient and flexible web infrastructure is not just a competitive advantage but a necessity.

Objective //

To provide uninterrupted service availability as customer demand changes, we will create a Docker Swarm to manage all of their requirements effectively and affordably.

  • Deploy a 3-node Docker Swarm cluster
  • Create two networks
  • Create MySQL service
  • Create the WordPress Service

To follow along with this project you will need //

  • Access to AWS
  • Visual Studio Code installed on OS
  • Attention to detail

Remember, take your time. Troubleshooting is just part of the process.

Deploying a 3-node Docker Swarm Cluster

Step 1: Create three EC2 Instances //

Let's begin by logging into our AWS (Amazon Web Services) account. From your recently visited services or the search bar, navigate to the EC2 Dashboard. Our first step in creating a Docker Swarm is to create our EC2 instances; we will need three.

Select Launch Instance
  • Name your EC2 instance
    Since we are creating three nodes total, I started by naming my first EC2: node1.
  • Choose Application and OS
    We will be using the free tier eligible, Ubuntu Server 22.04 LTS.
  • Instance type
    Select t2.micro (free tier eligible)
  • Key pair
    Create a new key pair with an RSA key pair type and .pem private key file format.
Example name: DockerSwarm. Note: Make sure to save it to a folder on your OS, or confirm it is in your Downloads folder.
  • Create a Security Group: Select Edit

The Network Settings section will now open up so that you can add custom rules. We will be working with custom Inbound rules.

  • SSH traffic from your IP address
  • All traffic from Anywhere
  • Add the following Ports:
    2377, 7946, 2376, and 4789,
    each as a Custom TCP rule sourced from your VPC CIDR block. (If you prefer the command line, there is a sketch of the equivalent AWS CLI calls just after this list.)
Note: your CIDR block can be found at the top of your VPC details page.
  • Select Launch Instance once you have all your Ports set up.
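If you prefer the command line, here is a rough AWS CLI equivalent of those four inbound rules. It is only a sketch: the security group ID and CIDR block below are placeholders you would swap for your own values.

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2377 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7946 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2376 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4789 --cidr 172.31.0.0/16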

Yay, one down! Two more to go! We are going to repeat the process to create node2 & node3, using the newly created .pem key and our new security group. After you have completed the steps, you should see your three newly created instances running on your EC2 Dashboard.

Filtered to show Running Instances

Success! If you are ready to keep going, our next steps will be utilizing Visual Studio Code. If you do not have VS Code installed you can refer to this article with great resources to help get you ready for the rest of this project.

Note: If you plan on stopping at this point, make sure to stop your instances. (You can restart them when you are ready to move on.)

Step 2: In Visual Studio Code connect to node1 //

Once VS Code is launched, you will be at a Welcome screen. In the bottom left-hand corner, there should be an icon with two arrows. Click on the icon and it will open a new remote window.

Click Icon to open remote window
  • Select Connect to Host…
  • Select Configure SSH Hosts…
  • Select the first path: C:\Users\yourusername\.ssh\config

The .ssh configuration file will now open. We will configure it for our node1. (I recommend having a tab open with your AWS account pulled up, ready to access node information, or a split screen so you have it side-by-side.)

Note: Be aware of case sensitivity, spelling, and punctuation. Even an extra space at the end can cause connectivity issues.

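The entry tells SSH how to reach node1: a nickname (Host), the instance's public IPv4 address (HostName), the default Ubuntu user, and the path to your key. As a rough sketch (the IP and key path below are placeholders you will replace with your own node1 values):

Host node1
  HostName <node1 public IPv4 address>
  User ubuntu
  IdentityFile C:\Users\yourusername\Downloads\DockerSwarm.pem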

Once everything is updated, save the file and close it. Navigate back to the lower left-hand blue icon and click it once more. Follow the same steps as before, except instead of configuring, you should now see node1 available to SSH into.

  • Select node1 and a new remote window will open.
  • Select Linux as we are using Ubuntu as our OS.

After you are connected, you should be brought to a Welcome screen. You can see that the lower left hand icon now has SSH: node1. This will help to ensure that you are using the correct screen.

Note: If you haven't used VS Code to remotely connect this way before, you will not have any Recent entries.
  • Open Terminal

For additional clarity, we will reset the hostname, which currently shows as ubuntu@ipaddress in the prompt, so that it contains the name of the node.

  • Run this command
sudo hostnamectl set-hostname node1 

Then close the Terminal, open a new one, and you should see the change.

Success!

With our new node1 Terminal we can now install Docker:

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

To verify the install, we can run:

sudo docker version
Nice!!

Another way to verify that node1 has Docker installed is to run a simple hello world test.

sudo docker run hello-world
Love seeing that little Hello!
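If the test works, the output begins with something like this (trimmed for space):

Hello from Docker!
This message shows that your installation appears to be working correctly.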

Our final step for node1 is to add an SSH key so node1 can reach the other nodes. In your node1 Terminal run

sudo -s

We are now operating as the root user and have access to the /root directory, where our .ssh folder lives. Inside of our .ssh folder we will create two files to set us up for success with our node2 & node3.

cd /root
cd .ssh

We need to create a file using the vim editor and name it exactly what our key pair is named. Running the command below will open the vim editor and name the file.

vim DockerSwarm.pem

Here we will hit "i" to enter insert mode so we can paste our entire .pem file.

The easiest way is to copy your key pair into the vim editor:

  • Locate it on your OS
  • Open the .pem file using a notepad
  • Copy all of the text
  • Navigate back to the open vim editor and paste.
  • Once pasted, hit the ESC key, then type :wq to save and exit.
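For reference, the pasted contents should look roughly like this; the middle is many lines of key material, and of course you should never share your real key:

-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA... (many more lines of key text)
-----END RSA PRIVATE KEY-----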

Next we will create another vim file, calling it config, with the following info for your EC2 node2 & node3 (use each node's private IPv4 address from the EC2 console in place of PrivateIP):

Host node2
HostName PrivateIP
User ubuntu
IdentityFile /root/.ssh/yourkey.pem

Host node3
HostName PrivateIP
User ubuntu
IdentityFile /root/.ssh/yourkey.pem
  • Once entered, hit the ESC key, then type :wq to save and exit.

To lock down our key pair so that only the owner can read it (SSH will refuse a private key that is readable by others), run:

chmod 400 DockerSwarm.pem

I know that seemed like a lot of steps! You did great!! If you need to pause here and come back, I get it. Learning a new skill takes time. Just remember to stop any running EC2 Instances.

Step 3: In Visual Studio Code, configure node2 & node3 //

In our node1 Terminal, under our .ssh directory (as root), we can now ssh into node2 to install Docker.

ssh node2

Nice, you can see that we are still connected through node1 in the lower left-hand corner, but the prompt now shows ubuntu@ followed by node2's IP address. We can now update the hostname and install Docker. After Docker is installed, close the Terminal.

Open a new one and ssh back into node2 to confirm the updated name and Docker install. Great job!! Using the Split Screen option under Terminal, you can repeat the process by sshing into node3.
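In case a recap helps, here is a condensed sketch of the commands to run once you are inside node2 (the same idea applies to node3, just swap the hostname):

sudo hostnamectl set-hostname node2
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
sudo docker version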

Success!!!

It has to put a smile on your face that you made it this far! I know it seems a bit confusing. The more you practice the more it makes sense. Onward to joining the nodes together and creating our swarm!!

Step 4: Joining the nodes to create our swarm //

Woohoo! It’s finally time to create our swarm. Open up a new Terminal for node1. We want to set up node1 as our manager by running:

sudo docker swarm init
Success! node1 is now a manager!!
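If you are following along without my screenshot, the init output looks roughly like this; your node ID, token, and IP will differ:

Swarm initialized: current node (<node1-id>) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-<long-token> <node1-private-ip>:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.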

Docker has provided us the join instructions in the output above. To add node2 as a worker, we need to ssh into node2.

  • Once in, type sudo, then copy and paste the docker swarm join command in full.
Note: on my first attempt to paste in the docker swarm join --token command, I forgot to add sudo. After adding sudo in front of the entire command, it ran perfectly and node2 was added as a worker to the swarm.
  • Repeat the steps for node3
node3 is now a Worker!
  • Verify our Docker nodes
sudo docker node ls
Success!! Our 3-node Docker Swarm is now deployed!!
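For reference, the output should look something like this, with node1 listed as the Leader (IDs shortened here):

ID             HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123... *    node1      Ready    Active         Leader
def456...      node2      Ready    Active
ghi789...      node3      Ready    Active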

Wow!! You deserve a high five!! Again, if you are going to pause this walk-through, make sure to stop your EC2s. If you are ready to move on with me, stand up, do a quick little stretch, take a deep breath, and let's move on to our second objective.

Creating Two Networks //

You still with me? Let’s continue to set you up for success! With your Docker Swarm running we can knock out the next objective of creating our two networks, with just two commands!

Working on node1 run this command to create a network overlay for our Front End:

sudo docker network create -d overlay frontend

And repeat with this command to create our Back End:

sudo docker network create -d overlay backend

We can verify our networks by running:

sudo docker network ls
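Alongside Docker's default networks (bridge, ingress, and so on), you should see your two new overlays listed, roughly like this:

NETWORK ID     NAME       DRIVER    SCOPE
xxxxxxxxxxxx   backend    overlay   swarm
xxxxxxxxxxxx   frontend   overlay   swarm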

Awesome! Next, we will be able to create our MySQL service & WordPress service using the networks we just created!

Creating Services //

Continuing in node1, we will create our MySQL service and associate it with our backend network. Our goal is to have 1 replica and to use a Docker volume so the data persists across container restarts.

sudo docker service create --name mysql --network backend --replicas 1 --mount type=volume,source=mysql_data,destination=/var/lib/mysql -e MYSQL_ROOT_PASSWORD=melf -e MYSQL_DATABASE=wordpress mysql:latest

Nice! We can do a quick verify if you are like me and want to be sure it was truly created.

sudo docker service ls

Let's create our WordPress service. Our goal here is to attach it to our frontend network, and also to the backend network along with the database connection details we set above, so it can actually reach MySQL. We will publish port 80, making it accessible from the browser, and run 3 replicas.

sudo docker service create --name wordpress --network frontend --network backend --replicas 3 -p 80:80 --mount type=volume,source=wordpress-data,target=/var/www/html -e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=melf -e WORDPRESS_DB_NAME=wordpress wordpress:latest

To verify all of the services we created, let's run the sudo docker service ls command once again.
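You should see both services with all replicas running, roughly like this (IDs shortened):

ID             NAME        MODE         REPLICAS   IMAGE              PORTS
xxxxxxxxxxxx   mysql       replicated   1/1        mysql:latest
xxxxxxxxxxxx   wordpress   replicated   3/3        wordpress:latest   *:80->80/tcp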

Now, we should be able to view WordPress in our browser. To do so, we will need to visit our instance's public IP on port 80: <instance public IP>:80.

Select Language Preference

Once you have selected your language preference, you should see the WordPress Welcome Page!! High five!! You did great!

Success!!

Clean Up Time //

I added this section in case you ever need to remove the networks or services you created.

To remove our services:

sudo docker service rm mysql wordpress

To remove our networks:

sudo docker network rm frontend backend
Woo-hoo!!

Loads of steps, but you did an awesome job completing a foundational walk-through of creating your first Docker Swarm, along with creating services attached to your networks. Remember to troubleshoot any errors you receive along the way, and partner with a friend!

Tip //

  • Stop/terminate any EC2s you no longer need for demo or practice


Melissa (Mel) Foster
Women in Technology

𝔻𝕖𝕧𝕆𝕡𝕤 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿 |𝒲𝑜𝓂𝑒𝓃 𝐼𝓃 𝒯𝑒𝒸𝒽 𝒜𝒹𝓋𝑜𝒸𝒶𝓉𝑒 | 𝚂𝚘𝚌𝚒𝚊𝚕 𝙼𝚎𝚍𝚒𝚊 𝙲𝚛𝚎𝚊𝚝𝚘𝚛 | Photographer