Building a Scalable Infrastructure with AWS and Docker Swarm

Terminals & Coffee
8 min read · May 2, 2023


Deploying Redis, Apache, and Postgres in a Tiered Architecture

https://www.educba.com/docker-swarm-architecture/

We are back with another week of building out our DevOps portfolio. This week we are challenged with the following:

Using AWS, create a Docker Swarm that consists of one manager and three worker nodes.

Verify the cluster is working by deploying the following tiered architecture:

  • a service based on the Redis docker image with 4 replicas
  • a service based on the Apache docker image with 10 replicas
  • a service based on the Postgres docker image with 1 replica

The Redis service provides in-memory data storage and caching, the Apache service serves web pages and runs applications, and the Postgres service provides a relational database for storing application data.

Last week we got to learn some of the foundations of Docker, and this week we are diving into more advanced topics within Docker.

So, what exactly is Docker Swarm? Docker Swarm is Docker's native container orchestration tool, similar in purpose to Kubernetes: it groups multiple Docker hosts into a cluster and schedules containers across them.

https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/

This ‘cheat sheet’ is a great resource for explaining Docker commands with definitions.

Let’s swarm into it! From our AWS home console, let us select EC2 and click the launch button.

Choose an Amazon Machine Image (AMI) that supports Docker, such as Amazon Linux 2, as I have done for all my past projects.

Keep the instance type as a t2.micro. Then create a new Key pair, because we will be SSHing into the manager instance.

In Network settings, select Edit, then add security group rules. Create a new security group that allows incoming traffic on the ports used by Docker Swarm: 2377/tcp (cluster management), 7946/tcp and 7946/udp (node-to-node communication), and 4789/udp (overlay network traffic). For the source we will allow all IP addresses (0.0.0.0/0). You can refer to Bret Fisher's cheat sheet. ⤵️

Note that you can also create the SG’s beforehand in the VPC console and then assign the SGs while creating the EC2 instances.

We also want to enable a public IP, at least for the manager, since we want to SSH into it from outside the network.

Swarm Manager Inbound Rules
Swarm Workers Inbound Rules

We will create one EC2 instance for the swarm manager. Then create three EC2 instances at once when we create the instances for the swarm workers.

Before clicking launch, we can add a bash script to bootstrap to the instances.

This script updates the instance, installs and starts Docker, and adds the ec2-user to the docker group so we can run Docker commands without sudo. It then configures Docker to start automatically when the instance boots and installs docker-compose. I decided to add a last line that initializes the Swarm using the internal IP address of the manager node.

Note: You are going to want to remove line 8 if you are going to bootstrap this script for the worker instances.
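As a sketch, assuming Amazon Linux 2, the user-data script could look something like this (the manager's private IP at the end is a placeholder you would replace with your own):

```bash
#!/bin/bash
# Update packages and install Docker (Amazon Linux 2)
yum update -y
yum install docker -y
systemctl start docker
# Allow ec2-user to run docker commands without sudo
usermod -a -G docker ec2-user
# Start Docker automatically on boot
systemctl enable docker
# Install docker-compose
curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# Manager only: initialize the Swarm on the private IP (remove this line on workers)
docker swarm init --advertise-addr 10.0.0.10
```

The worker instances get the same script minus that final `docker swarm init` line.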

Now we can repeat the same steps, we just have to be sure to edit ‘Number of instances’ to 3 and remove the last line of the script.

Once that is complete, we can review our instances in the EC2 console.

😎

Now we are able to SSH into the manager instance. Select the manager node > Connect > copy the SSH command under SSH Client.

Be sure to cd into the directory where your PEM key is located.

Our first command will be:

docker swarm join-token worker -q 

This provides us with the token we will need to provide to our worker nodes.

Next, we will SSH into all three worker nodes at the same time and run the following command.

docker swarm join --token <token> <manager-node-IP>:2377
Repeat for all three worker nodes.
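If you would rather not assemble the join command yourself, running the same subcommand on the manager without the `-q` flag prints the complete command, ready to paste into each worker:

```shell
# On the manager: print the full join command for workers
docker swarm join-token worker
# On each worker: paste the printed command, which has the form
# docker swarm join --token SWMTKN-1-<token> <manager-private-ip>:2377
```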

We can verify that all nodes are part of the Swarm by running the following command on the manager node:

docker node ls 
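If everything joined correctly, the output should look roughly like this (IDs and hostnames will differ; the asterisk marks the node you are running the command from):

```shell
docker node ls
# ID             HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
# abc123... *    manager    Ready    Active         Leader
# def456...      worker1    Ready    Active
# ghi789...      worker2    Ready    Active
# jkl012...      worker3    Ready    Active
```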

Now that our Swarm shows it is up and running, we need to verify the cluster is working by deploying the tiered architecture.

We can do this by creating three services based on the Redis, Apache, and Postgres Docker images, per our instructions. Remember, we can use our docker command cheat sheet that was referenced in the beginning of the blog.

To create the Redis service with four replicas, run the following command:

docker service create --name redis --replicas 4 redis:latest

Apache

docker service create --name apache --replicas 10 httpd

Postgres

docker service create --name postgres --replicas 1 postgres:latest

Note that you can name the service whatever you wish.
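To check that all three services have converged to their desired replica counts, we can list the services and see where the tasks landed:

```shell
# List services with desired vs. running replica counts
docker service ls
# Show which node each Redis task is running on
docker service ps redis
```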

My initial thought was to deploy one service per worker node; however, I received an error on the first worker. Service-level commands can only be run from a manager node, not from workers.

Simple enough, back to the manager to deploy all three.

All went well, except for one… hmm…🧐 🤔

I waited a few minutes to see if the Postgres service would ‘self heal’ and it did not.

Ahh yes, my favorite troubleshooting step: ‘unplugging it and plugging it back in.’ I ran a command to remove the service and added it right back.

I googled around for a different command, thinking there might have been something wrong with my previous one.
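As a sketch of that remove-and-recreate step: one common reason a Postgres service never reaches a running state is that the official postgres image refuses to start without a POSTGRES_PASSWORD environment variable (or an explicit opt-out). The password below is a placeholder:

```shell
# Remove the failing service
docker service rm postgres
# Recreate it, supplying the password the postgres image requires
docker service create --name postgres --replicas 1 \
  -e POSTGRES_PASSWORD=changeme postgres:latest
```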

That completes the foundational part of this week's project! To summarize, we used AWS to create a Docker Swarm cluster with one manager node and three worker nodes. We then verified that the cluster was working by deploying a tiered architecture consisting of three services: Redis with 4 replicas, Apache with 10 replicas, and Postgres with 1 replica.

By deploying these services in a Swarm cluster, we can take advantage of Docker’s built-in load balancing and failover mechanisms to ensure that our applications are highly available and scalable.

Looking for more of a challenge? Let’s attempt the next phase of this project, which is to complete the same project but this time by creating a Docker Stack. We will also need to ensure no stacks run on the Manager (administrative) node.

Since I am still logged into all the nodes, we will begin by removing all the services at once because we are essentially starting all over.

Let’s run the following command on the manager node.

docker service rm $(docker service ls -q)

Before we go any further, let’s discuss what is a Docker Stack.

‘Stacks allow for multiple services, which are containers distributed across a swarm, to be deployed and grouped logically.’ (Docker Docs)

Essentially, what we are doing is building out a Docker Compose file, which is written in YAML.

Go ahead and make a new directory, cd into it, and open the editor of your choice.

If you refer to the link above, they provide an example of how your YAML build-out should look. You can also refer to my build-out here:
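As a rough sketch of such a Compose file (service names and the password value are placeholders; the placement constraints keep all tasks off the manager node, which we will verify shortly):

```yaml
version: "3.8"

services:
  redis:
    image: redis:latest
    deploy:
      replicas: 4
      placement:
        constraints:
          - node.role != manager

  apache:
    image: httpd:latest
    deploy:
      replicas: 10
      placement:
        constraints:
          - node.role != manager

  postgres:
    image: postgres:latest
    environment:
      # Placeholder password; the postgres image requires one to start
      POSTGRES_PASSWORD: changeme
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role != manager
```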

Make sure you save it with a .yml file extension; then we can run the following command to deploy the stack.

docker stack deploy --compose-file docker-compose.yml <name of stack>

Nice and smooth on the first try! 😎

We can also use the following command to list our stack.

docker stack ls  

And now to our final step! We need to make sure no stacks are running on the manager node.

Outside of the constraints in the YAML file (node.role != manager), we can run the following command:

docker stack ps <stack name>
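The NODE column is the one to check; with the placement constraint in effect, output along these lines would confirm nothing is scheduled on the manager (names and IDs are illustrative):

```shell
docker stack ps mystack
# ID     NAME                IMAGE             NODE      DESIRED STATE   CURRENT STATE
# ...    mystack_redis.1     redis:latest      worker1   Running         Running
# ...    mystack_apache.1    httpd:latest      worker2   Running         Running
# ...    mystack_postgres.1  postgres:latest   worker3   Running         Running
```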

Review all the node names and we can confirm that not one task is running on the manager node! It's optional, but if you want to tear everything down, feel free to refer to the Docker Swarm cheat sheet referenced at the beginning of this lab.

That completes our advanced portion of this week's project! I hope you enjoyed this blog and can take away something new as I have.

If you found this insightful, please give me a follow and connect with me on LinkedIn: https://www.linkedin.com/in/rgmartinez-cloud

PS: Now if you really want to challenge yourself, build a Kubernetes cluster. Modify the Docker Stack so it runs on this cluster with no errors. 🤯😨
