Visualizing Efficiency: Mastering Docker Swarm - Step by Step :’)

Jesus Egui
10 min read · Nov 14, 2023


Containers in Action: Navigating Challenges with Docker Swarm Visualizer

Your company is new to using containers and has been having issues with containerized workloads failing for its global shipping application. The company is worried about additional data loss because it has no way to reschedule the containers that are failing. It has one week to update its infrastructure and leverage Docker Swarm so that data can be restored for the containers that are no longer in a healthy state. Because the team is not familiar with Docker Swarm, they will need a step-by-step guide to setting up a Swarm for their containerized workloads and assisting with container orchestration.

The solution should include the use of a global service since the company’s global shipping application is having issues. Your Swarm should be able to launch the containers using a global service. Remember, a service is simply a group of containers of the same image that facilitates the scaling of applications. Your global shipping application will require the use of at least three networked hosts, which should be AWS EC2 Ubuntu 20.04 instances.

Node Orchestration and Swarm Setup:

Configure Docker Swarm on AWS EC2 instances by installing Docker, verifying the installation, changing hostnames, validating security groups, adding SSH keys, running test containers, creating the Swarm with one manager and two worker nodes, deploying a global service for the shipping application, and confirming the Swarm’s healthy status. This step-by-step guide ensures a quick setup for container orchestration and addresses the failing workloads in the global shipping application within the one-week deadline.

Crucial Terminology:

  • Docker Swarm: Docker Swarm is a Docker orchestration platform that enables the management and scaling of distributed applications across multiple containers.
  • Global Service: A type of service in Docker Swarm that ensures exactly one task of the service is scheduled on each node in the cluster.
  • Node: A node is an individual machine that is part of a Docker Swarm cluster and runs containers.
  • Swarm Initialization: The process of initializing a Docker Swarm on a host, designating it as the Swarm manager.
  • Worker Node: A node in the Docker Swarm cluster that runs application tasks, managed by the manager node.
  • Manager Node: A node in the Docker Swarm cluster that coordinates and manages cluster operations, including service scheduling and worker node management.
  • Docker Swarm Visualizer: A tool that provides a real-time visual representation of the architecture of a Docker Swarm cluster, making it easy to understand services and nodes.
  • Docker Daemon: The Docker daemon is a background process that manages the building, running, and distribution of Docker containers.

This guide assumes minimal familiarity with Docker and provides a comprehensive step-by-step approach to configuring Docker Swarm for the company’s global shipping application.

Command Uses:

Step 1: Installing Docker on all hosts:

curl -fsSL https://get.docker.com -o get-docker.sh 
sh get-docker.sh
  • Downloads and installs Docker on the current host using the script provided by Docker.
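
To confirm the install worked and that the Docker daemon is in an active state (one of the guidelines later in this guide), a quick check might look like this:

docker --version
sudo systemctl status docker --no-pager
  • Prints the installed Docker version and confirms the daemon is reported as “active (running)”.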

Step 2: SSH Configuration File Setup:

  • You configured the SSH configuration file (~/.ssh/config) with information such as the user (User), hostname (Hostname), and the private key path (IdentityFile).

Step 3: Creating Additional Instances and Configuration:

  • You created two additional instances named “Node2” and “Node3.”
  • Configured SSH credentials and the hostname for “AdminNode1.”

Step 4: Initializing the Swarm:

sudo docker swarm init
  • Initializes a node as a manager in the Swarm. Provides a token that will be used to join worker nodes to the Swarm.
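
If the join token is ever lost, it can be printed again from the manager; a common way to retrieve it is:

sudo docker swarm join-token worker
  • Prints the full docker swarm join command (token plus manager address and port) that the worker nodes need to run.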

Step 5: Installing Docker on Worker Nodes (Node2 and Node3):

sudo apt update
sudo apt install -y docker.io
  • Updates the package list and then installs Docker on the worker nodes.

Step 6: Setting Up the Swarm Visualizer:

sudo git clone https://github.com/dockersamples/docker-swarm-visualizer
sudo apt update
sudo apt install -y python3-pip
sudo pip3 install docker-compose
  • Clones the Swarm visualizer repository, updates the package list, installs pip (Python package manager), and then installs docker-compose.

Step 7: Running the Swarm Visualizer:

sudo docker-compose up -d
  • Uses docker-compose to bring up the Swarm visualizer service in detached mode (-d).

Step 8: Creating the “SuperViz” Service for the Swarm Visualizer:

sudo docker service create \
--name=SuperViz \
--publish=7091:8080/tcp \
--constraint=node.role==manager \
--mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
dockersamples/visualizer
  • Creates a service named “SuperViz” using the Swarm visualizer image. Publishes container port 8080 on host port 7091, restricts the service to the manager node, and mounts the Docker socket so the container can access the Docker API.

FOUNDATIONAL:

As the junior engineer, you are responsible only for configuring Docker Swarm and providing your team with a step-by-step guide to setting up the Swarm. You should assume that there are new team members who may not have Docker installed on their devices. Your Swarm creation should include the following steps, carried out via your remote SSH terminal in VS Code.

Guidelines:

  • Install Docker on all hosts
  • Verify that Docker is in an active state
  • Change the hostname on each node to identify the master node and the two worker nodes (see the sketch after this list)
  • Validate that the worker nodes all share the same security group as the master node
  • Be sure to add your SSH key to each node
  • Run the necessary containers using the CLI
  • Create the Swarm using one master node and two worker nodes
  • Show your team members the status of the Docker nodes
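
The hostname change is not shown as a command anywhere else in this guide, so here is a minimal sketch, assuming the node names used later on (AdminNode1 for the manager, Node2 and Node3 for the workers):

sudo hostnamectl set-hostname AdminNode1   # run on the manager instance
sudo hostnamectl set-hostname Node2        # run on the first worker
sudo hostnamectl set-hostname Node3        # run on the second worker
  • Log out and back in (or open a new shell) so the new hostname shows up in your prompt.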

Let’s start with this project:

We have created an instance with SSH access and configured our security group so that we do not run into access problems during the project.

  • Installing docker on all hosts
curl -fsSL https://get.docker.com -o get-docker.sh 
sh get-docker.sh

Inside our user folder we open the .ssh directory and create a file named “config” (the name SSH looks for by default). In my case the path looks like this: “C:\Users\13jes\.ssh\config”, with “config” being the file we just created inside the .ssh directory.

To reach our instance from VS Code we need the path to the key pair that was used when the instance was created, in this case “Docker.pem”. Without it we will not be able to connect and work with Docker on the host.
We open our “config” file in VS Code. The User is “ubuntu” because the AMI I chose is an Ubuntu image, the HostName is the public IP address of the instance, and the Host can be any alias we want.
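
A minimal entry in that config file might look like this (the alias, IP address, and key path are illustrative and should match your own instance):

Host AdminNode1
    HostName <public IP of the instance>
    User ubuntu
    IdentityFile C:\Users\13jes\.ssh\Docker.pem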

Now I must create the other two instances and identify them as nodes, unlike the main one: “Node2” and “Node3” will be the workers, using the same key pair and the same security group.

I add the SSH credentials for “Node2” and “Node3,” and the hostname of the main instance must be AdminNode1.

We will add the private key material from our key pair (Docker.pem).

Now we will create a configuration file called “config” for the private IPs of our instances and add the new key path, reusing the same template as in the first config file.

We use the ssh command to enter our nodes, “Node2” and “Node3.” While we also have our admin terminal open, you can see that the nodes are in SSH sessions while our admin always stays in “bash.”
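
With those config entries in place, connecting is just a matter of using the Host aliases defined in the file (names assumed from the steps above):

ssh Node2
ssh Node3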

(Screenshot: the AdminNode1 terminal)

To create our swarm we must initialize it on our manager node, take the token that it generates, and use it on the other nodes we created so they can join with the command:

sudo docker swarm join --token <our token> <manager IP>:2377
(Screenshots: the Swarm initialization on the manager node and the join command running on Node2 and Node3)

Then I ran these two commands on both worker nodes (Node2 and Node3) to update the package list and make sure Docker is installed:

sudo apt update
sudo apt install -y docker.io
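
At this point it is worth confirming from the manager node that all three nodes have joined and are healthy; a quick check (your IDs and hostnames will differ) is:

sudo docker node ls
  • Lists every node in the Swarm with its status, availability, and manager role, which is also how you show your team the status of the Docker nodes.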

We are almost finished

For this step, I need to grab a repository from GitHub that contains an image. A visualizer is used to graphically display information about containers, such as their state, resources used, and the general architecture of an application.

sudo git clone https://github.com/dockersamples/docker-swarm-visualizer

In this step, we need to update the package list, install pip (Python package manager), and then install docker-compose.

sudo apt update
sudo apt install -y python3-pip
sudo pip3 install docker-compose
  • Run docker-compose up -d from the cloned repository directory to launch the Swarm Visualizer service in detached mode (-d)
sudo docker-compose up -d
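
If you want to double-check that the visualizer container actually came up, a quick look from the same directory might be:

sudo docker-compose ps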

This command establishes a service named “SuperViz,” using the “dockersamples/visualizer” image. This service is restricted to operate exclusively on nodes with the “manager” role in the Swarm cluster. Additionally, it publishes port 8080 from the container to port 7091/TCP on the host and mounts the Docker socket so the container can read the Docker state on the host.

The published ports must also be allowed in the security group; otherwise, when we browse to our IP address, the page will not load and we simply will not see anything, because we will not have access.

We will create several visualizers to be able to see how they look on our page.

sudo docker service create \
--name=SuperViz \
--publish=7091:8080/tcp \
--constraint=node.role==manager \
--mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
dockersamples/visualizer
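
Before opening the browser, it does not hurt to confirm that the service was scheduled on the manager node; one way to check is:

sudo docker service ls
sudo docker service ps SuperViz
  • Shows the SuperViz service, its desired and running tasks, and the node each task landed on.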

We go to our instance, copy its public IP address, and paste it into our browser as “http://<IP ADDRESS>:<PORT>/”, using the published port of whichever visualizer we want (for example, 7091 for the SuperViz service).

Perfect, with this we conclude our Docker Swarm and visualizers project successfully.

Summary and Conclusions:

The project has been completed, achieving an effective configuration of Docker Swarm and container visualization through Swarm Visualizer. The infrastructure is ready to address container failure issues and provides efficient orchestration for the company’s global shipping application.

Text me if you want to do this project or if you have any problems with it, I’ll be happy to help you.

Connect with Me on Social Media:

Github: https://github.com/JesusEgui

LinkedIn: www.linkedin.com/in/jesusegui

ADVANCED: Coming soon

As the lead engineer, you’ve reviewed the Swarm configuration and you are now ready to deploy your services on the Swarm. Consider the following steps as you deploy the service:

  • Using only the CLI, SSH into the master node host and run the command to create your service using an Official image of your choice out of the following company-preferred images:
  1. nginx
  2. apache
  3. redis
  4. python/alpine
  5. Ubuntu
  • Initially only launch 1 replica.
  • Run the commands to verify the service has been created and is running
  • Since it is a global application that needs to scale, use the CLI to run the necessary commands to scale the service to 3 replicas.
  • Verify that the service has scaled (see the sketch after this list)
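
Here is a minimal sketch of what those steps could look like from the master node, assuming nginx as the chosen image and “shipping-web” as an illustrative service name:

sudo docker service create --name shipping-web --replicas 1 --publish 8080:80 nginx
sudo docker service ls
sudo docker service ps shipping-web
sudo docker service scale shipping-web=3
sudo docker service ps shipping-web
  • Creates the service with a single replica, verifies it is running, scales it to 3 replicas, and verifies the scaling.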

COMPLEX: Coming soon

As the senior engineer, you are looking to make this deployment as seamless as possible, and you’d like to use a more complex approach. Consider the steps for the foundational and advanced sections. Note: You will have to do some research to ensure you have all files required for this deployment.

Deploy your application via a Stack to your Swarm
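
No stack file ships with this guide, but as a rough sketch of the approach (the file contents, service name, and ports are assumptions, not part of the assignment), a minimal stack deployment from the manager node could look like this:

cat > shipping-stack.yml << 'EOF'
version: "3.8"
services:
  web:
    image: nginx
    ports:
      - "8081:80"
    deploy:
      mode: global   # one task per node, matching the global-service requirement
EOF
sudo docker stack deploy -c shipping-stack.yml shipping
sudo docker stack services shipping
  • Writes a minimal Compose file, deploys it as a stack named “shipping,” and lists the services the stack created.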

Jesus Egui

DevOps Engineer ♾️ | AWS Certified Cloud ☁️| Linux | Python| Docker | Kubernetes | Terraform |Projects+|Level Up In Tech Student