Utilize Docker Swarm via CLI!

Leveraging AWS resources to deploy stacks

Dorian Ferguson
6 min read · Apr 1, 2024

Integral Concepts

  • Docker: An open-source platform that enables developers to package, distribute, and run applications within isolated, lightweight containers, so the same application runs consistently across different environments.
  • Docker Container: Self-contained units that package software and its dependencies, enabling application deployment across diverse environments.
  • Docker Image: Standalone, executable packages that contain the essentials to run software. This includes the code, runtime, libraries, and dependencies. Images serve as the foundation for containers.
  • Docker Compose: A tool for defining and running multi-container applications. It uses YAML configuration files to specify the services, networks, and volumes required for your application.
  • Docker Stack: A collection of services defined in a single Docker Compose file that work together to form an application. It allows you to deploy and manage multi-container Docker applications as a single unit.
  • Replica: A copy of a container that runs concurrently with the original container. Each replica serves the same purpose as the original container, enabling scalability, fault tolerance, and efficient resource utilization in distributed application environments.
  • YAML: A human-readable data serialization language commonly used for configuration files. It uses indentation to represent data structures and supports key-value pairs, lists, and nested structures (see the short snippet after this list).
  • IDE: An Integrated Development Environment (IDE) is a software tool that combines code editing, debugging, and project management features to facilitate software development.
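
To make the YAML bullet concrete, here's a tiny, hypothetical snippet showing key-value pairs, a list, and nesting:

app:
  name: demo          # a key-value pair nested under "app"
  ports:              # a list
    - 80
    - 443
  healthcheck:
    enabled: true     # nesting is expressed purely by indentation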

Prerequisites

  • An IDE
    - I use VS code
  • AWS Account
    - I’ll be using EC2 instances for this demo
  • Internet connection
    - Probably have that covered since you’re reading this article
  • Terminal access

Launch The Instances

A Docker Swarm is divided into two classes of nodes: managers and workers.

  1. Manager Node: This is a node responsible for managing the swarm. It schedules services, maintains the desired state of the swarm, and orchestrates tasks across worker nodes. Manager nodes also handle cluster management tasks such as joining new nodes to the swarm, managing secrets, and distributing updates.
  2. Worker Node: Worker nodes are responsible for running containers that make up the services deployed on the swarm. They execute tasks assigned to them by the manager nodes.

Three nodes are needed for this demo: 1 manager and 2 workers. These nodes (instances) will be identical in configuration. Ensure the following criteria are met when launching your instances:

  1. The instances share a key pair
  2. The instances share a security group which has:
    - an inbound rule allowing all traffic from YOUR IP
    - an inbound rule allowing traffic from the VPC in which these instances live (most likely your default VPC). If you do not know the VPC CIDR, the steps are listed below; in the meantime, simply have the security group allow traffic from your IP.
  3. Paste the provided gists into the user data field (a sketch of such a script is shown below).
The sudo chmod command will have to be run manually every subsequent time the node is accessed via SSH.
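
The gists themselves aren't reproduced here, but a user data script for this kind of setup might look roughly like the sketch below (the package choice and the chmod target are assumptions, inferred from the note above):

#!/bin/bash
# Hypothetical sketch of a user data script; adjust to match the actual gists.
# Installs Docker on Ubuntu and relaxes permissions on the Docker socket
# so the ubuntu user can run docker without sudo.
apt-get update -y
apt-get install -y docker.io
systemctl enable --now docker
chmod 666 /var/run/docker.sock   # the chmod that must be re-run on later SSH sessions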

Launch all three instances, naming them in an easily discernible manner. My names are manager, node1, and node2.

Steps to Find VPC CIDR

Open any of the three instances in the EC2 console.

Click the link to the VPC.

Open the VPC.

The IPv4 CIDR is what you'll use for the security group's inbound rule.
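
If you have the AWS CLI configured, you can also look up the CIDR without clicking through the console (the VPC ID below is a placeholder):

aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 --query 'Vpcs[0].CidrBlock' --output text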

Edit Your Config File

Open VS Code.

Click the double-arrow icon in the bottom left-hand corner.

At the top, choose "Connect to Host", then "Configure SSH Hosts".

Then choose the file path that resembles /Users/[username]/.ssh/config.

Fill out the file with the following information for each instance:

Host [desired host name]
  # The public IPv4 changes whenever the instance is stopped and started
  HostName [instance public IPv4]
  User ubuntu
  IdentityFile [absolute path to key-pair.pem file]

Here’s my config file, for reference

Ignore the swarm-node1-manager comment above the manager host.
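
If you don't have a reference handy, a filled-in entry might look roughly like this (the host alias matches my instance names; the IP address and key-file name are placeholders):

Host manager
  HostName 54.123.45.67
  User ubuntu
  IdentityFile /Users/[username]/.ssh/swarm-key.pem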

Save this file.

SSH Into the Nodes

Once the config file is saved, open VS Code. Use the + icon to split your terminal space into three. Once split, SSH into a different node in each terminal. Make sure to split the terminal before initiating the SSH session; if you SSH before splitting, you'll end up with three terminals for the same instance.

Notice my terminal is zsh by default. I chose the zsh option when splitting the terminal as well.

Once inside the nodes, change the hostnames with

sudo hostnamectl set-hostname [desired name]

# Example
sudo hostnamectl set-hostname manager

Exit and re-enter the nodes to see the name change.

Swarm Time

Run the following in each node to ensure they are not already part of a swarm:

docker swarm leave --force

In the Manager node, run the following to initiate a swarm:

docker swarm init --advertise-addr [private ipv4 of the manager instance]

The output of this command provides the necessary command to add nodes to the swarm. Copy, paste, and run this command in your worker node’s terminals.
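
For reference, the printed command looks roughly like this (the token is shortened to a placeholder):

docker swarm join --token SWMTKN-1-[token] [manager private ipv4]:2377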

Should you accidentally clear this command or realize you need to add another node, run the following to retrieve the join token:

docker swarm join-token --quiet worker

This prints only the token, not a complete command; you will need to supply the manager's private IPv4 (and port 2377) yourself. Running docker swarm join-token worker without --quiet prints the full join command instead.

Check the status of the nodes by running

docker node ls

Single Replica Services

To create a couple of single-replica services, let's use a docker-compose.yml. Create a directory in the manager node and cd into it.

mkdir [directory-name] && cd [directory-name]

# Example
mkdir orca && cd orca

Create the docker-compose.yml

vi docker-compose.yml
# This creates the file and opens it in the vi editor.
# A sample compose file is shown below.
# To save and exit, press Esc, then type :wq and press Enter.
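
The original compose file isn't reproduced here, so below is a minimal sketch of one with two single-replica services and an overlay network (the redis service, the network name, and the port mapping are assumptions; adapt as needed):

version: "3.8"

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    networks:
      - app-net
    deploy:
      replicas: 1
  redis:
    image: redis:latest
    networks:
      - app-net
    deploy:
      replicas: 1

networks:
  app-net:
    driver: overlay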

Once the docker-compose.yml is created, run the following to create a stack.

docker stack deploy -c docker-compose.yml [stack-name]

# Example
docker stack deploy -c docker-compose.yml pod

Time for some rapid-fire checks!

Status of services:

docker stack services [stack name]

# Example
docker stack services pod

Pass the first 3 characters of a service’s ID to the following command:

docker service ps [ID]

This shows the service task's current state, desired state, image, and the node it's running on.

In the photo above, my nginx service is running on worker1. So it should be there when I run the following command in worker1's terminal.

docker ps -a
My other service happens to be running in my manager node.

Let's make sure the network was created and attached. Run the following in all terminals with services running in them:

docker network ls
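
Resources created by docker stack deploy are prefixed with the stack name, so if you used the sketch above you should see something like pod_app-net listed alongside Swarm's built-in ingress network. You can also narrow the output to overlay networks with:

docker network ls --filter driver=overlay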

As it stands, only two of the three nodes are in use right now. Let's add some replicas so no node is left out. In the manager node, run the following:

docker service scale [service name]=[desired amount]

# Example
docker service scale pod_nginx=3

Run the previous checks.
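
If you'd rather keep the replica count in version control instead of scaling imperatively, set it in the compose file's deploy section (a sketch, reusing the hypothetical nginx service from earlier):

services:
  nginx:
    deploy:
      replicas: 3

Then redeploy the stack to apply the change:

docker stack deploy -c docker-compose.yml pod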

Clean Up

To shut down a Docker stack and remove the services and networks it created, run the following in the manager terminal (named volumes, if any, are not removed automatically):

docker stack rm [stack name]

# Example
docker stack rm pod
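
If you're completely done with the cluster, you can also dissolve the swarm itself before terminating your EC2 instances. A quick sketch (run the worker commands first):

# On each worker node
docker swarm leave

# On the manager node (--force is required on the last manager)
docker swarm leave --force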

CONGRATULATIONS!!

You now know how to initialize a Docker Swarm and deploy a stack to it!

Find Me

DockerHub
GitHub
LinkedIn
