Orchestrating a Swarm: Setting up and getting started with Docker Swarm: Part II

Kristoffer Afflerbaugh · Published in Nerd For Tech · 12 min read · Jul 20, 2023

In Part I of this article we went through the basics of setting up a Docker Swarm. If you haven’t read that article yet, I encourage you to do so, as we will be picking up where we left off. Here is the link.

Now that our swarm is set up, let’s deploy some services to it.

Deploy a Service to Swarm

Step 1.)

If you’ve been following along from the last article, you should be connected to your Docker manager node via SSH, and from there connected to your two worker nodes. If you have disconnected from them for some reason, SSH into those instances now. Consider using the boto3 script I placed at the end of the last article to start your instances and update the config file.

Step 2.)

Once you are connected to all 3 instances, you should have three terminal sessions open in VS Code: one for the manager and one for each worker.

You can run a quick sudo docker node ls in your manager node to verify that your swarm is still set up.
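If it helps, here’s roughly what that check looks like; the hostnames and IDs will differ on your setup:

# On the manager node: list every node in the swarm
sudo docker node ls

# Expect one row per node, with columns ID, HOSTNAME, STATUS,
# AVAILABILITY, and MANAGER STATUS (the manager shows "Leader")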

Once you’ve verified that the swarm is still set up, let’s move on to launching a service.

Step 3.)

We’re going to be launching a service using an nginx container image. If you needed to find a specific image, you could search for it on Docker Hub. Since I already know the image name, I’ll create the service straight from the command line. The following command creates a service named “nginx” from the “nginx” image.

sudo docker service create --name nginx nginx

The first nginx in the command names the service “nginx”. The last nginx in the command is the image to use. It can be confusing, but remember that the image always comes last.

If the output ends with a line like verify: Service converged, you know it was successful. To verify, use sudo docker service ps nginx and sudo docker service ls.

You can see that we get slightly different information back from these two commands. The ps command, which is Docker’s take on the standard Linux command for listing processes, lists the tasks of a given service. The ls command lists services. Put together, the two commands show that a service called “nginx” is running, and that it has 1 task, running on worker1. So we just ran a command on our Docker manager node, and now a container is up and running on a worker node. How cool is that!?
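As a quick sketch, here are the two checks side by side; the exact output depends on your node names:

# List the tasks of the "nginx" service: which node each task runs on and its state
sudo docker service ps nginx

# List all services in the swarm: name, mode, replica count, and image
sudo docker service ls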

Step 4.)

I was going to verify that I could reach the nginx web page, but I just realized that I forgot to publish a port. No problem, we can update the service to publish one. We’ll use:

sudo docker service update --publish-add 80:80 nginx

Any update to a service is done with the command docker service update, and in fact you can get a complete list of all of the available update options using:

sudo docker service update --help

You can now verify that the nginx server is running by going to the public IP address of your host, followed by :80. The public IP of your manager node can be retrieved by running curl -4 icanhazip.com in that terminal. Then paste it into your browser and add “:80” at the end.

So I’d enter “3.95.168.55:80” into my browser.
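If you’d rather stay in the terminal, you can run the same check with curl; the IP below is a placeholder, so substitute your own manager node’s public IP:

# Get the public IP of the node you're on
curl -4 icanhazip.com

# Request the nginx welcome page through the published port
curl http://<your-public-ip>:80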

And it looks like nginx is working. Here’s the really cool part. Remember when we ran the service ps command and saw that the task was actually running on one of our worker nodes? Here’s the thing: it doesn’t matter which node’s IP address you use; it could be any node in our swarm. Thanks to the swarm’s routing mesh, they will all route to this page, even though the container is only running on one node (worker1 for me).

Furthermore, you can verify that there aren’t actually any containers running on the manager node by running sudo docker container ls. You can also reach the site from the IP of the other worker node if you want. Go ahead and try it; you’ll still be able to reach the nginx page.
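Here’s a rough sketch of those checks, assuming the same manager and worker terminals from earlier:

# On the manager node: no containers are listed here
sudo docker container ls

# On worker1 (where the task landed): the nginx container shows up
sudo docker container ls

# From anywhere: the page is reachable through any node's public IP,
# for example the other worker's, thanks to the swarm routing mesh
curl http://<other-worker-public-ip>:80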

Step 5.)

Now that we’ve verified that the nginx page can be reached, we’re going to scale our service up, so that there are 3 replicas, meaning 3 tasks. Since we have 3 nodes, they should each be distributed to a different node. We’ll use the same docker service update command to do so, with a different option specified.

sudo docker service update --replicas 3 nginx

Now if we run our ps and ls commands you can see that we have 3 replicas, and they are each running on a different node. Remember for the ps command you need to include the name of the service at the end of the command. The ls command lists services, so no need to include the name of a service.
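As an aside, docker service scale is an equivalent shorthand for this particular update, and you can chain the verification commands right after it:

# Scale the nginx service to 3 replicas (same effect as --replicas 3)
sudo docker service scale nginx=3

# Confirm the new replica count and where each task ended up
sudo docker service ls
sudo docker service ps nginx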

You can also see from the ps output that there must have been some sort of error with one of the tasks at some point, because it says it was shut down; but you can also see that a new task was fired up on a different node around the same time. This is the beauty of Docker Swarm: the orchestrator manages these situations for you. If something happens to one of the tasks, a new one is launched to replace it.
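If you want to see that self-healing for yourself, here’s an optional experiment: force-remove one of the nginx containers on a worker node and watch the swarm schedule a replacement. The name filter below assumes the default naming swarm gives to service task containers (nginx.1.<task-id> and so on).

# On a worker node: remove the running nginx task's container out from under the swarm
sudo docker container rm -f $(sudo docker container ls -q --filter name=nginx)

# Back on the manager: the old task shows as failed/shut down and a new one takes its place
sudo docker service ps nginx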

Deploy a Stack to Swarm

So now we’ve verified that we can create services on our swarm; let’s move on to something a little more advanced. Now we’ll be creating a stack, which deploys multiple services that work together.

Step 1.)

The file we use to deploy a stack to Docker Swarm is very similar to the file we would use for Docker Compose (I dive deeper into Docker Compose in my previous article). There are a few key differences, though, which I will point out along the way. First, create a new directory; I called mine “Wk18_Stack”, but you can call yours whatever you want. Use the command mkdir <directory_name> to create a new directory. We will create our compose file in that directory.
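For example, on the manager node (use whatever directory name you like):

# Create a directory for the stack files and move into it
mkdir Wk18_Stack
cd Wk18_Stack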

Step 2.)

Now that you have a separate directory, let’s create a file called compose.yaml. Below is what goes into that file, built up one section at a time:

Let’s talk through this file a little bit. The main part of this file is services. Most of what we include in this file will be a service, and thus indented under that section.

nginx

The first service I’ve included is nginx, which I’ve named “web_server”. We then specify the image to use, which is “nginx:1.24-alpine”. Alpine images are much lighter than the regular images of the same name. Next we specify the host and container ports under ports. Then we specify the networks, of which there is only one, called “frontend”. After that we come to the deploy section of the service. This section lets us define exactly how the individual tasks will be deployed. You can see that I specified the number of replicas to be 3; since we have 3 swarm nodes, by default each task will be deployed to a separate node. And then there is a restart_policy that tells Docker to try to restart the container if it fails.

services:
  web_server: # Web server service
    image: nginx:1.24-alpine # Use Nginx version 1.24 on Alpine Linux
    ports:
      - "80:80" # Map host port 80 to container port 80
    networks:
      - frontend # Connect to the frontend network
    deploy:
      replicas: 3 # Deploy 3 replicas of the service
      restart_policy:
        condition: on-failure # Restart the container on failure

Redis

Next we have the “redis” service. Redis is an in-memory key-value data store that is commonly used as a cache. You can configure nginx to act as a reverse proxy with Redis caching objects, but for the sake of this project we are simply practicing our Docker skills and will not be diving into that kind of configuration; for now, we just want to know that we can deploy the stack. If you want to know more about proxy and reverse proxy configurations, here is a link to a good article explaining how they work and how they differ from each other.

You’ll notice that the entry for Redis is very similar to nginx in our file; the only differences are the image and the ports.

  redis: # Redis service
    image: redis:7-alpine # Use Redis version 7 on Alpine Linux
    ports:
      - "6379:6379" # Map host port 6379 to container port 6379
    networks:
      - frontend # Connect to the frontend network
    deploy:
      replicas: 3 # Deploy 3 replicas of the service
      restart_policy:
        condition: on-failure # Restart the container on failure

By the way, I’m getting these port configurations from the image’s README on Docker Hub. If you click on a link to one of the supported tags, you will be taken to the image’s Dockerfile on GitHub.

Here is the Dockerfile for the Redis image I am using. Near the bottom you will see an EXPOSE instruction that lists the ports the image exposes.

MariaDB

Our last service is our database, which uses a MariaDB image. There are a few different aspects to this service. For one, notice that its network is “backend” instead of “frontend”. Also, we now have a volumes entry. This maps a persistent named volume on our host, db-volume (to the left of the colon), to the path inside the container where MariaDB stores its data, /var/lib/mysql (to the right of the colon). My last article dives deep into persistent volumes; if you want further reading you can check out that article here.

In the deploy section of the database we have placement constraints. This is a way of making sure our containers get deployed to specific nodes. You can constrain on all kinds of things, like whether the node is a manager or a worker, or you can add labels to nodes and make sure services are only deployed to nodes with matching labels. In our case we want the service deployed only to a manager node.
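To give a feel for the label-based variant, here’s a hypothetical example that is not part of our stack; the tier label and the cache service are made up purely for illustration:

# On the manager: attach an arbitrary label to a node
sudo docker node update --label-add tier=cache worker1

# Constrain a new service to nodes carrying that label
sudo docker service create --name cache --constraint 'node.labels.tier == cache' redis:7-alpine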

And the final section, environment, passes environment variables to the container. Database images need these at startup to create the initial user and set the passwords. You can find the environment variables to use on the Docker Hub page for MariaDB. If you scroll all the way down you will see “Environment Variables”. Then you just need to assign the variables a value.

  database: # Database service
    image: mariadb:10-jammy # Use MariaDB version 10-jammy
    networks:
      - backend # Connect to the backend network
    volumes:
      - db-volume:/var/lib/mysql # Mount the "db-volume" volume to /var/lib/mysql in the container
    deploy:
      placement:
        constraints: [node.role == manager] # Deploy the service on a manager node
    environment: # Set environment variables for the database container
      - MARIADB_USER=kris # Set the MariaDB username to "kris"
      - MARIADB_PASSWORD=mypassword # Set the MariaDB password to "mypassword"
      - MARIADB_ROOT_PASSWORD=rootpassword # Set the MariaDB root password to "rootpassword"

Networks & Volumes

Ok, we’re almost done with the file. Next we simply need to declare the networks that were referenced in the services section above, and also declare the volume used by MariaDB. Note that what you put under volumes is the named volume on your host, not the path inside the container. If multiple services required a database, you could actually use the same host volume for more than one service.

networks:
  frontend: # Frontend network
  backend: # Backend network

volumes:
  db-volume: # Docker volume for the database service

Step 3.)

Once your file is complete, save it and return to the terminal. We’ll now be running some commands to deploy the stack to our swarm. Use the following command:

sudo docker stack deploy -c compose.yaml wk18_stack

The -c option specifies the compose file to use, and the final argument is the name we’re giving our stack.

You can see that the output in the terminal tells us which services have been created.

We can now use sudo docker stack ls to show us the stack that was created. This doesn’t give us much information, just the name of the stack and how many services it contains.

To get more information you can use sudo docker stack ps wk18_stack.

Now we can see our individual tasks, which nodes they are running on, as well as task ID, image, and current state. Our database is running on the manager node as specified in the compose file, and you can also see that the nginx and redis services each have 3 tasks (1 on each node). The deploy options we specified in our compose file worked!

You can also run a sudo docker node ps on any of your nodes to see the tasks running on that node only.

Note that the stack ps and node ps commands don’t show any ports; published ports belong to the service rather than to individual tasks, so if you run sudo docker service ls you will indeed see the port numbers.
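For the record, both of the following show the published ports; docker stack services is just the stack-scoped version of docker service ls:

# Every service in the swarm, with its published ports
sudo docker service ls

# Only the services belonging to our stack, also with ports
sudo docker stack services wk18_stack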

And if you go to your public IP address on port 80, you should be able to see the default nginx page. (Make sure the security group of your server allows inbound access on port 80.)

None of the swarm-level commands above will work on the worker nodes (they have to be run on a manager), but you can run sudo docker container ls on the worker nodes to see the containers running on that node.

Step 4.)

Ok, that’s it for now. Let’s clean everything up. Back in your manager node run sudo docker stack rm wk18_stack.

This will remove all of the services and networks, but not the volume we created for our database or any of the images used for the stack. To remove the volume run sudo docker volume rm wk18_stack_db-volume.

To remove the images, you can run sudo docker images to see a list of them (notice how much lighter the Alpine images are; remember, we used Alpine images for redis and nginx).

Then run sudo docker image rm <image ID or repository name>. You can pass multiple at once, and if you use the ID you only need the first few characters, as long as they uniquely identify the image.
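For reference, here is the whole teardown in one place; the image names are the ones we pulled in this article, and your image IDs will differ:

# Remove the stack (its services and networks)
sudo docker stack rm wk18_stack

# Remove the named volume the database used
sudo docker volume rm wk18_stack_db-volume

# Remove the images that were pulled for the stack
sudo docker image rm nginx:1.24-alpine redis:7-alpine mariadb:10-jammy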

Ok, and that’s it for now. Make sure to stop your EC2 instances or you will rack up compute hours and possibly be charged. If you found this article helpful please follow me for more projects. Thanks for reading!

Connect with me!

LinkedIn

GitHub
