Tom Scott
5 min read · Apr 20, 2018

A developer's standpoint on Docker Swarm and Kubernetes

Original article here

As a software developer, I have recently been diving into two orchestration platforms, Docker Swarm and Kubernetes, to facilitate the deployment and maintenance of API-based systems (following a microservices architecture). The idea behind testing the two solutions was to find a way to avoid maintaining separate configuration for local development and for testing / production deployment. For this to be achievable, however, the process needs to be simple for a developer, as we generally don't specialize in system administration or broad DevOps practices.

This article doesn't aim to list the pros and cons of each orchestration system, but rather to share my experience of locally developing a microservice-based system and deploying it as quickly, easily and reliably as possible. My end goal was to make the fewest possible modifications to my project, since each modification increases the risk of the local and remote environments diverging (leading to unforeseen differences that can result in downtime and even bugs).

Having previously used a PaaS solution (Heroku), I was looking for something that gave me more control over things such as rolling updates, secret / config management, load balancing, scaling and easy CI/CD support from code repository tools such as GitLab; hence the decision to dive into these two technologies. Both orchestration platforms offer the features mentioned above. I decided to use AWS as the cloud platform for my investigation.

Let's start by describing the local microservice architecture. I had numerous services consisting of message brokers, APIs and databases. The project was built from Docker images which were all stored in a registry. These could all be launched locally via Docker Compose (with volume mapping), which allowed me to run "docker compose up" and continue developing the services in real time while editing the code on the host machine.
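To give an idea of the setup, here is a minimal sketch of such a Compose file; the service names, images, ports and paths are hypothetical and not the article's actual stack:

```yaml
# Hypothetical docker-compose.yml sketching the local setup described above.
version: "3.3"
services:
  api:
    image: registry.example.com/myteam/api:latest   # image pulled from a (private) registry
    ports:
      - "8080:8080"
    volumes:
      - ./api/src:/app/src        # volume mapping: edit code on the host, run it in the container
    depends_on:
      - rabbitmq
      - postgres
  rabbitmq:
    image: rabbitmq:3-management  # message broker
  postgres:
    image: postgres:10            # database
    environment:
      POSTGRES_PASSWORD: example
```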

I decided to dive into Kubernetes first. I created an AWS instance and installed a Kubernetes cluster with kops. Doing this took some time, around a day to get things configured (longer than expected), but the cluster was functional.
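A typical kops bootstrap looks roughly like the sketch below; the bucket name, cluster name, zone and instance sizes are placeholders, not the values used in the article:

```sh
# Minimal sketch of provisioning a Kubernetes cluster on AWS with kops.
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # S3 bucket where kops keeps cluster state
export KOPS_CLUSTER_NAME=dev.k8s.local              # ".k8s.local" suffix uses gossip DNS, no Route 53 zone needed

kops create cluster \
  --zones=eu-west-1a \
  --node-count=3 \
  --node-size=t2.medium \
  --master-size=t2.medium

kops update cluster --yes   # actually provision the AWS resources
kops validate cluster       # repeat until the masters and nodes report ready
```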

My next task was to find a way to convert my microservice stack into Kubernetes definitions that could be deployed on my cluster. At this point I stumbled across Kompose, an open-source tool that translates a Docker Compose file into numerous YAML files ready to be injected into my Kubernetes cluster, creating Deployments and Services. Was there any other way to do this? Not at the time, short of a manual translation, so I decided to continue down this conversion route.
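The conversion itself is a one-liner; the output directory name here is just illustrative:

```sh
# Convert the Compose file into Kubernetes manifests with Kompose.
kompose convert -f docker-compose.yml -o k8s/
# Writes one *-deployment.yaml (and, where ports are exposed, one *-service.yaml) per Compose service.
```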

There was a learning curve in understanding the YAML files, but my main worry was that 30 lines of Docker Compose translated into over 400 lines of Kubernetes YAML definitions, with additional attributes on certain Deployment / Service definitions that I would not have expected and simply did not understand at first glance. Once I deployed these YAML files, the services did seem to start up.
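Deploying and checking the generated manifests is then standard kubectl usage (the k8s/ directory is the hypothetical output of the Kompose step above):

```sh
kubectl apply -f k8s/                # create the Deployments and Services on the cluster
kubectl get deployments,services     # confirm the objects exist
kubectl get pods --watch             # watch the Pods start up
```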

I then decided to deploy the same stack with Docker Swarm. My first choice was whether to use Docker's AWS CloudFormation template or to create the swarm manually by firing up multiple instances. I decided to install the swarm manually, as the Docker CloudFormation template runs on a minimal version of Linux with no control of the host machine. Why did I decide to do this? Because I found out that SSH'ing onto one of those instances connects you directly to the Docker Engine, which isn't what I wanted.

I created four AWS instances based on a pre-made Amazon Machine Image (AMI) that I had built manually from a base Ubuntu image. This image had the latest version of Docker Engine installed at the time, allowing me to initialise a Docker swarm!
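The article doesn't say exactly how the engine was baked into the AMI; one quick way on a base Ubuntu image is Docker's convenience script:

```sh
# Install the latest Docker Engine on the base Ubuntu instance before creating the AMI.
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker ubuntu   # let the default "ubuntu" user run docker without sudo
docker --version                 # confirm the engine is present before snapshotting the AMI
```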

Initialising the swarm was easy: I SSH'd onto the instance I had picked to be a manager node and ran the "docker swarm init" command. I could then easily SSH into the other instances and join them up as a combination of workers and managers, using the "docker swarm join" command provided by the first manager when the swarm was initialised.
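The bootstrap looks like this; the IP address and token are placeholders:

```sh
# On the first manager node:
docker swarm init --advertise-addr 10.0.0.10
# The command above prints a ready-made join command for workers. On each worker node:
docker swarm join --token SWMTKN-1-<worker-token> 10.0.0.10:2377
# To join further managers, print the manager token on the first manager:
docker swarm join-token manager
```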

Once this was set up, I was simply able to take my microservice docker-compose definition and run "docker stack deploy -c docker-compose.yml --with-registry-auth myteststack" to orchestrate everything in my Compose file across the swarm, pulling Docker images from public and private repositories. Great! Portainer is also a great tool that lets you visually see what is currently happening across the swarm on all of your nodes, which was perfect for this use case.
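In practice that looks like the following; the service name below is hypothetical, and the Portainer snippet is a common minimal swarm setup rather than something described in the article:

```sh
# Deploy the stack from a manager node and check on it.
docker stack deploy -c docker-compose.yml --with-registry-auth myteststack
docker stack services myteststack    # list the stack's services and replica counts
docker service ps myteststack_api    # "myteststack_api" is a hypothetical service name

# Portainer can itself be run as a swarm service pinned to a manager:
docker service create \
  --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer
```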

My tests of the two platforms showed me that, with Swarm, I could create a working online environment for my microservices using nothing but the Docker Compose config file. The best part was that I already used Docker Compose locally, so no additional configuration, environment change or conversion step was needed.

Swarm worked perfectly for me in this scenario, which is why I would choose it over Kubernetes (for now). Why? Because I can simply use the same file for local development and deployment! As soon as Kubernetes provides an easier method of cluster creation and deployment for developers, without converting my Docker Compose file, I will definitely retry this scenario and re-evaluate.

I hope this article has given you some inspiration from my experience with system orchestration. My name is Tom Scott and I work for BCG Platinion in Paris.
