Tidying Up Your Software

Fijar Lazuardy · Published in Inside PPL B7 · Apr 27, 2020 · 5 min read

Have you ever built a fairly large piece of software with, say, an API service that handles all the business logic, a database, a Firebase account manager, and a UI service? How do you organize those services? There are many possible ways to do it, but one of the most popular approaches nowadays is container orchestration.

In just a few short years, containers have dramatically changed the way software organizations build, ship, and maintain applications.

Container platforms, led by the seemingly ubiquitous Docker, are now being used to package applications so that they can access a specific set of resources on a physical or virtual host's operating system. In microservice architectures, applications are further broken up into various discrete services, each packaged in a separate container. The benefit, especially for organizations that adhere to continuous integration and continuous delivery (CI/CD) practices, is that containers are scalable and ephemeral: instances of applications or services, hosted in containers, come and go as demand dictates.

But scalability is an operational challenge.

If you have ten containers and four applications, it’s not that difficult to manage the deployment and maintenance of your containers. If, on the other hand, you have 1,000 containers and 400 services, management gets much more complicated. When you’re operating at scale, container orchestration — automating the deployment, management, scaling, networking, and availability of your containers — becomes essential.

So, what is container orchestration?

Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. Software teams use container orchestration to control and automate many tasks:

  • Provisioning and deployment of containers
  • Redundancy and availability of containers
  • Scaling up or removing containers to spread application load evenly across host infrastructure
  • Movement of containers from one host to another if there is a shortage of resources in a host, or if a host dies
  • Allocation of resources between containers
  • Exposure of services running in a container to the outside world
  • Load balancing and service discovery between containers
  • Health monitoring of containers and hosts (see the healthcheck sketch after this list)
  • Configuration of an application in relation to the containers running it
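
Many of these tasks map directly onto fields in the orchestration tool's configuration file. As a small illustration of the health-monitoring item above, here is a hedged sketch of a healthcheck in a docker-compose.yml; the service name, image, and /health endpoint are made up for this example, and the probe assumes curl is available inside the image:

```yaml
version: "3.8"
services:
  api:
    image: myorg/api-service:1.0   # hypothetical application image
    healthcheck:
      # Probe the container's own HTTP endpoint; a non-zero exit
      # code from this command marks the container as unhealthy.
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s   # how often to run the probe
      timeout: 5s     # how long each probe may take
      retries: 3      # consecutive failures before "unhealthy"
```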

How does container orchestration work?

When you use a container orchestration tool, like Kubernetes or Docker Swarm, you typically describe the configuration of your application in a YAML or JSON file, depending on the orchestration tool. These configuration files (for example, docker-compose.yml) are where you tell the orchestration tool where to gather container images (for example, from Docker Hub), how to establish networking between containers, how to mount storage volumes, and where to store logs for that container. Typically, teams will branch and version control these configuration files so they can deploy the same applications across different development and testing environments before deploying them to production clusters.
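
To make that concrete, here is a simplified sketch of such a docker-compose.yml; the application image, network, and volume names are hypothetical, chosen only to illustrate the pieces mentioned above:

```yaml
version: "3.8"
services:
  web:
    image: nginx:1.17              # pulled from Docker Hub by default
    ports:
      - "80:80"                    # expose the web service externally
    networks:
      - backend                    # networking between containers
  api:
    image: myorg/api-service:1.0   # hypothetical application image
    volumes:
      - api-data:/var/lib/api      # mount a named storage volume
    networks:
      - backend
    logging:
      driver: json-file            # how and where container logs are stored
networks:
  backend:
volumes:
  api-data:
```

Checking a file like this into version control is what lets a team reproduce the same deployment across environments.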

Containers are deployed onto hosts, usually in replicated groups. When it’s time to deploy a new container into a cluster, the container orchestration tool schedules the deployment and looks for the most appropriate host to place the container based on predefined constraints (for example, CPU or memory availability). You can even place containers according to labels or metadata, or according to their proximity in relation to other hosts — all kinds of constraints can be used.
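
In Docker Swarm, for instance, such constraints live in a service's deploy section. A hedged sketch, assuming a hypothetical node label storage=ssd has been applied to some nodes (for example with docker node update --label-add):

```yaml
services:
  api:
    image: myorg/api-service:1.0       # hypothetical application image
    deploy:
      placement:
        constraints:
          - node.labels.storage == ssd # only place on hosts labeled with SSDs
      resources:
        reservations:
          cpus: "0.5"                  # scheduler needs this much free CPU on a host
          memory: 256M                 # ...and this much free memory
```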

Once the container is running on the host, the orchestration tool manages its lifecycle according to the specifications you laid out in its definition (for example, the service entry in your docker-compose.yml).

The beauty of container orchestration tools is that you can use them in any environment in which you can run containers. And containers are supported in just about any kind of environment these days, from traditional on-premise servers to public cloud instances running in Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Additionally, most container orchestration tools are built with Docker containers in mind.

Docker Swarm

Even though Docker has fully embraced Kubernetes as the container orchestration engine of choice, the company still offers Swarm, its own fully integrated container orchestration tool. Slightly less extensible and complex than Kubernetes, it’s a good choice for Docker enthusiasts who want an easier and faster path to container deployments. In fact, Docker bundles both Swarm and Kubernetes in its enterprise edition in hopes of making them complementary tools.

The main architecture components of Swarm include:

Swarm. Like a cluster in Kubernetes, a swarm is a set of nodes with at least one manager node and several worker nodes that can be virtual or physical machines.

Service. A service specifies the tasks that manager or worker nodes must perform on the swarm, as defined by a swarm administrator. A service defines which container images the swarm should use and which commands the swarm will run in each container. A service in this context is analogous to a microservice; for example, it's where you'd define configuration parameters for an nginx web server running in your swarm. You also define parameters for replicas in the service definition.
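
As a hedged sketch of that nginx example, a minimal service definition with replicas in a docker-compose.yml for swarm mode might look like this:

```yaml
version: "3.8"
services:
  web:
    image: nginx:1.17   # which container image the swarm should use
    ports:
      - "80:80"
    deploy:
      replicas: 3       # keep three identical tasks running in the swarm
```

Deployed with docker stack deploy -c docker-compose.yml web, the swarm would then work to keep three nginx tasks running at all times.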

Manager node. When you deploy an application into a swarm, the manager node provides several functions: it delivers work (in the form of tasks) to worker nodes, and it also manages the state of the swarm to which it belongs. Manager nodes can run the same services that worker nodes do, but you can also configure them to run only manager-related services.
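
One way to keep application work off the managers, sketched here in compose v3 syntax with a hypothetical image name, is a placement constraint on the node's role (alternatively, a manager can be drained with docker node update --availability drain):

```yaml
services:
  api:
    image: myorg/api-service:1.0   # hypothetical application image
    deploy:
      placement:
        constraints:
          - node.role == worker    # schedule this service on worker nodes only
```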

Worker nodes. These nodes run tasks distributed by the manager node in the swarm. Each worker node runs an agent that reports back to the manager node about the state of the tasks assigned to it, so the manager node can keep track of the services and tasks running in the swarm.

Task. Tasks are Docker containers that execute the commands you defined in the service. Manager nodes assign tasks to worker nodes, and once assigned, a task cannot be moved to another worker. If a task in a replicated service fails, the manager assigns a new instance of that task to another available node in the swarm.
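
How aggressively the manager replaces failed tasks can be tuned per service through its restart policy; a minimal sketch in compose v3 syntax:

```yaml
services:
  web:
    image: nginx:1.17
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure   # reschedule a task only when it fails
        delay: 5s               # wait before starting the replacement
        max_attempts: 3         # stop retrying after three failures
```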

An illustration of Docker Swarm orchestration

Unfortunately, in our project for the PPL course, we weren't able to implement this concept entirely. First, we are building a mobile app, which runs on a user's personal device (a smartphone) rather than as a Docker container. Second, the APIs are fully provided by our client, and we were told to simply use them without knowing how they are implemented. So we don't actually have our own take on this kind of software architecture, but that doesn't rule out implementing it in a future project. Cheers!
