Orchestration in a Nutshell

Pedro Teixeira, Telecom Engineer and Tech Lead at Ubiwhere
Published in Ubiwhere · Aug 23, 2022 · 7 min read

Computer systems are inherently complex. We have come a long way from solving simple mathematical problems: computers are now expected to solve highly complex problems requiring multiple applications deployed on heterogeneous machines worldwide.

This means that solving many tasks today requires coordinating applications and their configurations, the computational units they run on, and the networking that brings all these computers together.

In addition, many new applications power use cases where availability and overall performance are critical for the user. Nobody wants to wait several seconds to shop on their favourite site; computational systems are expected, more than ever, to always be available, with minimal latency. This requires the deployment of applications to be different from what it was years ago. Applications should now be self-healing and self-adapting (scalable) to the resources they have and the requests they receive.

Figure 1: How to coordinate, monitor and manage each part of a computational system?

To coordinate all these systems, we turn to the concept of orchestration. Just as in an internal combustion engine, where the position of each valve and piston must be coordinated at every moment (and automated, without the driver's input), in computational systems orchestration, the automated configuration, management and coordination of each step and module to achieve a specific task, is not only a good idea but a crucial necessity.

By using orchestration tools, businesses can solve complex problems and tasks in an automated, coherent, more resilient and scalable way.

Orchestration in computational systems can be considered at three essential levels: applicational, infrastructure, and network. A meta level of orchestration, the coordination of the orchestrators themselves, can also be achieved using CI/CD mechanisms such as pipelines.

While operating at different levels, all these techniques and tools must overcome the same main challenges:

● Computational systems are heterogeneous by nature.

● Achieving a specific task involves sequencing many steps that span different applications, computational infrastructure, network devices and configurations.

● Any solution should be as automated as possible while being monitorable by engineers.

1. Applicational orchestration

With the growing complexity of applications, it is critical to ensure they work everywhere, avoiding the cliché "it works on my machine". To solve this, applications are packaged in containers that carry everything the application needs to run in any given environment. To avoid vast containers that bundle every dependency, including an Operating System (OS), containers run on top of a container engine (such as Linux Containers), allowing them to reuse the host OS's main features while remaining self-sufficient and portable.

Figure 2: Change of paradigm — from manually recreating the application environment towards containers

The best-known standardisation effort is the Open Container Initiative (OCI), and Docker is the most popular container solution implementing it. The Docker workflow is familiar to many developers: develop an app, create a file defining the container and its dependencies (the Dockerfile) and run a set of commands (docker build, docker run, docker-compose up) to deploy the application as a containerized app.
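As a minimal illustration of that workflow, a hypothetical Dockerfile for a small Python service might look like the following (the base image, file names and start command are assumptions for this example):

```dockerfile
# Base image: provides the OS features the container reuses
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first, so Docker can cache this layer between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how to start it
COPY . .
CMD ["python", "app.py"]
```

Building and running it follows the workflow described above: `docker build -t my-app .` followed by `docker run my-app` produces the same environment on any machine with a container engine.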

What standalone Docker (or alternatives such as Podman) does not solve is the automated coordination of containers:

- If one container fails, how to ensure another one is deployed? — self-healing

- How to automatically deploy more containers when the already deployed ones are under too much load? — auto-scaling

- How to manage containers deployed across multiple machines? — scheduling within a cluster

Solutions like Kubernetes, a container orchestrator, aim to answer these challenges by automatically managing where and how many containers (Pods, in Kubernetes nomenclature) are deployed. Kubernetes can auto-scale the number of Pods based on metrics such as CPU or memory consumption, or create new Pods if a node fails, yielding a system that self-heals and auto-scales to the needs of each moment.
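As a minimal sketch, the two properties just described can be expressed declaratively in Kubernetes manifests like these (the application name, image and thresholds are assumptions for illustration):

```yaml
# Deployment: Kubernetes keeps 3 replicas of the Pod running,
# recreating any Pod that fails (self-healing)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # hypothetical image
          resources:
            requests:
              cpu: 100m
---
# HorizontalPodAutoscaler: scale between 3 and 10 Pods on CPU load (auto-scaling)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

The engineer declares the desired state; Kubernetes continuously reconciles the cluster towards it, which is exactly the orchestration loop the article describes.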

Figure 3: From manual to Kubernetes-based container orchestration

Modern infrastructure consists of several computational systems (nodes) that may or may not be homogeneous in specifications, but will possess powerful hardware. To maximize hardware efficiency, each machine should run several applications and OSes simultaneously. Together, all nodes form a cluster of machines. Applications and their developers do not choose which machine to deploy to; the cluster manager, such as Kubernetes, decides and schedules the best node to run the container.

2. Infrastructure orchestration

Applicational orchestrators such as Kubernetes treat the infrastructure (the bare-metal machines where they run) as already running and configured. This is a necessary abstraction, but the infrastructure orchestration issue remains:

- Given a brand new machine, how to automatically configure it with OSes, network configurations, files and necessary programs (such as Kubernetes or monitoring stacks)? — zero-touch provisioning

- Given a cluster of machines, how to automatically monitor and reconfigure them if one or more nodes fail? — fault tolerance

In addition, with the advent of cloud-native solutions, many systems now span heterogeneous deployment locations, with part of the infrastructure on-premises and part consumed as a service from cloud providers such as Microsoft Azure, Google Cloud and Amazon AWS. Infrastructure tools such as Terraform give engineers an abstraction layer for configuring infrastructure regardless of its cloud location: engineers define infrastructure as code that can create resources on any of these clouds.
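As a minimal sketch of infrastructure as code, a hypothetical Terraform configuration creating a single AWS virtual machine might look like this (the region, AMI ID and instance type are assumptions for illustration):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

# A single virtual machine, declared as code and versionable like any other file
resource "aws_instance" "example_node" {
  ami           = "ami-0123456789abcdef0" # hypothetical image ID
  instance_type = "t3.micro"

  tags = {
    Name = "cluster-node-1"
  }
}
```

Running `terraform plan` previews the changes and `terraform apply` creates the resources; targeting another cloud mostly means swapping the provider block and resource types, not the workflow.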

By combining Terraform with Ansible, Chef, Puppet or other solutions that manage self-hosted, bare-metal infrastructure, it is possible to achieve zero-touch provisioning. Since these frameworks allow coherent configuration, every infrastructure node can carry a standard set of basic tools, including monitoring stacks, supporting disaster detection and potential recovery.
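As an illustrative sketch of that coherent baseline, a hypothetical Ansible playbook could install the same monitoring agent on every node (the host group and package names are assumptions for this example):

```yaml
# Hypothetical playbook: apply a common baseline to every node in the cluster.
- name: Provision baseline tooling on all nodes
  hosts: cluster_nodes
  become: true
  tasks:
    - name: Ensure base packages are present
      ansible.builtin.apt:
        name:
          - curl
          - prometheus-node-exporter # monitoring agent for disaster detection
        state: present
        update_cache: true

    - name: Ensure the monitoring agent is running
      ansible.builtin.service:
        name: prometheus-node-exporter
        state: started
        enabled: true
```

Because the playbook is idempotent, rerunning it against a freshly onboarded node or a repaired one converges it to the same known-good state.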

3. Networking orchestration

These nodes are also interconnected by powerful networking capabilities; configuring the network interfaces and adapters so that each machine is secure while still usable presents another orchestration challenge. In addition, it is also necessary to configure network-specific equipment, such as Access Points (APs), which provide wireless connections to networks, routers and switches.

Given the origins of networking and of the Internet itself, such equipment tended to be closed and manufacturer-specific, meaning each manufacturer had its own set of procedures to configure a router, an AP or a switch.

With the growing amount of networking equipment, the need for a unified, non-static, manufacturer-agnostic configuration increased. The concept of Software Defined Networking (SDN) was developed to meet it, allowing network devices to be configured dynamically by a controller.

Figure 4: Configuring network devices before and after the SDN paradigm

With SDN, and based on network metrics or fixed policies, controller applications can configure network devices from scratch or change their configuration. Since a controller can control a set of network devices, it can effectively control the entire network and coordinate them toward the sequence of tasks needed for complex workflows. Examples of such workflows include creating a new wireless network, onboarding new machines onto an existing network or creating a slice of a 5G network with specific guarantees of security or Quality of Service (QoS).
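The controller idea above can be sketched in a few lines. The following Python snippet models a toy controller pushing the same high-level intent (a network slice with a bandwidth guarantee) to every device it manages; the classes and rule format are invented for illustration and do not correspond to any real SDN API:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkDevice:
    """Toy switch/router/AP that accepts configuration from a controller."""
    name: str
    flow_rules: list = field(default_factory=list)

    def apply_rule(self, rule: dict) -> None:
        self.flow_rules.append(rule)

@dataclass
class Controller:
    """Toy SDN controller holding a global view of its devices."""
    devices: list

    def create_network_slice(self, slice_id: str, min_bandwidth_mbps: int) -> None:
        # Push one decision to every device: the key SDN idea is that
        # a single control point coordinates the whole network.
        for device in self.devices:
            device.apply_rule({
                "slice": slice_id,
                "min_bandwidth_mbps": min_bandwidth_mbps,
            })

# Usage: one controller configures the whole (toy) network in one step.
devices = [NetworkDevice("ap-1"), NetworkDevice("switch-1"), NetworkDevice("router-1")]
controller = Controller(devices=devices)
controller.create_network_slice("slice-qos-video", min_bandwidth_mbps=100)

print(all(len(d.flow_rules) == 1 for d in devices))  # True
```

In a real deployment the controller would speak a southbound protocol (such as OpenFlow) to physical devices, but the coordination pattern is the same.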

At Ubiwhere

As a creator of new, exciting solutions for Smart Cities and Telecommunications, Ubiwhere considers and solves the orchestration problem in many ways.

At the core of our flagship product, the Urban Platform, we are migrating to Kubernetes so it can automatically scale the product and ensure it is always running, even if one node fails. Within the core of our Edge Framework (MEC), we use Kubernetes Operators, a set of powerful extensions to the core Kubernetes concept, to automate the installation of applications on newly added edge nodes.

While some of our projects and products can and do use cloud providers, some solutions require self-hosted infrastructure, whether for privacy, legal or cost reasons. To ensure that all infrastructure, self-hosted or cloud-based, has a coherent, similar set of configurations, Ubiwhere is migrating towards infrastructure orchestrators such as Terraform. Terraform allows us to write configurations once and target Azure or AWS with configuration files that can be versioned and written similarly to our self-hosted configuration scripts. Within our self-hosted infrastructure, we leverage Ansible to configure newly onboarded nodes and the virtual machines on top of them.

Orchestrating every part of our solutions is an ever-evolving work of automating and coordinating ever-evolving systems, and Ubiwhere is working, and will continue to work, on these solutions.
