Scaling Containers in the Enterprise — Viewing Distributed Systems through the Lens of Computational Complexity

Kristopher Francisco
May 9 · 8 min read

Evolute’s leaders are often asked why we hold to certain truths: a single data and control plane across disparate operating systems, methodologies for simplifying configuration and management that are native to a container environment, and letting the operating system take on what it can before work reaches the container’s init subsystem. We’ve found this approach solves a number of challenges, including —

Ease of Management — the fewest steps needed to operationalize (deploy) software and manage operations

Scalability — from decreasing resource utilization (greater density) to increasing performance (horizontal scalability, fewer round trips)

Security — the ability to ensure non-repudiation, OS authorization and hardened access control across operating systems

Computational complexity, a field of theoretical computer science, gives us a means to evaluate the “feasibly decidable” or ideal path for any time and space challenge (i.e. compute resources) in a given problem set. When it comes to distributed systems (or the challenges of large-scale infrastructure architectures), while containers have been proven to solve many challenges “faster”, we do not always have a mechanism with which to evaluate the efficiency of an enterprise’s architecture (or our own compute offerings). Throughout this article we explore some of the benefits of achieving the best-case scenario (arithmetically represented by Ω, the lower bound) and how this focus on architectural design, simplicity and human-computer interaction (for this new ephemeral style of workload) offers added value for any enterprise scaling with the technology.
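For reference, Ω in asymptotic analysis denotes a lower bound: the best case a resource cost can achieve. A textbook formulation is:

```latex
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N} :\ f(n) \ge c \cdot g(n) \quad \text{for all } n \ge n_0
```

In architectural terms, reaching Ω means no alternative design could accomplish the same work with asymptotically fewer resources.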

Ultimately, we find these decisions not only improve performance for the technology in enterprise environments, but bring organizations closer to microservices in the cloud, to the promise of “no downtime computing”, and to many other benefits of large-scale cloud or distributed edge compute architectures.

Moreover, we present a basis for why certain “truths” or decisions have been made in achieving a container environment native to this new style of computing (we often refer to run-to-failure engineering as it relates to ephemeral computing). Classic problems offer useful framings here. The “traveling salesman” — how one gets from point A to point B given a set of paths — can yield order-of-magnitude improvements in Layer 7 routing. “Perfect matching” — the pairing we all search for in our dating lives — can determine which workloads should be co-scheduled, placed on the same box, or kept in close proximity. Engineers facing these challenges can leverage the theories of this model to evaluate the effectiveness of their architecture. This is useful as one evaluates their true goals, whether business evolution in the digital transformation, pure-bred microservices for their cloud environment, or just darn good container infrastructure for their compute needs.
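To make the “perfect matching” framing concrete, here is a minimal sketch. The workload names and affinity scores are hypothetical, and the greedy pairing is only an approximation; a true maximum-weight matching would use something like the blossom algorithm:

```python
from itertools import combinations

# Hypothetical affinity scores between workload pairs: higher means the
# pair benefits more from being co-scheduled on the same node.
affinity = {
    ("web", "cache"): 9,
    ("web", "batch"): 2,
    ("web", "db"): 5,
    ("cache", "batch"): 1,
    ("cache", "db"): 4,
    ("batch", "db"): 3,
}

def greedy_matching(workloads, affinity):
    """Pair workloads greedily by descending affinity score."""
    def score(pair):
        a, b = pair
        return affinity.get((a, b), affinity.get((b, a), 0))

    pairs = sorted(combinations(workloads, 2), key=score, reverse=True)
    placed, matching = set(), []
    for a, b in pairs:
        if a not in placed and b not in placed:
            matching.append((a, b))
            placed.update((a, b))
    return matching

print(greedy_matching(["web", "cache", "batch", "db"], affinity))
# Pairs web with cache, leaving batch with db.
```

The point is not the heuristic itself but the framing: once co-location is expressed as a matching problem, decades of complexity results tell you what is cheap to approximate and what is expensive to solve exactly.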

Lastly, evaluating for computational complexity benefits both software and hardware processes, even process optimization in the OS kernels for containers, and this discipline has also led our engineers to create the most effective operations (and design) across the container journey. It is no coincidence this author found references to the topic¹ from his former employer, Apple, with a simple search on computational complexity and the company’s name. As we step through the logic and philosophy of some of the approaches in the theory, such as bounded arithmetic (think of finding the “least common denominator” for architectural complexity) and strict finitism (making the seemingly infinite challenges of an architecture finite), we hope you too will come to understand the need to consider the maturity of the multiple facets of an architecture, particularly as we continue to learn of Fortune 50 organizations leading energy and automotive that remain on the fence about the effectiveness of the technology.

Bounded arithmetic

While an early and primitive approach within computational complexity theory, much as delimiter-separated files are to data serialization, this approach explores the limited set of information actually needed. In the Evolute environment, we found one of the greatest challenges organizations have is simply deploying a stable container architecture they can use across a datacenter (or even across the globe). As the share of enterprise data computed outside the datacenter grows from 10% to 75% at the edge (see MEC), the cost of operationalizing new business becomes key to increasing and widening adoption of this technology. At Evolute, we explored the complexity of this phase of human-computer interaction through the lens of “bounded arithmetic”. An example of this is here:
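As a hypothetical sketch of the idea (the keys and values here are illustrative, not Evolute’s actual configuration format): a bounded definition of just two entries, from which the platform derives every per-subsystem setting rather than asking the operator to repeat them.

```python
# Hypothetical minimal platform definition: two entries; everything
# else (container, network, service discovery) is derived from them.
platform_config = {
    "cluster_cidr": "10.64.0.0/16",    # single network definition
    "nodes": ["10.0.0.1", "10.0.0.2"]  # machines to operationalize
}

def derive_subsystem_configs(cfg):
    """Expand the bounded definition into per-subsystem settings,
    de-duplicating what would otherwise be repeated across networking,
    the containerizer and service discovery."""
    return {
        "network":   {"pod_cidr": cfg["cluster_cidr"]},
        "discovery": {"peers": cfg["nodes"]},
        "container": {"hosts": cfg["nodes"]},
    }

derived = derive_subsystem_configs(platform_config)
print(derived["discovery"]["peers"])  # subsystems share one node list
```

The “bounded” part is that the operator’s input set stays fixed and small no matter how many subsystems consume it.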

This equates to fewer than 20 lines (we actually require 2) for configuring the entire platform across container, network and service infrastructure. These optimizations continue throughout our environment, from natively leveraging systemd (or the container itself) for non-dependent systems infrastructure, to our ability to create a single API to drive a native platform for the ecosystem.

While a small and somewhat terse example of achieving simplicity at scale, this ability to find the least-common configuration needed, and to define configuration at rest, allows for far faster deployment, advanced configuration, and scalability for compute operations. This is very important in places the cloud can’t go, such as edge computing (where real-time events happen in the field), in manufacturing, or in the operating room. In the end, by de-duplicating the configuration needed across storage, networking, the containerizer, service discovery and other core infrastructure, we were able to reduce configuration to the least complex “formula” for deployment (as well as for software definitions) for any container environment. Thus, for enterprise deployments, we believe we present a model for ensuring any infrastructure can be defined and deployed in record time.

Architects, DevOps engineers, and administrators can benefit from this evaluation as they construct and determine their own efficiencies, especially given the demand and complexity of today’s business and next-generation use cases.

Strict Finitism

A more mature line of logic holds that a complex web of data (the data set of the many software and architectural components in a container architecture) must take a more finite approach to achieving simplicity and scale in a container environment. Given that enterprise resources are always limited and the number of problems that can be solved is always constrained (thus requiring prioritization, even in IT), arriving at the destination of horizontal scalability (i.e. the most efficient way to run as many software, product and/or business functions concurrently as possible) is a nirvana when it comes to “computational complexity” for enterprise (and distributed) systems.

Strict finitism is the idea that a problem which could admit an infinite number of solutions can be given a finite approach for a specific problem set. With finitism, we treat an infinite set of architectural complexities as something to reduce to a finite “formula” for performance and compatibility in the compute environment (e.g. giving any physical node, on any operating system, the ability to natively run the container workload — horizontal scalability — yields the greatest resource utilization for the given set of problems or software).
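One way to make this finite “formula” concrete (a sketch, not Evolute’s actual scheduler) is to treat placement as bin packing: a finite set of nodes must host as many workloads as possible, and a simple heuristic replaces an otherwise intractable search.

```python
def first_fit_decreasing(workloads, node_capacity):
    """Place workload demands onto the fewest nodes possible.

    Optimal packing is NP-hard, so a finite heuristic stands in:
    sort demands descending, place each into the first node with
    room, and open a new node only when none fits.
    """
    nodes = []  # each node is a list of placed demands
    for demand in sorted(workloads, reverse=True):
        for node in nodes:
            if sum(node) + demand <= node_capacity:
                node.append(demand)
                break
        else:
            nodes.append([demand])
    return nodes

# Eight workloads with CPU-core demands, nodes with 8 cores each:
# 19 cores of demand fit on three 8-core nodes.
print(first_fit_decreasing([5, 3, 3, 2, 2, 2, 1, 1], node_capacity=8))
```

This is the strict-finitist move in miniature: accept a bounded, good-enough procedure rather than chase the infinite space of optimal placements.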

By allowing any software (Windows or Linux), graphical or non-graphical, to participate in the container operational infrastructure (e.g. service discovery and DNS awareness across bare-metal, virtualized or native container environments), we provide a ubiquitous compute layer which is organizationally simple to manage, scalable, and free of needless complexity for any enterprise environment.

Operationally, just as many great engineers take the shortest path to solve a problem, we too find that models which solve the majority of engineering challenges or software use cases are the best path to a “compute nirvana”. By ensuring a single platform which can scale both hyperscale compute environments, like those at Apple, Google or Facebook, and industry compute environments across energy, banking, and healthcare, we allow for architectural scalability with the thrill of no vendor lock-in, in a private or public cloud environment.

(P.S. Would you believe none of these organizations uses Kubernetes as its core container platform? While we are immensely compatible with it, I believe organizations need to look beyond the scheduler — another story for another day.)

For enterprises, we continue to find use cases where there is a need to bring the cloud to places it cannot go, as with edge computing (or MEC). Alternatively, as our customers put it, Evolute is often the first software which allows them to operationalize their entire compute ecosystem (greenfield and brownfield) on the same platform.

This shows itself time and time again, as was the case in our partnering with Intel on IIoT use cases (demonstrating that software based on their RTOS can be containerized), with Samsung as they leveraged POSIX-based containers backed by ZFS and Solaris-based zones, and with many other customers running Windows graphical or server-side workloads that can move across systems natively in a container environment, without VDI or Windows servers.

For the latter, this architecture provides strict finitism in the number of applications which can interoperate, and thus scale, in a container environment; hence it meets our expectations for a distributed compute platform: to leverage all existing hardware (resources of time and space) to solve the greatest number of software functions in a given organization.

This ubiquity of compute comes throughout many other layers of our architecture including:

  • Having a single data and control plane via BGP (bird, e.g. via calico) and Windows BGP to natively talk across OSs
  • Enabling a single platform architecture for deploying, containerizing or instantiating any application whether server based or graphical (scary, we know)
  • Ensuring compatibility via different scheduling hierarchies (Kubernetes, Docker, Mesos, or future capacity-based resource schedulers, et al.)
  • API-driven container and scheduling management allowing any layer to be substituted in the future (anti-vendor lock-in) to ensure organizations can seamlessly move to the newest paradigm, achieving Ω for a distributed compute architecture
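The last point, substitutable layers behind a single API, can be sketched as a minimal interface (the names here are hypothetical, not Evolute’s actual API):

```python
from abc import ABC, abstractmethod

class Scheduler(ABC):
    """Hypothetical scheduling layer behind a stable facade. Any
    concrete backend (Kubernetes, Mesos, a future capacity-based
    scheduler) can be swapped in without changing callers."""

    @abstractmethod
    def schedule(self, workload: str) -> str:
        """Return the node chosen for the workload."""

class RoundRobinScheduler(Scheduler):
    """Trivial backend used to illustrate the substitution point."""
    def __init__(self, nodes):
        self.nodes = nodes
        self._next = 0

    def schedule(self, workload: str) -> str:
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node

def deploy(scheduler: Scheduler, workload: str) -> str:
    # Callers depend only on the interface, never the backend,
    # so the scheduling layer can be replaced later.
    return scheduler.schedule(workload)

s = RoundRobinScheduler(["node-a", "node-b"])
print(deploy(s, "web"), deploy(s, "cache"))  # alternates across nodes
```

Because `deploy` only sees the abstract interface, replacing the scheduler is a one-line change at construction time rather than a platform migration.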

As administrators, CTOs, and engineers design, architect and evaluate their own ecosystems, ensuring their compute platforms scale architecturally, and can handle the most distributed software challenges, is key. Whether through adopting the latest scheduling techniques, service discovery and distributed consensus of infrastructure metadata, or the ability to communicate through lightweight and scalable network topologies, evaluating an architecture for its ability to meet the highest order of compatibility or horizontal distribution is essential in a container environment.

Through the lens of computational complexity and the real-time challenges which affect an enterprise’s ecosystem, organizations can evaluate their ability to meet and/or exceed their next-generation compute needs. Through our constant evaluation of this, we believe the Evolute platform is a model and a resource for organizations working to achieve maturity in their enterprise organizations. We hope architects, engineers, and operators evaluating their environment and ability to scale can likewise leverage these insights to make the right decisions for their organizations.

¹ Reference to computational complexity in software : https://devstreaming-cdn.apple.com/videos/wwdc/2013/224xcx5x1y1yx8ra5jbmfyhf/224/224.pdf

Faun

The Must-Read Publication for Aspiring Developers & DevOps Enthusiasts

Written by Kristopher Francisco

Blog entries from Evolute’s founder and tech CEO
