Why are Containers so Disruptive to the Data Centre?

Enterprise architecture is usually a mixture of technologies, platforms, stacks, licenses and maturities, all owned by different teams and hosted in different data centres and clouds.

This diversity is simply the sign of a long life (enterprises generally didn't appear yesterday), but it can result in high operational costs and inefficient use of resources.

Utopian Visions are so 20th Century

Is this technical mix just debt to be fixed with a Utopian clean slate?

Or should we find a way to embrace the diversity?

What if we had a common, computer-regulated way to manage heterogeneous services and environments? Such an approach would ultimately allow systems to share infrastructure more efficiently and to intelligently handle environmental changes without human intervention. In the not-too-distant future we believe this self-driving behaviour can be delivered using containerization, orchestrators and operational applications.

Don’t Fence Me In?

Linux containers, as popularized by Docker, are built on kernel features (namespaces and control groups) that improve both developer and operational productivity.

[Illustration: containers securing a server against unruly players]

Operationally, containers provide a fast and lightweight mechanism, implemented in the kernel via control groups (cgroups), for fencing off physical resources such as CPU, memory or I/O bandwidth and assigning them to a specific application or group of applications.
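In Docker terms, this fencing is what flags like `docker run --cpus=2 --memory=512m` request from the kernel. As a rough conceptual sketch of the bookkeeping involved (toy code, not real cgroups; all names and numbers are illustrative):

```python
# Toy model of cgroup-style resource fencing: reserve a slice of a node
# for each app, and refuse requests that would overcommit the machine.
# Real containers delegate all of this to the Linux kernel.

class Node:
    def __init__(self, cpus, mem_mb):
        self.free_cpus = cpus
        self.free_mem_mb = mem_mb
        self.fences = {}  # app name -> (cpus, mem_mb) reserved

    def fence(self, app, cpus, mem_mb):
        """Reserve resources for one app, or refuse the request."""
        if cpus > self.free_cpus or mem_mb > self.free_mem_mb:
            return False  # would overcommit the node
        self.free_cpus -= cpus
        self.free_mem_mb -= mem_mb
        self.fences[app] = (cpus, mem_mb)
        return True

    def release(self, app):
        """Tear down an app's fence and return its resources to the pool."""
        cpus, mem_mb = self.fences.pop(app)
        self.free_cpus += cpus
        self.free_mem_mb += mem_mb

node = Node(cpus=8, mem_mb=16384)
assert node.fence("web", cpus=2, mem_mb=4096)
assert node.fence("batch", cpus=4, mem_mb=8192)
assert not node.fence("greedy", cpus=4, mem_mb=8192)  # refused: only 2 CPUs left
```

The point is that the reservation is cheap and instant, which is what makes the fast placement decisions described below possible.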

Developmentally, Docker containers provide a common interface for encapsulating an application together with its dependencies, enabling it to be deployed to, moved between, or controlled on physical or virtual servers at sub-second speeds. This encapsulation also plays a very useful role in continuous delivery (CD).

Orchestral Manoeuvres, Not in the Dark

Both the operational and developmental strengths of containers are being exploited by a new form of operations technology called container orchestration engines (COEs). These include Docker's Swarm, Apache Mesos (commercialized by Mesosphere), Google's Kubernetes, Amazon's ECS and HashiCorp's Nomad.

Container orchestrators sit across one or more data centres (DCs) and enable efficient placement, rapid movement and safe co-location of workloads. These orchestrators expose APIs that third-party tools can use to program workload placement and movement at speeds that cannot be achieved with VM orchestration, roughly two orders of magnitude faster than traditional VMs.
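To make "programmable placement" concrete, here's a minimal sketch of the kind of decision a COE (or a tool driving its API) makes in milliseconds. The node names, capacities and the naive first-fit policy are invented for illustration; real schedulers weigh far richer constraints:

```python
# First-fit placement of containerized workloads onto a fleet of nodes.
# Capacities are free CPU cores; a workload lands on the first node with room.

def place(workloads, nodes):
    """Assign each workload to the first node with capacity; return a plan."""
    plan = {}
    for name, need in workloads.items():
        for node, free in nodes.items():
            if need <= free:
                nodes[node] = free - need  # claim the capacity
                plan[name] = node
                break
        else:
            plan[name] = None  # unschedulable: no node has enough room
    return plan

nodes = {"dc1-node1": 4.0, "dc1-node2": 2.0}
workloads = {"api": 3.0, "worker": 2.0, "cache": 2.0}
print(place(workloads, nodes))
# → {'api': 'dc1-node1', 'worker': 'dc1-node2', 'cache': None}
```

Because container start-up is sub-second, a plan like this can be computed and enacted continuously rather than once per quarterly capacity review.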

Self-Driving is the New Black

The introduction of fast, programmable orchestrators into data centres creates the possibility of real-time data centre control using a new breed of app: Operational Applications.

Operational Applications will use container orchestrator APIs to autonomously drive DC behaviour. Not just deployment, but real-time control. Operational Application decisions can be based on real-world metrics, such as external request response times or estimated job completion times, combined with business SLAs and prioritisation guidelines. Operational Applications will replace current manual or scripted operational tasks, which are slow, labour-intensive and error-prone.
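As a sketch of one decision step inside such an application: compare a live metric against an SLA and emit a scaling action to send to the orchestrator's API. The thresholds, bounds and names here are all invented for illustration:

```python
# One control-loop step for a hypothetical Operational Application:
# scale up when the latency SLA is breached, scale down when there is
# ample headroom, otherwise hold steady.

def decide(replicas, p95_latency_ms, sla_ms, min_replicas=1, max_replicas=20):
    """Return the desired replica count for the next control interval."""
    if p95_latency_ms > sla_ms and replicas < max_replicas:
        return replicas + 1   # SLA breach: add capacity
    if p95_latency_ms < 0.5 * sla_ms and replicas > min_replicas:
        return replicas - 1   # lots of headroom: reclaim resources
    return replicas           # within band: do nothing

assert decide(replicas=3, p95_latency_ms=450, sla_ms=300) == 4
assert decide(replicas=3, p95_latency_ms=100, sla_ms=300) == 2
assert decide(replicas=3, p95_latency_ms=250, sla_ms=300) == 3
```

A human runbook encodes exactly this logic today, just executed hours later and less reliably.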

Using COEs and Operational Applications, enterprises can harness more of the ~85% of their DC resources that are currently, on average, wasted. They can also free up their expensive, highly trained technology workforce to work on new products and features.

Data Centres are Revolting

COE APIs could be as revolutionary to data centres as OS APIs were to the desktop PC.

Killer Operational Applications that increase resource utilisation, introduce self-healing or significantly cut DC energy usage are already under development.

Beyond these, future Operational Applications will introduce functionality we don't even know we need yet: perhaps controlling QoS and energy use in solar-powered DCs to handle variations in available power, or repelling cyber attacks by identifying and propagating system-wide patches.

No Need for Oracles

Real-time responsiveness combined with more effective resource utilisation should remove the need for data centre demand prediction, a notoriously difficult science. Application- and SLA-aware Operational Applications can, if required, redirect DC resources away from lower-priority workloads such as test servers or non-critical production processes and towards high-priority, user-facing services.
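The prioritisation idea can be sketched as a simple reclaim policy: when a high-priority service needs CPU, pause the lowest-priority running workloads first. The workload names and priority scheme below are illustrative only:

```python
# Reclaim CPU for a high-priority service by pausing lower-priority
# workloads, lowest priority first (priority 1 = highest).

def reclaim(running, cpus_needed):
    """Return the workloads to pause, or [] if demand cannot be satisfied."""
    victims, freed = [], 0.0
    for name, prio, cpus in sorted(running, key=lambda w: -w[1]):
        if freed >= cpus_needed:
            break
        victims.append(name)
        freed += cpus
    return victims if freed >= cpus_needed else []

running = [("checkout", 1, 4.0), ("nightly-report", 3, 2.0), ("test-env", 4, 2.0)]
print(reclaim(running, cpus_needed=3.0))
# → ['test-env', 'nightly-report']
```

Note the high-priority checkout service is never touched: the policy sacrifices test environments and batch jobs first, exactly the trade-off the SLA guidelines above would encode.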


Container orchestration engines combine the development-friendly encapsulation features of containers with their operational strengths (safe co-location and very fast deployment) to provide APIs for data centres and their contents.

New operational applications will use these APIs to control diverse, containerized workloads across multiple data centres, achieving goals such as improved sharing of resources, reduction of over-provisioning, self-healing and self-defence.

In our opinion, Operational Applications and COE APIs create the potential for a significant increase in the exploitation of data centres, on a par with the business exploitation of PCs in the 1990s or of smartphones in the past decade.

Anne Currie is the CTO of Microscaling Systems, home of MicroBadger. We advocate more effective container use with metadata. Let’s organise!

