In my last post I argued that Kubernetes was a singularity, a BIOS for our global computer, that would propel the information industry to new heights. In this post I explore some of the implications of applications going “Kubernetes native”.
The shift from hardware to virtualized hardware drove enormous efficiency gains, but the applications running on that virtual hardware remained essentially unchanged. This is what many call “Lift-n-Shift”: for little to no risk, tremendous operational savings could be achieved simply by re-hosting legacy applications into virtualized infrastructure.
The Cheese has Moved
However, you are missing the point if you simply Lift-n-Shift your apps into containers. Yes, containers can run at somewhat higher density than VMs. But containers are much more interesting because of what developers can do when they rearchitect their applications natively for the Kubernetes environment.
It’s been noted elsewhere that 5G and Kubernetes are co-evolving. This is the game-changer. No longer do you write an application designed to run on a set of servers in a data center. With Kubernetes you are designing an application to run on the world’s computing infrastructure. This can be thought of as a rich computing environment that spans from data center to the mobile edge and which exhibits a pyramid of access latencies.
Because it is a level-triggered, intent-based system, Kubernetes is much more stable and reliable than other orchestrators in the richness (and chaos) of this environment. To quote James Bowes: “thinking about the problem as a level triggered system has led to an architecture that is clean, simple, and does what the user wants in spite of the inherent problems in distributed computing.” Where BIOS gave us a standard for reliably accessing a single computer’s memory hierarchy, Kubernetes gives us a standard and reliable way to access this much richer global computing latency hierarchy.
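The contrast between level-triggered and edge-triggered designs is worth making concrete. A minimal sketch of a level-triggered control loop, in the spirit of (but much simpler than) a Kubernetes controller, might look like this; the `get_desired`, `get_observed`, and `apply` callables are hypothetical placeholders, not real Kubernetes APIs:

```python
import time


def reconcile(desired, observed):
    """Compare the full desired state against the full observed state
    and return the delta to act on. Because the comparison is over
    complete states (the "level"), a missed event cannot be lost:
    the next pass of the loop will still see the discrepancy."""
    to_create = desired.keys() - observed.keys()
    to_delete = observed.keys() - desired.keys()
    return to_create, to_delete


def control_loop(get_desired, get_observed, apply, interval=1.0):
    """Level-triggered reconciliation: repeatedly re-derive actions
    from state, rather than reacting to individual change events."""
    while True:
        to_create, to_delete = reconcile(get_desired(), get_observed())
        apply(to_create, to_delete)
        time.sleep(interval)
```

An edge-triggered system that dropped a single "pod died" event would drift permanently; the loop above self-heals on its next iteration, which is why the approach holds up amid the chaos of a globe-spanning infrastructure.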
The Power to Disrupt
Because of the absurd richness and power of this global infrastructure (bajillions of cores, TiBs of RAM, and EiBs of storage; see Timothy Prickett Morgan), the potential of Kubernetes-native applications and services boggles the mind. When designing new native applications and services on top of this new K8s “BIOS,” we should be thinking very precisely about building to native K8s abstractions. Applications designed to take advantage of the full power of this environment will break new ground compared to blind Lift-n-Shift migrations of legacy applications onto container-based infrastructure. The Juniper Networks SDN security layer has the right technical vision of this infrastructure.
In this environment, natively designed applications break apart into their constituent elements and can deploy and scale across the latency pyramid where they fit best. At the core sits a bunch of data buckets and pipes. In the middle, intermediaries and business services reign. And at the edge (where users are roaming about), the elements dance across the edge infrastructure following their constituents in what I imagine as a containerized “ballet” focused on delivering the best possible user experience at the lowest possible latencies. In fact, the particular requirements of the next generation of 5G applications demand that applications and infrastructure be latency aware.
Real World Applications
For a real-world example of how Kubernetes transforms enterprise infrastructure, watch Katie Gamanji’s excellent keynote at KubeCon Europe about how Condé Nast built a globally distributed, yet centrally managed, IT infrastructure completely with Kubernetes. Notice her focus on “Origin Latency using market proximity for highest user experience.” What’s nice about the Condé Nast design is that their developers’ CI/CD experience mirrors the end user experience — locally deployed & managed, globally connected.
A recent tutorial by Francoise Paupier shows how data scientists can automate the Ops out of data science completely such that they can “deploy high availability models in minutes even for people not familiar with distributed computing” while leveraging local GPGPUs.
Future autonomous cars are envisioned to connect to one another as a mesh while simultaneously tapping data in the cloud, implicitly taking advantage of local latency. An autonomous vehicle traveling at 60 miles an hour that has to wait an extra 200 milliseconds to discover that it should brake will start braking 18 feet too late.
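That 18-foot figure falls straight out of the unit conversion; a quick sketch of the arithmetic:

```python
# 1 mile = 5280 ft, 1 hour = 3600 s, so 1 mph = 5280/3600 ft/s
MPH_TO_FT_PER_S = 5280 / 3600


def extra_stopping_distance(speed_mph, extra_latency_s):
    """Feet traveled during an extra decision-making delay."""
    return speed_mph * MPH_TO_FT_PER_S * extra_latency_s


# At 60 mph a car covers 88 ft/s, so an extra 200 ms of round-trip
# latency means braking begins ~17.6 ft (about 18 ft) later.
print(round(extra_stopping_distance(60, 0.2), 1))
```

This is exactly the kind of calculation that pushes the control loop out of the distant cloud and onto edge infrastructure close to the vehicle.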
The huge data volumes involved in genomics almost beg for a distributed architecture designed to support small medical offices running local clusters for fast diagnosis while taking selective advantage of the huge genomic datasets available in multiple clouds.
A K8s-native, blockchain-based personal finance application could revolutionize the banking industry by removing middlemen entirely.
The End Run
Locality awareness and application latencies matter in this new environment. The potential for disruption abounds, but only Kubernetes-native applications will be able to deliver breakthrough user experiences on the pyramid.