Kubernetes as a Common Ops Data Plane

TL;DR: IMO Kubernetes is a database, and it is well positioned to help eliminate several of the other operations databases hanging around our stacks. Doing so would be great for consumers, as long as the engineering and the steering committee can pull it off.


by Jeff Nickoloff — Topple, August, 2018

I’ll be the first to say that I’ve been very critical of Kubernetes over the last few years. One of my jobs as a consultant and engineer is to represent the interests of technology adopters in reviewing and evaluating projects and products on technical merit. Lately, I’ve been reflecting on where Kubernetes fits in engineering stacks. Today I want to write about why I believe it will be a critical low-level platform for the entire next generation of operations tooling.

I don’t think Kubernetes is a service orchestration system or a resiliency platform. Those are optional features. At its core, Kubernetes is a CP (consistent and partition-tolerant) clustered key-value database with a schema-less control plane API.

Background

Kubernetes recently released a new first-class resource type called the Custom Resource Definition (CRD). CRDs allow administrators to define arbitrary resource schemas. You can think of them like table definitions in a more traditional RDBMS. The Kubernetes API reflects the resources defined in its database and provides a dynamic CRUD interface for each of them.

Suppose I wanted to build a trouble ticketing system. I might create a CRD for a type Ticket. The objects that I create with that type might include properties like title, date reported, reporting user, and so on. If I write a bit of YAML describing validation rules for the data model and add the CRD to a running cluster, the cluster’s API will be extended with CRUD operations for working with that type of data. I’ll be able to use kubectl to work with Ticket objects, or integrate with the generated API from other software.
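
Here is a minimal sketch of what that CRD might look like, written against the apiextensions.k8s.io/v1beta1 API that was current at the time of writing. The example.com group, the field names, and the validation rules are all hypothetical, not any real project’s API:

```yaml
# Hypothetical CRD registering a Ticket resource type with the cluster.
# The group, names, and schema below are illustrative only.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: tickets.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Ticket
    singular: ticket
    plural: tickets
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required: ["title", "reportedBy"]
          properties:
            title:
              type: string
            reportedBy:
              type: string
            reportedAt:
              type: string
              format: date-time
```

Once a manifest like this is applied with kubectl, the API server serves CRUD endpoints for the new type under /apis/example.com/v1alpha1/..., and kubectl get tickets behaves just like it does for built-in resource types.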

CRDs make Kubernetes one of the easiest ways to roll out declarative CRUD APIs for custom data types. No code is required because the APIs are defined by structures described with YAML and stored in etcd (the clustered KV database).

The objects modeled with CRDs have associated state machines. Transitions between states and other workflows are powered by programs called Controllers. Kubernetes ships with several “core controllers” but adding your own is within reach for most programmers.
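
To make the state machine idea concrete, here is what an instance of the hypothetical Ticket type from above might look like. A custom controller could watch these objects and advance status.phase through whatever workflow the ticketing system defines; the phases here are made up for illustration:

```yaml
# Hypothetical Ticket object created after the CRD above is installed.
# A controller watching Tickets would react to spec changes and record
# progress in status, e.g. moving phase from New -> Triaged -> Resolved.
apiVersion: example.com/v1alpha1
kind: Ticket
metadata:
  name: login-page-500s
  namespace: ops
spec:
  title: "Login page returns 500s"
  reportedBy: "jeff"
  reportedAt: "2018-08-02T09:00:00Z"
status:
  phase: New
```

By Kubernetes convention the controller, not the user, owns the status block, which keeps desired state (spec) cleanly separated from observed state.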

Vision

Kubernetes is a database. It is etcd with some reflective API magic and a pattern for building external “database triggers” called controllers. And I think its market position could help it fill the biggest gap in the DevOps space: a common data plane and automation engine.

[Image: any proprietary ops integrations suite after two generations of developer churn, AKA the Seattle gum wall. ©vancitybuzz.com]

We’ve had databases forever. That wasn’t a gap. The gap has always been the long-tail integration between focused products. The custom integrations and workflow automations that our organizations build up are like bubblegum holding the popsicle sticks together. As our platforms and engineering teams change, new wads of gum are stuck onto the growing pile of engineering liability. Bit-rot everywhere.

The integration gap impacts consumer adoption. Tool suites that play well together have a strong market gravity. Tooling in the Atlassian ecosystem comes to mind. You might not like all of their tools, but you need a product from each of the categories they serve. It is easier to adopt the tool you don’t like in one category so that you can easily integrate with the tool you do like in another.

There are other Kubernetes-like systems, but Kubernetes feels like the first project that independent vendors are rushing to rearchitect on top of. In doing so, they are creating a sort of market momentum that others follow, and they are exposing their data models to direct integration and adoption by other projects. That is creating a network effect and building a new vendor-agnostic suite of tools that play well together.

Reducing Complexity by Consolidating State Management

The tools that we use in our infrastructure are often built or sponsored by specific companies or groups. They are opinionated and ship minimal native integrations. Kubernetes as a leaf orchestration platform is no different. But all of those products we piece together to “compose our platform” have their own databases.

Take the tour with me: version control system, build system, artifact repository, artifact index and package management, delivery pipeline automation, test result repository, ticketing system, infrastructure orchestration, secret management, configuration management, persistent volume orchestration, public key infrastructure, service orchestration, software defined networking, pager management, dashboarding… These are all distinct product categories, each with a world of projects and products that depend on tightly coupled databases.

The degree to which you hate stateful applications like databases increases exponentially with the number of stateful applications you operate. Dealing with state sucks. There are too many ways to break a state management contract.

Kubernetes is complicated, but if I could retire 16 databases by adopting it, that is a meaningful simplification.

It might be worth jumping on a new CI/CD project like Argo to stop managing Jenkins backups. I might choose etcd for my application KV database instead of Consul because I can automate etcd operations with the etcd operator.

[Image: scraping off crusty integrations. ©NBC News]

At some point in the future, once a critical mass of vendors has augmented the catalog of Kubernetes-native ops tooling, consumer organizations will have incredible mobility and previously unachievable automation potential. The open and shared nature of Kubernetes objects enables arbitrary mash-ups of that data. Peer subsystem controllers are free to integrate on the shared data model, so competing projects will be able to share resource definitions and make migrations trivial.

But why Kubernetes instead of X?

From what I can tell, the only difference between Kubernetes and other similarly extensible platforms or databases is marketing momentum. The name association with Google, Microsoft, and Red Hat, along with the pooled marketing budget of the CNCF and its member organizations, bought it a great community of contributors. By extension it gained a wealth of open tools, enhanced docs, open training material, and other important stuff. Most companies selling operations or orchestration products are Kubernetes companies at this point.

Future of the Industry: Hyperbole?

Let’s pretend for a moment that I didn’t spend several paragraphs describing an enterprise service bus. Let’s suppose that API contracts are as simple as their structured data and that we all magically agree on data semantics and implicit or emergent behavioral contracts. Pretend that this data plane doesn’t fail for all the same reasons ESBs did in the mid-2000s and that unencapsulated data is not poison for a system over time.

Instead suppose Kubernetes is adopted in this fashion. Then what does the future look like for clouds, tooling providers, and the people with skin in the game?

SaaS infrastructure tooling is dead. Fin. There is no room for SaaS once consumers have reclaimed their ops metadata and ops are commoditized. SaaS tooling will move back to a software publishing model or fall to new developers who publish instead of serve. I’d laugh if we start calling tooling data models “formats” as all the major products in each category suddenly start being “compatible” with each other’s CRDs. It will make build system metadata feel like Word documents.

Non-AWS clouds have shot themselves in the foot. In an attempt to move in on AWS, they have marketed the criticality of multi-cloud and hybrid-cloud adoption. In creating Kubernetes — the instrument for achieving that vision — the cloud companies that pushed it will have created the very thing that negates their value-add. It is easy to run a single database, and the cloud-specific value-add services are drop-in extensions on Kube.

Data center engineering is a well-known space, and there are lots of great data center companies that we don’t really think about as cloud companies. The cloud value-prop is in creating economies of scale for engineering while only paying for power and bandwidth usage. But when the only common contract required between various Kubernetes deployment targets is power and bandwidth, then we don’t need that engineering anymore. The engineering is provided by the SaaS vendors turned software publishers.

Serverless is a reaction to software configuration management pain, process supervision burden, and clunky VM-based “isolation-for-manageability” primitives that limit autoscaling expressiveness and increase scale-up latency. The “server” in serverless refers to the burden of maintaining a long-running process, managing its life cycle, and dealing with coarse deployment primitives. It isn’t about some aversion to computers. We mastered plugging in machines a long time ago. The future of serverless is servers. The serverless vision can be realized with cleaner and higher-resolution APIs.

Roadblocks and Challenges

First, Kubernetes will need to find a way to succeed where ESBs failed. The proposition that APIs are only their structural components (action, input parameters, output shape, and error cases) is naive. The Kubernetes API communicates nothing about semantics or implicit properties, and it will be difficult to anticipate how competing controllers might affect the state machines of shared objects.

Second, Kubernetes needs to shape up for real-world deployments at a scope beyond a single data center. Continuous deployment and other workflows that cross stages and data centers operate at the broadest scope. Kubernetes needs a solid federation story and external API enhancements to detect and prevent stale updates.

But beyond Kubernetes, the whole vision depends on tooling vendors racing to rebuild some critical mass of the projects in their space as CRDs and controllers. Maybe they’ll adopt the Operator framework.

But that gold rush can only happen if the members of the pay-to-play marketing engine called the CNCF more or less agree that this is the future they want to build. Kubernetes changes pretty quickly, and there is nothing stopping that consortium from pivoting month to month. But like any smart group of product managers, they’ll push changes that drive adoption.

The whole thing is predicated on broad vendor and consumer adoption. We’re not there yet, but if the tooling ecosystem keeps moving the way it is, the allure of automated operations will be too good for consumers to pass up.

In Conclusion

This may be a bit fantastical, and maybe I’ve missed something. Maybe we’ll be stuck with leaf Kubernetes clusters and a whole bunch of complicated state to manage. But as a consumer I’m excited about this potential, because Kubernetes as a data plane puts a ton of control back in the hands of local engineering teams and creates new opportunities for automation.

I suspect that adopting the pattern at scale would create a short-term gold rush for new tooling publishers, but ultimately hurt cloud adoption and tooling SaaS revenue. That concerns me because software licensing is a rough business model and realigns tech interests with other copyright-pushing businesses.

When you buy into Kubernetes you’re taking on another third-party dependency, one owned and controlled by a consortium of the biggest technology vendors around, at the most sensitive portion of your operations platform. Do so with open eyes: make sure you understand how interests are aligned, buy support contracts, and abuse them.

Community

If you’re in Arizona we’re looking for people interested in sharing their experiences and insight at the 2018 Phoenix DevOpsDays. The event is in late October and the CFP closes on August 15th.

Who am I?

My name is Jeff Nickoloff. I’m a cofounder of Topple (https://gotopple.com), a technology consulting, training, and mentorship company. I’m also a Docker Captain, former Amazonian, and someone who just loves software engineering.
