Unsolicited Kubernetes Predictions for 2020
Cloudy, with a chance of YAML
This post contains my predictions for Kubernetes and the ecosystem in 2020. I tried to make each one as measurable as possible so I can grade myself at the end of the year. I also tried to avoid the obvious ones (vague stuff like "Kubernetes will continue to grow!").
Disclosure: I work at Google, on Kubernetes-related things.
Vendors catch up
Every three months the fine folks of sig-release get another stable release out the door. The community officially supports the three most recent minor versions at a time, which means you need to upgrade fairly frequently to stay on a supported version of Kubernetes.
Unfortunately, this pace has been faster than many of the cloud vendors have been able to keep up with. This has left users and developers in the ecosystem in a tough spot when choosing which versions to target.
Prediction: AWS, GCP and Azure ship the last release of 2020 within a month of the stable release.
The general availability of Custom Resource Definitions in Kubernetes 1.16 has ushered in a new era of APIs. Vendors, end users and everyone else in the community are building CRDs to represent everything from email notifications to delivery pipelines. Unfortunately, namespacing CRDs with custom API group-version-kind tuples can only go so far, and every new tool installed into a cluster results in a handful of new resources appearing.
These resources are beginning to conflict, and things will only get worse. Kubernetes, Istio and Knative each bring their own definition of Service, and those definitions are incompatible with each other, and even with themselves across versions.
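To make the ambiguity concrete, here are two illustrative manifests (names and image paths are placeholders): the kind alone tells you nothing; only the full group-version-kind tuple in apiVersion identifies which Service you are actually dealing with.

```yaml
# A core Kubernetes Service: group "" (core), version v1
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
---
# A Knative Service: same kind, different group, a completely different schema
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example/hello
```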
Thankfully, CRDs do not need to be tightly coupled to their implementations, even though they mostly are today. One resource definition can be implemented by any number of controllers. I'm hoping that we can get a handle on this resource namespace boom early, with a community-hosted hub of common resource definitions.
Prediction: A community owned “CRD Hub” will emerge to collect standard resource definitions that are implemented by controllers in other projects.
Frameworks get useful
The majority of useful software frameworks in use today have come from practitioners, not vendors. Ruby on Rails came from DHH’s work at BaseCamp. Django came from the web team at the Lawrence Journal-World newspaper. Spring came from Rod Johnson trying to write a book of examples using J2EE.
These tools were built to scratch actual itches, by developers living and breathing the problem, not from PaaS vendors looking to sell cores. This type of software can’t be designed in a classroom or laboratory. It must be built by extracting useful patterns and interfaces from real-world applications.
I don't think all the existing frameworks will disappear in 2020; they've built some useful and impressive technology. I'm betting that we'll see new work open-sourced by some of the largest practitioners that ties the useful parts of these systems together in ways that make sense to actual users.
We'll stop configuring observability, logging, monitoring, debugging and tracing by hand. Once we understand the right conventions, convention-over-configuration systems will be built that give us these capabilities out of the box. Kubernetes is amazing because it lets you build reliable, scalable, observable, production-grade applications on top of containers through YAML. But all of this wiring is the model-view-controller problem of today.
Prediction: a non-vendor practitioner will open source a framework that gains significant adoption.
Native language bindings make a comeback
One of the strengths of Kubernetes is that it makes it possible to port just about any application to it without modification. This lets legacy applications pick up some of the cloud native benefits without a rewrite. The entire operator pattern is a method of wrapping non-k8s-aware apps with a k8s-aware layer.
We’ve been focusing on hiding Kubernetes from our applications for too long now. Truly cloud native applications can take advantage of the platform they are running on by becoming aware of the Kubernetes APIs. We’ll see a rise of apps that follow the controller pattern and store data in-cluster via CRDs. Libraries will emerge that make distributed-locking, leader-election and other formerly-complicated patterns trivial.
There's been some work here, but it has all seen limited adoption. I think it was just too early: no one was rewriting and rearchitecting around k8s yet, because it was too new and risky. Metaparticle was one example. Quarkus is another, more recent one for Java that looks very nice.
Prediction: We'll see a Kubernetes-native library or framework gain significant usage in at least four major languages (JVM, Go, Python, Node, .NET, Ruby…).
Lots of smaller events
KubeCon is getting too big. San Diego had ~12k people; Barcelona was similar. KubeCon will obviously remain the flagship event for the entire ecosystem, but I think we'll see more focused events catering to different interests. KubeCon will stay vendor-heavy and get even more sales-focused, while smaller events targeting more technical audiences, without the vendor pitches, gain popularity.
I think these will look like the recent Kubernetes Forum events in Korea, Australia and India, hopefully spreading to even more regions to make attendance accessible to everyone. This will probably take another year, though, given the long planning cycle, so I'm expecting these to rise in 2021, with planning in the second half of 2020.
Prediction: We’ll see 12 or more of the “Kubernetes day” style events in 2021.
APIs without k8s
Kubernetes is not just a cluster orchestrator, it is also an API framework. Kubernetes-style, declarative APIs follow a set of principles that make working with them easy and consistent.
The APIs represent desired state, leaving actuation to controllers that follow a reconciliation loop. Nothing here is groundbreaking or overly complex — the power comes from simplicity and a very well-factored separation of concerns. I think the community is starting to realize, through the adoption of CRDs, that declarative APIs are useful in many more places than just container scheduling.
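As a minimal sketch of that loop (the types and names below are my own, not from any real controller framework): a reconciler diffs desired state against actual state and emits whatever actions converge the two, without caring how the drift happened.

```go
package main

import "fmt"

// Action describes one step a controller would take to move actual state
// toward desired state.
type Action struct {
	Verb string // "create", "update", or "delete"
	Name string
}

// Reconcile is the heart of a declarative API: compare desired state to
// actual state and return the actions needed to converge. Both maps go
// from resource name to a stand-in "spec" string.
func Reconcile(desired, actual map[string]string) []Action {
	var actions []Action
	for name, spec := range desired {
		got, exists := actual[name]
		switch {
		case !exists:
			actions = append(actions, Action{"create", name})
		case got != spec:
			actions = append(actions, Action{"update", name})
		}
	}
	for name := range actual {
		if _, wanted := desired[name]; !wanted {
			actions = append(actions, Action{"delete", name})
		}
	}
	return actions
}

func main() {
	desired := map[string]string{"web": "v2", "db": "v1"}
	actual := map[string]string{"web": "v1", "old-job": "v1"}
	for _, a := range Reconcile(desired, actual) {
		fmt.Println(a.Verb, a.Name)
	}
}
```

Nothing in that function knows about containers, which is exactly why the same API style transplants so well outside of cluster orchestration.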
Prediction: A large project will exist by the end of 2020 that exposes k8s-style APIs without a Kubernetes cluster.