Is kubernetes too complicated?

Mark Betz
6 min read · Oct 9, 2018


A post by Caleb Doxsey a week or so back generated a fair bit of discussion when it was shared on Hacker News. In it he talked about deploying a small project on kubernetes and demonstrated some of the techniques he used. The resulting comment thread highlighted a debate that is currently common in certain software communities: either kubernetes is the awesome-est sauce ever and you should use it for everything, or it’s a massive pile of overkill for all but the largest organizations. Whichever side a commenter takes, the same stance often extends to containers themselves. Docker is awesome… docker is a crutch for people who can’t even chroot, etc.

In this post I want to talk about why the truth is, unsurprisingly, somewhere in the middle, but more than that I want to explain why I think containers and orchestration are a fundamental shift in the level of abstraction at which we interact with compute resources. Many of the most significant changes in the way we practice software development, deployment and operations over the last 50 years have been changes in the level of abstraction of the interface between us and the things we’re working with. Abstractions are incredibly potent and useful tools for grappling with (seemingly) unbounded complexity, but when good new ones come along they are not always welcomed with open arms. I think there are some fundamental reasons for that.

One of the compelling things about the arguments against any new abstraction is that they’re always rooted in truth. When some voice intones that you don’t need to learn this new thing because you can do all the things it does using the old thing you already know, that is a true statement by definition. It is in fact fundamental to what an abstraction is. The semantics of an abstraction should apply to all the more concrete complexities for which it is a substitute. So if you were a grumpy C programmer casting a jaundiced eye on C++ in 1990 you could argue convincingly that inheritance, encapsulation and polymorphism were available in straight C programs if the programmer were thoughtful and clever enough, and you’d be right as rain.

I taught C++ in the mid-’90s and I heard that argument repeatedly in my classes. Still, C++ eventually replaced C for general purpose programming, and the number of dissenters dwindled. C++ succeeded in getting adoption because it offered abstractions that made it easier and less tedious to use the features of C to produce good, reliable programs. A few years further on we found ourselves moving to “managed” languages like java, C#, python and ruby. Again the voices were there to proclaim that these languages were too slow, garbage collection was bad, interpreters were bad, dynamic typing was horrible. But they succeeded to varying degrees for the simple reason that human brains are the most constrained resource in our business. Whatever makes it easier to produce good programs (meaning programs fit for their use case) has a good chance of success.

What is true of software is also true of hardware. Computing hardware used to be hard. Just adding a component to a system meant manually configuring port and interrupt assignments. Memory timings, and pretty much everything else the BIOS needed to get the processor running, had to be understood and configured by hand. Nobody who wasn’t a hardcore nerd built their own computer. Today you can buy a few off-the-shelf components, plug them all together, and 99% of the time it will just work. This happened because people in those industries got together and agreed on abstractions to define the interfaces between the things they worked on. That made those interfaces simpler and the interactions more reliable.

Which brings me to the intersection of software and hardware, and perhaps the most fundamentally significant new abstraction to come along in quite some time: cloud computing. In 2000 I helped set up a small data center for a startup I was involved in. The big argument then was whether to host it ourselves or buy space in a co-location facility. We opted for the former: we built walls, key card entry, camera surveillance, air conditioning and backup power systems; installed racks, big expensive servers and network switches; and wired everything together. When we were finished (at a cost of about a hundred thousand USD) we had a pretty fixed amount of capacity and essentially zero flexibility.

Today I can have any number of servers running with a few commands. I can easily create networks, load balancers and VPN tunnels, as many as I need. I can spin up a highly available clustered SQL database in a couple of minutes. You can do these things from the command line or a web console, or you can drive your stack off something like terraform or salt. And you know what? It’s still too complicated. Human brains and time: these are things we’ll never have enough of. One of the reasons it’s still complicated is that while these new abstractions have made it ever easier to create and manage infrastructure, for a few years the level at which we managed software and its dependencies on that infrastructure hadn’t quite caught up.
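To make “a few commands” concrete, here’s a minimal sketch of launching servers programmatically. The post doesn’t name a provider, so I’ve assumed AWS and its Go SDK; the region, AMI ID and instance type are placeholder choices of mine, not anything from the original.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Credentials and config come from the usual environment
	// variables or shared config files.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // placeholder region
	}))
	svc := ec2.New(sess)

	// Ask for three identical virtual servers. The AMI ID below is
	// a hypothetical placeholder; substitute a real image.
	out, err := svc.RunInstances(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"),
		InstanceType: aws.String("t3.micro"),
		MinCount:     aws.Int64(3),
		MaxCount:     aws.Int64(3),
	})
	if err != nil {
		panic(err)
	}
	for _, inst := range out.Instances {
		fmt.Println("launched:", aws.StringValue(inst.InstanceId))
	}
}
```

Compare that to the walls, racks and wiring above: a capacity decision that once took months and six figures is now a function call.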

But now we have containers. I recently read a comment somewhere that said (I paraphrase from memory) “Docker came along and hyped containers until they became relevant.” This is both wrong and unfairly dismissive. What the folks at Docker did was take a set of complex operating system features (kernel namespaces, cgroups, virtual network devices, etc.) and combine them into something the average engineer could quickly understand and use. They created a useful abstraction, and you’d have to be rather biased, I think, to dismiss the success of containers as hype. They’re changing everything about how we deploy software, for reasons so good and well-established at this point that I’m not wasting any more time on them here.
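To give a sense of how approachable those kernel features become once someone shows you the pattern, here is a Linux-only Go sketch (in the spirit of the well-known “containers from scratch” demos, not Docker’s actual code) that starts a shell in fresh UTS, PID and mount namespaces:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// Run /bin/sh in new UTS, PID and mount namespaces.
// Linux-only, and needs root (or a user namespace) to succeed.
func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell a hostname change won’t leak out to the host, and the shell really is PID 1 in its own namespace (though you’d have to remount /proc to see that with ps). Docker’s contribution was packaging this plumbing, plus cgroups, image layers and networking, behind one coherent interface.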

Finally, orchestration. Maybe the word is part of the problem. Something like “container deployment and interoperability” might have sounded more approachable. Thing is, if containers are a better way to deploy software, and a flexible pool of cloud resources is where you’re going to deploy them, then you need something to configure them, run them, watch them, connect them to each other and the outside world where needed. You need something to bolt them to persistent storage or databases. Back to that truth about abstractions: you can do all of these things without kubernetes. You can also do all the container things without Docker. Like Docker, Kubernetes is just the best thing that has come along to date that encompasses most of what’s needed in a set of useful and approachable abstractions.
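For a feel of what the kubernetes abstraction looks like from code, here is a minimal sketch using client-go, the official Go client. The deployment name, labels and image are illustrative; the point is that you declare a desired state and the cluster’s controllers take on the configuring, running and watching described above.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load cluster credentials from the local kubeconfig, as kubectl does.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Declare the desired state: three replicas of an nginx container.
	labels := map[string]string{"app": "web"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(3),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "web", Image: "nginx:1.15"},
					},
				},
			},
		},
	}

	// Hand the spec to the control plane; from here on, keeping
	// reality matched to it is the cluster's job, not ours.
	result, err := clientset.AppsV1().
		Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %q\n", result.Name)
}
```

Notice what isn’t in there: no hosts, no process supervision, no restart logic. That gap between what you state and what you would otherwise have to do by hand is the abstraction.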

I have heard kubernetes described as an operating system for the data center, and I think that is just about right. An operating system controls low level system resources and provides abstractions that make them more useful. But an operating system is itself a complexity that needs higher level tools built on it. We can look at kubernetes as a general purpose abstraction for cloud computing, and if that’s right then we can expect to see a growing ecosystem of new tools and platforms that use it under the hood without ever requiring engineers to think about pods and services. I know really good javascript programmers who have never had to deal with the operating system at its own level, and some day that may be true of engineers deploying workloads to the cloud too.

My answer, then, to the question posed in this post’s title is “No, kubernetes is not too complicated for what it does.” Is it too complicated for general use? Maybe it is, in the same way that an operating system is too complicated for “general use”. That doesn’t mean we don’t need kubernetes. It just means we’re not done creating useful abstractions, and likely never will be, because brains and time are the two things we can’t easily get more of.


Mark Betz

Senior DevOps Engineer at Olark, husband, father of three smart kids, two unruly dogs, and a resentful cat.