Forget Moore’s Law
Addressable compute is shrinking.
The principle that compute power will double roughly every 18 months, also known as Moore’s Law, has been remarkably robust for a prediction about technology. We watched this prophecy come true first in personal computers, then in mobile, and now in IoT.
The tech community has internalized this hypothesis. At a very fundamental level, we expect that what we’re using today will be faster tomorrow.
But talk to a cloud application engineer, a devops engineer, or a VM or compiler engineer, and you’ll hear about a wealth of work and worry over the ever-shrinking resources applications have to live in.
“Addressable compute,” the actual resources available to the applications people write, is shrinking, and it has been shrinking for some time now. Developers are spending more time on performance, making their applications as efficient as possible, to fit into the ever-shrinking space they get to live in.
10 years is a long time.
Get in your time machine and take a trip back to 2007. I don’t think anyone you find there will believe that VM engineers in 2017 spend most of their time shaving microseconds off tiny operations. Nor will they believe that startup time is a reason people pick one programming language over another.
In 2007, engineers have been watching single-processor performance double on a regular 18-month schedule. Why would anyone spend six months shaving 5% off an operation when the processor is going to make it twice as fast in another year? In 2007, multi-core processors are certainly on the horizon, but the widely accepted consensus is that we’ll use threads to spread a single application process across multiple cores.
Virtualization, Containerization, and Serverless
And then the world changed: We were at the right place at the right time.
First, the cloud started cutting up computers into virtual computers. You didn’t buy a 32-core rack-mount server; you bought a slice of one, sized to your needs.
Then they started cutting up those virtual computers into containers: even smaller slices that were more templatized and far more disposable.
As if that weren’t enough, they are now creating “serverless” environments: even smaller slices of containers, custom-tailored to run your application in a way that is easy to duplicate and scale out as demand expands.
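This slicing isn’t abstract; it shows up to developers as explicit resource caps on each container. As a sketch (all names and numbers here are hypothetical), a Kubernetes pod spec might carve out a slice like this:

```yaml
# Hypothetical pod spec illustrating how a container gets only a
# small, explicit slice of a machine's compute and memory.
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      resources:
        requests:
          cpu: "250m"      # scheduled assuming a quarter of one core
          memory: "128Mi"
        limits:
          cpu: "500m"      # hard cap: half a core, enforced by throttling
          memory: "256Mi"  # exceeding this gets the container killed
```

Half a core and 256 MiB is the whole world this application gets, however many cores the underlying server has.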
Less is More
At the end of all this slicing and dicing we’re left with much less addressable compute for our applications than we’ve had in years.
What we trade compute for is elasticity. When we stopped racking our own servers, we started paying for just the compute we needed. But for many years we still had to over-allocate, leaving a lot of capacity sitting idle most of the time just to handle our peak usage. Now, finally, we pay only for what we use.
Every time we shift away from one environment to the next it takes a while to figure out what we lost in the transition.
Debugging and monitoring have changed significantly, and we’re still building and innovating on tools for these newer environments. As we build and improve those tools, they’ll eat up even more of the resources that were formerly available to our applications.
The hardest thing our industry will have to contend with is the reversal of our most deeply held belief: that what I have today will be faster tomorrow.