Stumbling into Computing Abundance

What’s next when computing power is so vast and cheap we will never run out of it…

George Baily
Aug 20, 2014

In the 1988 movie Rain Man, Tom Cruise’s character Charlie Babbitt discovers that his autistic brother Raymond, among other things, has genius mental calculation skills. Charlie quickly realizes that he can “use” Raymond for profit by card-counting in Blackjack in Las Vegas.

Raymond’s mental powers were amazing, but he himself did not have any use for his brain apart from obsessive counting and a formidable mental library of trivia quiz answers. It took his brother to come up with an application of this processing power and exploit it for profit.

This is exactly what is now happening in the world of computing. Computers have a lot of power — way more than people realize — and they are mostly just sitting there unexploited — and the ones who are profiting are the ones who figure out uses for that power.

The new question of computing is not how much power you have at what cost ($ and kWh), but rather how much you can use of the vast and almost-free power available to you.

This will be evident in whatever device you are currently using to read this article. Without a doubt, you are not using anywhere close to its processing or memory capacity to consume this information, and you likely never tap its full power except in short bursts when programs open or perform some brief task. Your device is certainly capable of running substantially more complex software than you will ever use it for. Even a dull office desktop PC can smoothly run resource-intensive 3D games that would have required an expensive “rig” just a few years ago.
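To make that concrete, here is a rough probe you can run on whatever machine you are reading this on. It is a purely illustrative sketch using only the Python standard library; the 10-million figure is an arbitrary number of my own, not any kind of formal benchmark.

```
# Rough, illustrative probe of the headroom in the machine you are reading on.
# Standard library only; the 10-million loop is arbitrary, not a real benchmark.
import os
import time

cores = os.cpu_count() or 1
print("Logical CPU cores available:", cores)

# Time a short burst of pure-Python arithmetic on a single core.
start = time.perf_counter()
total = 0
for i in range(10_000_000):
    total += i * i
elapsed = time.perf_counter() - start

print("10 million multiply-adds on one core took {:.2f}s".format(elapsed))
print("Reading an article like this one uses a tiny fraction of that capacity.")
```

Even this deliberately inefficient pure-Python loop finishes in a second or two on an ordinary laptop, and it touches only one of the cores reported above.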

Another example of computing power expanding faster than our ability to use it up, paradoxical as it sounds, is the phenomenon of software bloat. Your current computer running Windows 7 or 8 is orders of magnitude faster than the first PC you used to run Windows 3.1, yet it does not boot up or open your word processor that much faster, because the software is now far more complicated… much of it bloat that was never worth optimizing beyond a certain satisficing level.

In other words, companies could always optimize software to perform far better on the given hardware (and in fact this was a huge concern for early consumer desktop software developers, and for game developers), but the time and cost of that optimization is rarely worth it commercially… it is more rational to ship the inefficient, buggy, bloated code once the sheer power of the computer can still handle it with acceptable speed and interface responsiveness in the experience of the human user. I take software bloat as a sign of the excess computing power available to us.

There is a pessimistic “Wirth’s Law” which, like a software version of Parkinson’s Law, says that software gets slower more rapidly than hardware gets faster… but this is going to sound outdated soon. Computing power keeps going up (and its cost keeps coming down) exponentially, but software does not really grow more complex at the same rate. The range of features that humans are actually capable of interfacing with is levelling off: how many more features does Microsoft Word need in order to produce documents? Arguably the answer was “none” several years ago, yet the hardware power available to run it continues to increase.

The growth of tablets and the Chromebook is part of this phenomenon: the underlying OS has become irrelevant for a whole range of use-cases… i.e. the computer already does everything we want, mostly through the browser or a limited set of applications, so for improvements we look instead at form factor, visuals, and power efficiency (all of which will in turn be “solved”).

What we are facing now is what I call “stumbling into computing abundance”: we’ve become used to being constrained by clear limits to computing power, whether for consumer use or in business… the constraints basically being what we can afford to buy or rent. We are now going to stumble forwards because all these barriers — which we’ve been subconsciously “leaning on” — are being taken away… the limits are receding far into the distance: we will soon be unable to come up with processing or storage challenges in our normal businesses (let alone private applications) that are constrained by the computing available, and any computing we can use will be free or close to free.

Computing scarcity truisms that will stop being familiar

Consider the following limitations that are disappearing:

“I’ve filled up my hard disk: I need to buy a bigger one”

→ Your storage will very soon be effectively infinite, at an ever-falling price that is barely related to the amount of space available.

“I need to buy a faster computer to handle this game/application”

→ You may upgrade components such as screens and input devices, but your computing power will stop being a question… as I am arguing here, you will not be able to find any application that strains your computer… the new ‘problem’ is how to exploit the vast power and how to keep up with the opportunities — and competition — that are arising.

“Our firm doesn’t have the budget for global enterprise server coverage for our website / application”

→ All applications even at the cheapest level will have global coverage at effectively unlimited performance. This is part of what everyone is going on about with the word “cloud”.

“The new iPhone has a new A16 processor and its biggest-ever 128GB of storage, oh wow, upgrade now!”

→ It will soon be irrelevant to upgrade your phone or tablet for processing power: once display and battery technologies are “solved”, the only upgrades will be replacing worn-out or broken cases and parts, or simply upgrading for fashion. The fact that mobile (and indeed desktop) OSs now release their next versions as free upgrades, with no new hardware required, proves the point.

Another important one: “big data” — this is a cliché that will very rapidly fall out of use and sound outdated. Saying “big data” will sound like saying “small phone”… just an out-of-date standard of what to be impressed about. Why? Because the challenges that we in 2014 consider “big” will just be all in a day’s work for ordinary computing power.

A million database rows or a trillion? Who cares! Database query to crunch through exabytes of data? Not breaking a sweat.

(In fact, “big data” is already in large part a matter of hype: many non-technical managers who think they have a Big Data problem could usually handle their database requirements on a single office-level PC… although the slick software salesman is not going to tell them that…)
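As a rough sketch of that point, here is a “big data”-sounding aggregation done with nothing but Python’s built-in SQLite. The table, schema, and row counts are invented for illustration; they are not from any real company or dataset.

```
# Invented example: millions of rows aggregated on one ordinary machine,
# using only the Python standard library (sqlite3), no cluster required.
import random
import sqlite3
import time

ROWS = 5_000_000  # big by boardroom standards, small for a single box

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    ((random.randrange(50), random.random() * 100.0) for _ in range(ROWS)),
)

start = time.perf_counter()
top_regions = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY total DESC LIMIT 5"
).fetchall()
elapsed = time.perf_counter() - start

print("Top regions:", top_regions)
print("Aggregated {:,} rows in {:.2f}s".format(ROWS, elapsed))
```

No cluster, no “solution”, no procurement cycle: just the machine that is already on the desk.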

If this sounds fanciful, simply look at any sub-£500 consumer computer you can buy: you will find it provides all the “enterprise” computing power a large company would have paid tens of thousands of pounds for just a decade ago.

And that is before you connect it to a virtually unlimited resource of cloud processing and storage, so the question of the power of an individual PC unit is becoming moot anyway.

Logically the resources of computing will always be finite and come with costs. But human beings, and the small groups of them called companies, are not going to be able to produce enough work to find the serious limits of that power. Computer processing and data storage will cease to be limiting factors for any endeavour, at least until AI develops (evolves…) to make use of the power, but that is a whole other story…

General Aspects: Software and Arbitrage

Back to the ‘problem’ of abundance, here are two general aspects we will all face: first, everything becomes about software; second, many business opportunities will simply be about arbitrage of knowledge differentials.

The first point, in other words, is that since hardware stops being a limiting factor, the only limit to what you can achieve is the software you can produce or put into play. The relatively simple stack of software that things run on now will become massively deeper and more complex as layers of abstraction build up.

An example: a coder nowadays is judged not so much on core knowledge of a programming language as on the ability to exploit existing frameworks and toolkits instead of constantly reinventing the wheel. Meanwhile, a server administrator is judged on knowing how to use the current best-practice tools, not on the ability to look inside that software. These are layers of abstraction.
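Here is a minimal sketch of what those abstraction layers buy you: the same task done by hand and through a library. It is illustrative only, uses just the Python standard library, and example.com plus the helper names are placeholders of mine, not anything from a real project.

```
# Illustrative only: the same task with and without a layer of abstraction.
import socket
from urllib.request import urlopen

HOST = "example.com"  # placeholder host for the example

def fetch_by_hand(host, path="/"):
    """Speak HTTP/1.1 manually over a raw TCP socket."""
    with socket.create_connection((host, 80)) as sock:
        request = (
            "GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n\r\n"
        ).format(path, host)
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def fetch_with_library(host, path="/"):
    """One call; the messy details are someone else's solved problem."""
    with urlopen("http://{}{}".format(host, path)) as response:
        return response.read()

if __name__ == "__main__":
    print(len(fetch_by_hand(HOST)), "bytes fetched the hard way")
    print(len(fetch_with_library(HOST)), "bytes fetched via the library")
```

The point is not the dozen saved lines; it is that the library version quietly follows redirects, copes with chunked responses, and raises sensible errors, none of which the hand-rolled version even attempts.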

Another example: operating systems are becoming more and more virtualized, one OS sitting on top of another like Russian dolls (or, more accurately, like the terraces of Buddhas on the many levels of the Borobudur temple). If you are not already running various operating systems in virtual environments, you will be soon; arguably much of the deeper functionality of online software is like that already, i.e. quasi-OS spaces running through a browser window… served, of course, by virtual servers…

We are not going to be impressed by “big” computing power or “big” data, but we are going to be impressed by clever uses of that power and that data. So the question of software is not just a functional topic but one of imagination, creativity, and practical business execution. Clearly, this pulls many more kinds of people into the world of “IT” than just coders and geeks.

Meanwhile, by my second general aspect, arbitrage, I mean this: the Cambrian explosion of software possibilities, whose early stages we are already in, means that no one human, or even company, can hope to keep up with the full range of what software can currently do, let alone know how to use or exploit it all, let alone predict future developments outside a very narrow area of expertise.

In fact, even the narrowly focused tech firms will usually find they are unwittingly developing solutions in parallel with others (like Newton and Leibniz) if not finding they are already behind (like Scott and Amundsen).

Looking at tech company success stories, it is often just a matter of who executed a good idea first, using knowledge that was not new or unique… or worse, who had the resources and luck to become known for a particular solution even when many alternatives — even better ones — also exist.

Looking at app stores, websites, OS distros, programming languages, CMS options, hosting companies etc etc — I am sure you would agree the range of choice in the market far exceeds the size of the market in terms of humans’ needs… in other words, however fast we succeed in exploiting software opportunities, it will always be a matter of too much supply for the demand… a major stress on business assumptions, as indeed we have seen with the rise of free and open source software…

The limiting factor in the computing abundance era is human cognition, so in this sense we are entering a new pre-telegraphic age, in which information has hit a transport bottleneck. This is an important realization: information is no longer limited by networks or computers or any other means of conveying data; it is horribly limited simply by humans. The obvious fix is to remove the human bottleneck, but as I said, I am not going into AI here.

So arbitrage opportunities will be a significant type of business model, simply exploiting differentials of knowledge and competence in software applications. I would argue that most of the digital industries are already basically in this mode: most companies are not actually developing new technology solutions… simply educating and rolling out the existing technology to upgrade other companies, and all at a painfully slow pace I might add.

That slow pace is another significant feature of the computing abundance era. Wherever we are on the expertise spectrum, we won’t be working fast enough to take full advantage. The spectrum from computer dunce to computer expert will stretch. Right now there are only a few fairly established layers of “leetness”, from CPU, assembly, or kernel developers at the leet end to your grandma at the lame end. But the explosion of possibilities in software means that nobody can be an expert in more than a small segment of it all, and the gap between user and programmer is simultaneously becoming more blurred and wider.

There will be an extreme elitism of the leet, and major problems of being left behind for late adopters and luddites. Meanwhile, in the middle and even at the expert end of the spectrum, people and firms will almost always feel they are seriously behind where they want to be. This in turn will create huge demand for education and training, which in turn will create demand for ways to sift the options and choose where to dedicate time (since anything you put time into studying might well turn out to be a dead end or a rapidly obsolescent technology).

So to end this introduction to computing abundance, I challenge you to start constantly asking yourselves, as individuals, as groups, and in your business models: in what ways do we currently behave as if computing power has cost and capacity limits, and as if those limits will continue to exist… because those walls are falling away, fast.

How would you play Monopoly if the board went on forever and you had unlimited money? Hard to imagine, but you certainly could not carry on with the old rules of the game. Taking away limitations makes us stumble, because creativity and knowledge are now the only factors… the new measure of success is how well you can nurture and apply creativity using combinations of different people’s knowledge and skills.

A ranticle from @georgebaily
