Stadia: a glimpse into the mechanics of the next 50 years of tech innovation

How a true cloud OS could deliver the outcomes that Moore’s Law once promised but can no longer provide

Edo Scalafiotti
Techfare
8 min read · Mar 21, 2019


Photo by Hardik Pandya on Unsplash

Google unveiled its new Stadia game streaming service on the 21st of March in San Francisco. The service had been a long-kept open secret in the industry for over two years, with former gaming executives being poached and game developers quietly hired in the background. This is not an analysis of the service or of its potential repercussions on the gaming industry (assuming that the technical barriers around latency that broke OnLive a decade ago have now been overcome, and that’s a big assumption), but rather a reflection on the tech industry as a whole.

The hype around tech is a bubble ready to burst

Let me disclose my views on tech right away: I might be guilty of confirmation bias, since in my day-to-day role I try to de-hype and separate the magic (a.k.a. the marketing PR) from the actual technology. With mixed results.

Overall, for more than 200 years now we have, as a society, been confusing evolution with progress, progress with growth and growth with well-being. Infinite growth, endless progress, always towards something better, faster, cheaper. Capitalism has certainly reinforced this view (not that the alternative promised anything different). Ironically, the very same forces of conservative capitalism that require endless growth in the future are the ones that boast about the good ol’ days, when everything was supposedly ordered and less chaotic. At the same time, Silicon Valley’s founders, whose naive and limited views saw only good in humanity (possibly because they were looking at it from behind billions in cash), have lost control of their tech juggernauts which, if anything, are moving the world back into fragmented, fenced and bias-reinforcing communities. But we digress.

Nature has limits

The point I’m trying to make is that nature has limits. Nature likes limits. There is a specific reason why a car cannot reach certain speeds at sea level: drag grows with the square of speed, so the power needed to push through the air grows with its cube, and the demand quickly becomes absurd (not that it matters, since the car would be flattened by downforce and pulverized by heat first anyway). Hence, we know that we cannot go faster than X km/h at sea level, unless… unless something radical gets discovered or the variables change (for example, we go to space, where there is no drag). But there is no magic, no matter what any click-baity headline would like you to believe (to be fair, they don’t care about your beliefs: as soon as you’ve clicked the page, you’ve performed your duty, so move along, please).
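
To put a rough number on it, here is a back-of-the-envelope sketch of how quickly the power requirement climbs with speed; the drag coefficient and frontal area are assumed, illustrative values, not figures for any specific car.

```python
# Aerodynamic drag grows with the square of speed (F = 0.5 * rho * Cd * A * v^2),
# and the power needed to overcome it (P = F * v) grows with the cube.
# All vehicle figures below are assumed for illustration only.
rho = 1.225    # air density at sea level, kg/m^3
cd = 0.30      # assumed drag coefficient
area = 2.2     # assumed frontal area, m^2

def drag_power_kw(speed_kmh: float) -> float:
    v = speed_kmh / 3.6                          # km/h -> m/s
    drag_force = 0.5 * rho * cd * area * v ** 2
    return drag_force * v / 1000                 # W -> kW

for speed_kmh in (100, 200, 400, 800):
    print(f"{speed_kmh} km/h -> ~{drag_power_kw(speed_kmh):.0f} kW just to beat drag")
# Doubling the speed multiplies the power needed by roughly 8.
```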

The growth of the digital economy is linked to continuous miniaturization

I strongly believe that the entire post-war economic and technological boom that we are still experiencing today, from the invention of the first vacuum tube up until Uber Eats, has mostly been supported by a hardware backbone, and specifically by miniaturization. That is to say, by optical optimizations and lasers.

A microchip is nothing more than a very high-definition photograph of an electrical circuit, projected through special lenses at very specific light frequencies onto a silicon wafer. Given a wafer of fixed radius, the more microchips one can fit on it, the less expensive each one is to produce: in short, the smaller one can print, the cheaper a microchip becomes. It’s a bit more complex than that, but not that much… Overall, this mechanic has become known as Moore’s Law, which is not a law at all, just an observation: by halving the size of a microchip, costs follow suit. By halving the feature size while keeping the old microchip dimensions, processing power doubles. Processing speed, or how many operations a chip can perform in one second, is another variable that affects overall microchip performance. However, there are limits. Nature likes limits.
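
A crude way to see why shrinking pays off: processing a wafer costs roughly the same whatever is printed on it, so the more dies fit on one, the cheaper each becomes. The numbers below are hypothetical, chosen purely to illustrate the trend.

```python
import math

# Hypothetical figures: a fixed cost to process one wafer, and the same
# design printed at three different die sizes as its features shrink.
WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 10_000

def dies_per_wafer(die_side_mm: float) -> int:
    # Crude area-only estimate; real yield models also discount edge dies
    # and defects, but the trend is the same.
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area // (die_side_mm ** 2))

for die_side_mm in (20, 10, 5):
    n = dies_per_wafer(die_side_mm)
    print(f"{die_side_mm} mm die -> ~{n} dies per wafer -> ~${WAFER_COST_USD / n:.2f} each")
```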

We cannot “print” too small: in theory, we cannot go smaller than an atom, even if what’s “moving” inside the microchip are electrons. We also cannot project an image smaller than the wavelength of the light that imprints it, and light has a very well-known, well-defined spectrum. In practice, we need to stop well before that theoretical limit: if the walls of the “electric highways” are too thin, electrons start to jump in and out, a counter-intuitive quantum effect known as tunneling. This is why CPUs are error-corrected: an operation is checked and processed again if an error is spotted, because maybe an electron decided to “jump” in the middle of a calculation. Go too small and the error rate (that is, the number of electrons that randomly start to jump) becomes so high that the power required to spot errors exceeds the power needed to process anything else. On the other hand, we could process more operations per second (technically, raise the clock speed): double the speed and we have doubled the processing power. Unfortunately, electrons moving around dissipate energy as heat (that’s how an oven works), so again, at certain speeds the chip melts. Nature likes limits.
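
The clock-speed wall can also be sketched with one rule of thumb: a chip’s dynamic power scales roughly with capacitance × voltage² × frequency, and pushing frequency up usually also requires a higher voltage. The values below are assumptions chosen only to show the shape of the curve.

```python
# Rule-of-thumb dynamic power: P ~ C * V^2 * f.
# The capacitance and voltages are assumed, illustrative values.
SWITCHED_CAPACITANCE_F = 1e-9   # effective switched capacitance, farads

def dynamic_power_w(voltage_v: float, freq_hz: float) -> float:
    return SWITCHED_CAPACITANCE_F * voltage_v ** 2 * freq_hz

base = dynamic_power_w(1.0, 2e9)   # ~2 GHz at an assumed 1.0 V
fast = dynamic_power_w(1.3, 4e9)   # ~4 GHz, assuming 1.3 V is needed to get there
print(f"2 GHz: {base:.1f} W, 4 GHz: {fast:.1f} W "
      f"-> {fast / base:.1f}x the heat for 2x the clock")
```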

Miniaturization has stopped: keep calm and carry on

It’s 2019 and unfortunately we have more or less reached both limits. The speed limit was actually reached over 10 years ago: have you noticed that no one is promoting GHz figures anymore? That’s because we plateaued, sometime around 2005. Miniaturization has also almost reached its limits with the latest 7 nm chips: further reductions in size will require more expensive manufacturing processes and will therefore cause the cost per unit to increase, not decrease. I’m consciously skipping all the possible optimizations around efficient memory caches, out-of-order execution, multi-core designs and so on.

My point being: the decreasing size of hardware made dramatic cost reductions possible, which in turn enabled humanity to carry a supercomputer in its pocket. This generated cascading new business models, disrupted entire industries (pick your favourite) and gave birth to company valuations the likes of which we have rarely seen in history. Although one should never forget that the Dutch East India Company was valued at over $7.9 trillion (adjusted for inflation), yes, with a “t”, so basically Amazon, Apple and Google combined are but peanuts in comparison. We’ve already been there.

But back to hardware. We are already feeling the consequences of a miniaturization slow-down: we’ve actually been feeling them for quite a while now, but like a patient unaware of the illness, we still think that everything is alright. Well, it ain’t. Smartphone costs are rising, and have been rising for a while: instead of a $20 smartphone, we have a $2,000 iPhone. In other words, companies are trying to feed their shareholders’ growth expectations by raising prices. And that’s partially because hardware costs are not falling as they have done in the past. However, there is a limit to how far this can go: higher prices mean less adoption, lower volumes and so on. Nature has limits.

What if we could force Moore’s Law to continue?

So, let’s recap: we cannot go smaller, and we cannot go faster. The days of better, faster, cheaper have ended. Full stop. No, no, please, please don’t bring up quantum computers… just… please (but if you can’t resist, this is a good read on the subject). However, let’s think of it another way and ask what the world would look like if Moore’s Law could theoretically continue ad infinitum for the next 50 years. In other words, if in 50 years all of humanity could carry around in their pockets the combined processing power of the AWS cloud. What could be possible with such a device? What type of businesses could be built on it?

Stadia and the new computational paradigm

But more importantly, what if such a device could actually exist right now? Enter the likes of Google Stadia and the final fulfillment of AWS’s vision: the cloud becomes the operating system of the web. Distributed, horizontally scalable, energy-efficient and easily optimizable processing power. This is where I personally think the next innovations will come from: logically centralized processing power in connected data centers, streaming only the user interface to cheap devices. If devices started removing components instead of adding them, batteries could last longer and processors would become, at best, glorified video decoders. That’s 1990s processing power.

Chromebooks had that vision, but it stalled: their recent success is more likely due to combining Chrome OS with Android and Linux, a bit of a mish-mash that the upcoming Fuchsia OS will allegedly solve. Chrome OS was also based on another assumption: the web, and more specifically the idea that HTML and JavaScript would become the standard for building applications. Given the relative inefficiency of JavaScript, that assumption could only rest on the conviction that smaller, faster and cheaper hardware would keep being produced to power ever more power-hungry Chromebooks. That future doesn’t exist anymore.

But a cloud operating system that doesn’t discriminate between a tablet, a smartphone or a tower gaming rig, providing the same amount of computational power to all… well, that’s an entirely new paradigm. And one that is more complex to realize than anyone thinks. We’re not talking about a glorified remote desktop here: a true cloud OS is an architectural shift that no current operating system is capable of delivering. But that’s a software problem, not a hardware one. Software has fewer limits than hardware. Even Nature knows that (I’m not sure if she likes it, but hey ho).
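
To make the thin-client idea concrete, here is a rough latency budget for streaming a 60 fps interface from a data center. Every figure is an assumption for the sake of the sketch, not a measurement of Stadia or any real service.

```python
# Illustrative end-to-end latency budget for a cloud-rendered UI at 60 fps.
# All numbers are assumptions, not measurements.
budget_ms = {
    "input capture + uplink":   15,   # controller/touch event to data center
    "server-side rendering":    10,   # one frame of game/UI rendering
    "video encode":              5,
    "downlink":                 15,
    "client decode + display":   8,
}

frame_time_ms = 1000 / 60   # ~16.7 ms between displayed frames
total_ms = sum(budget_ms.values())

for step, ms in budget_ms.items():
    print(f"{step:<26}{ms:>4} ms")
print(f"{'end-to-end':<26}{total_ms:>4} ms (~{total_ms / frame_time_ms:.1f} frames of lag)")
# The client does little more than decode video: every saved millisecond
# has to come from the network and the codec, not from the device.
```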

The big unknown in this future of unlimited processing power on cheap, sub-$50 hardware is, of course, connectivity. And connectivity is where I’m leaving you now; I have to go to work, or at least that’s what my cloud-powered personal assistant tells me. Just a few finishing thoughts:

  • 5G is anything but certain, especially because of the microwave nature of the technology, which requires an insane amount of hardware to be deployed (microwaves do not pass through walls… oops). 5G has the potential to deliver this future, but I fear that what we’ll get in the next 5–7 years is just a marginally faster 4G, rebranded for the occasion and to justify a higher price point. Let’s just hope that those prices go towards building the hardware infrastructure required for true 5G and won’t be redistributed as dividends or bonuses. One can dream.
  • Google, Amazon and Microsoft’s cloud businesses have built, and are building, what amount to separate internets. The reach and distribution of their data centers (and proprietary undersea fiber-optic cables) is such that if your ISP restricted you to an AWS-only internet, you would probably barely notice. The implications of this are still unclear, but the internet fracturing into several fenced internets is a real possibility. China might already be considered a separate internet, and another shock, such as a cyber attack from a foreign power, might be the final straw that makes it happen.
  • A true cloud OS will make connectivity, and the underlying network, a core part of the picture, and could resuscitate the network operators from their current utility status. What could actually happen, however, is that the cloud providers vertically integrate and replace the ISPs completely once their cables and hardware are strategically laid. If the old operators stand still and underinvest, threatening the cloud providers’ vision, it will probably be just a matter of time before they fade away for good.
