The Matrix will run on custom hardware

In 2006, Amazon EC2 came along and changed the way we do things. The most celebrated and widely adopted features of IaaS, such as elasticity, pay-as-you-go pricing, t-shirt-sized infrastructure, automation, and API interfaces, are well understood. I don’t see those core tenets of IaaS changing anytime soon.

But another consequence is less broadly discussed: the shift to x86 architectures. Suddenly we no longer had a choice when it came to exploiting specific hardware capabilities; it was x86 or not at all. This has created inertia around some application types that were hard-coupled to specific chips and devices. Network devices are an obvious example. In the old world, they ran on hardware with all manner of ASICs supporting specific functionality. SDN and NFV that isn’t provided by the cloud provider directly, but via the ISV ecosystem, now depends on virtual appliances running on x86-based virtual machines.
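To illustrate how brittle that coupling can be, here is a minimal sketch (with hypothetical names, not taken from any real appliance) of the kind of architecture check an x86-bound virtual appliance might perform at startup:

```python
import platform

# Architectures the hypothetical appliance's data plane was built for.
# Everything here assumes x86; there is no ARM or other fallback path.
SUPPORTED_ARCHITECTURES = {"x86_64", "amd64", "i386", "i686"}


def select_data_plane(machine: str) -> str:
    """Pick a packet-processing backend for the given CPU architecture.

    On x86 the appliance can use its optimized fast path (e.g. one
    tuned for SSE/AVX instructions); on anything else it has nothing
    to fall back on, which is exactly the inertia described above.
    """
    if machine.lower() in SUPPORTED_ARCHITECTURES:
        return "optimized-x86-fast-path"
    raise RuntimeError(f"unsupported architecture: {machine}")


if __name__ == "__main__":
    # platform.machine() reports the host architecture, e.g. "x86_64".
    print(select_data_plane(platform.machine()))
```

On an x86 VM this selects the fast path; on anything else it simply refuses to run, which is why such appliances follow their chips rather than the other way around.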

To be fair, we’ve seen cloud providers offer up GPU technology, and some of them offer a hardware virtualization model that exposes some of the underlying chip capabilities, but for the most part we’re still running on Intel-based x86 architectures.

But is this about to change? In a previous post covering punctuated equilibrium in cloud, I wrote about the TPU that Google has brought us via its cloud platform. It’s already on the second version of the design, and it is showing great promise for specialist workload types.

Recently, ARM announced its new Cortex-A75 chip, which will allow AI workloads to run on devices with greater performance whilst being very power efficient.

The Internet of Things and machine intelligence are two technology areas with the ability to disrupt the current status quo, that being the standardization onto x86. For many years, at least from an IaaS point of view, organizations have seen this standardization as a positive move, seeking investment and opportunity to shift their workloads away from custom hardware and software (Itanium and HP-UX, for example) and onto x86 architectures. In my experience, this is normally coupled with a move to open source software, the most obvious example being the swing to Linux.

The benefits of IoT running on custom hardware such as the ARM Cortex-A75, and of machine intelligence running on TPUs, are compelling enough to start a swing for multiple workload types back to destinations that support custom hardware. The big cloud providers have been customizing hardware for some time and offering it back to us abstracted behind cloud services. Today this mostly manifests in the network layer: we get all sorts of cool cloud network functions supported behind the scenes by custom hardware designs. But will we start to see some of that hardware exposed, allowing us to drop our applications on top and get direct access to the chip-specific benefits? This would be a huge market for ISVs to build improved virtual appliances that are no longer constrained to x86 VMs. The benefit to the end consumer of the cloud service would be huge, and I feel innovation in this space has been sorely lacking.

As edge computing grows, I feel its design will be very much predicated on custom hardware, and the Cortex-A75 is a great example of a chip likely to be prevalent in those architectures. At least, I’m sure that’s what ARM is hoping.

As we look forward and see all this coming together, the Matrix (cloud, IoT, MI, and so on) will demand more specialist hardware; running it all centralized on x86 architectures will constrain it too much. I believe we are about to see the battle for application run-time venue be dominated by big cloud vendors differentiating on custom hardware.

Watch this space, I certainly am.
