Innovator’s Dilemma: Arm vs. x86 Architecture

Samuel Agozie
6 min read · Nov 11, 2022

The technology S curve is one of those ideas that has left the technology world in wonderment at how reliably it materializes. Clayton Christensen, the father of disruptive innovation, made some resounding claims through it. In essence, every technology has a life span split into two phases: the early stages, where rapid improvement traces a trajectory of a promising future for the technology, and the later stages, where improvement slows dramatically as the technology approaches its peak. Across three very different industries, Christensen laid out the anatomy of failure and success in the face of disruptive innovation, which, he argued, all followed a similar pattern.
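Christensen never reduces the curve to a formula, but it is commonly sketched as a logistic function; here is a minimal model (my notation, not his):

```
P(t) = \frac{L}{1 + e^{-k (t - t_0)}}
```

where P(t) is the performance the technology delivers at time t, L is the ceiling it matures toward, k sets how steep the rapid-improvement phase is, and t_0 marks the inflection point where improvement begins to slow.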

It begins as something that does not meet mainstream market demand, where a new market for the innovation “emerges through trial and error”. As a result, the big firms in the industry with the fat pockets fail to see it as viable enough for investment, for two reasons: first, the market for such an innovation is uncertain, and second, the profit margins will not be promising enough to sustain growth for these big players. This creates a gap for small companies, usually new entrants, to fill, capitalizing on the new innovation to meet the needs of consumers at the lower end of the market. The big players only join the train once the technology has matured enough to serve a market segment big enough to sustain their growth.

Most of the time, these big players are coerced into hopping onto the technology just to ensure their survival in the industry. The cycle is in constant iteration; it may be slow, it may be invisibly embedded in the business process, but one thing is certain: it never stops. From the outside, it is baffling to watch seasoned management play the same cards over and over again and crumble while making the same mistakes. On closer inspection, though, this comes as no surprise, because those old cards are the paradigms of sound management. Arguably, playing it safe would seem the more advisable approach to business decisions with uncertain outcomes. But as pointed out earlier, playing it safe is exactly what kills them. “Paradigms of sound management are useless and even counterproductive in the face of disruptive technology.”

To address the main issue, let us move from the preamble and concentrate on the technology S curve for the x86 computing architecture. From its birth in the breakaway from Fairchild Semiconductor, Intel has enjoyed a monopoly in the microprocessor industry like no other company (per Macrotrends, its revenue grew from roughly $35 billion in 2009 to $79 billion in 2021). This level of dominance invites a problem I chanced upon while reading Steve Jobs’ biography: the creator is the last to see beyond his latest product. This is exactly the blind spot that lets big firms be swept off their feet. It may also be why Christensen’s “technology mudslide” hypothesis rings true: firms scramble with whatever comes their way just to stay on top of the slide, and stopping to catch a breath means going under. From rejecting the deal to manufacture processors for the iPhone to the yearly cadence of performance improvements through periodic launches of new processor generations, it was hard to imagine a future for microprocessing power other than the obvious one. Along came Arm.

Looking at Intel’s remarkable run with microprocessors over the past few years, one thing can certainly be observed: more processing power equals more heat production. The very high-end chips belong to the desktop class of computers. That is not to underrate the marvelous work Intel CPUs do in gaming laptops, but one key concern remains: how powerful are these processors when they are not plugged into a power outlet?
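The physics backs this up, at least to first order. The textbook approximation for the dynamic power a CMOS chip dissipates (a standard model, nothing Intel-specific) is:

```
P_{dyn} \approx \alpha \, C \, V^{2} \, f
```

where α is the switching activity, C the switched capacitance, V the supply voltage and f the clock frequency. Chasing performance by raising the clock usually means raising the voltage too, so power, and with it heat, grows much faster than clock speed.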

Geekbench scores have consistently exposed the almost-halved processing power of laptops running on battery, even with newer-generation processors. And it does not stop there: there is the enormous heat and fan noise to deal with when these high-end computers run heavy workloads (dear Intel, how do I define efficiency in this context?).
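To make “almost halved” concrete, here is the arithmetic as a minimal sketch; the scores below are hypothetical placeholders, not measurements from any specific machine:

```python
# Hypothetical Geekbench multi-core scores (placeholders, not measurements).
plugged_in = 12000   # score with the laptop on mains power
on_battery = 6500    # score for the same laptop on battery

# Retention: how much of the plugged-in performance survives on battery.
retention = on_battery / plugged_in
print(f"On battery, the laptop keeps {retention:.0%} of its performance")
# -> On battery, the laptop keeps 54% of its performance
```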

ARM1, the first chip Acorn built on Reduced Instruction Set Computing (RISC) principles, was introduced in 1985. The technology found favour in the market for mobile computers, which was nothing like the force it is today; back then, the big boys of the microprocessor industry, Intel and AMD, ruled the PC space. Arm technology was never a threat to them, and it is only in the last decade or so that we have come to recognize it as an integral part of mobile computing. According to Strategy Analytics, the Arm-based mobile computing chip market (covering smartphones, tablets and notebook PCs) grew over 27% to a total of $35.1 billion in 2021. That has made it a big deal.

In the personal computing space, with a current share of about 10% of PCs, Arm technology was not a big deal until the launch of Apple’s in-house silicon, which is built on it. There is no denying that Microsoft did good work with the Arm-based machines in some of its Surface lines, which helped boost consumer confidence in a pure Windows-on-Arm experience. Nevertheless, it is the introduction of Apple’s M1- and M2-powered Macs that has set a trajectory of technological advances this space has not seen before. As it stands now, the future looks bright for Arm technology in personal computing.
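Before the questions that follow, it is worth pausing on what “reduced instruction set” actually means. Here is a toy sketch in Python (an invented mini-ISA, not real Arm or x86 encodings): a CISC-style instruction that operates directly on memory versus the RISC load/store equivalent, where only loads and stores may touch memory.

```python
# Toy illustration of CISC vs RISC (invented mini-ISA, not real encodings).
mem = {0x10: 41}           # a tiny "memory": address -> value
regs = {}                  # register file

# CISC style: one instruction reads memory, adds, and writes back.
def cisc_add_mem(addr, imm):
    mem[addr] = mem[addr] + imm                      # ADD [addr], imm

# RISC (load/store) style: only LOAD and STORE touch memory;
# the same work takes three simple, fixed-format instructions.
def risc_load(rd, addr):   regs[rd] = mem[addr]      # LDR rd, [addr]
def risc_add(rd, rn, imm): regs[rd] = regs[rn] + imm # ADD rd, rn, #imm
def risc_store(rs, addr):  mem[addr] = regs[rs]      # STR rs, [addr]

cisc_add_mem(0x10, 1)      # one complex instruction...
risc_load("r1", 0x10)      # ...versus three simple ones
risc_add("r1", "r1", 1)
risc_store("r1", 0x10)
print(mem[0x10])           # 43: each path incremented the value once
```

Simple, uniform instructions are cheaper to decode and pipeline, which is a large part of why RISC designs tend to be power-efficient. To be fair, modern x86 chips internally crack complex instructions into RISC-like micro-ops, so the real trade-off is subtler than this toy suggests.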

All this has left me in utter confusion, however much I try to make sense of the situation. How has the PC industry been so blind to the power of RISC? How are Microsoft’s Surface computers not showing the same performance trajectory as Apple’s M1 and M2 chips? Is it simply unrivalled R&D skill in the Apple camp that sets this performance apart from other Arm-based processors in the industry?

Unlike their x86 counterparts, Apple’s silicon chips barely lose processing power even on battery. On top of that, multiple tech reviews have pointed out that the fans barely kick in even under intensive workloads (not to disregard some heating issues, especially with desktop-class workloads). I have watched a comparison video in which the M2 MacBook Air outperformed the Dell XPS 13 Plus in all its glory, and the XPS still died while the MacBook held an amazing 40% charge. More intriguing still, this looks like just the beginning of another long Apple reign in the productivity workspace.

In a sense, the new Arm-based silicon chips occupy the position of a disruptive technology. They began as chips for mobile devices, incapable of desktop-class performance, built the new smart-device market into their stronghold, and are now outperforming some high-performance computers while still in the early stages of their introduction.

With regard to the technology S curve, unless Intel has something up its sleeve, it certainly looks like the x86 architecture is slowly reaching its peak, while Arm has just passed the sweet spot, the point of inflection, on its own curve. To defend its customer base, it makes sense that Intel will, as usual, periodically launch new iterations of its CPUs with 20% to 30% better performance than its latest flagship on the market. Still, the disruptive performance of Apple’s latest devices has me asking: is x86 really dead? That I leave for the reader to ponder.
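For a feel of what those generational bumps compound to, here is a quick sketch (hypothetical arithmetic, assuming the headline gain held for five straight generations):

```python
# Hypothetical compounding of generational CPU gains (not measured data).
def cumulative_speedup(per_gen_gain, generations):
    """Total speedup after N generations, each per_gen_gain better than the last."""
    return (1 + per_gen_gain) ** generations

for gain in (0.20, 0.30):
    print(f"{gain:.0%} per generation over 5 generations: "
          f"{cumulative_speedup(gain, 5):.2f}x")
# 20% per generation over 5 generations: 2.49x
# 30% per generation over 5 generations: 3.71x
```

The S curve’s warning is precisely that such numbers get harder and harder to deliver as an architecture nears its ceiling.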

I may be a dabbler who does not have extensive knowledge of microprocessor architecture. Maybe I am looking at this whole idea from a debatable standpoint; if so, what is the missing link?

Sources:
The Innovator’s Dilemma — Clayton Christensen
Steve Jobs — Walter Isaacson
Macrotrends
Strategy Analytics
