Custom Chips — Moore’s Law isn’t Dead

Saad Imran
May 22 · 4 min read

Back in August last year, I shared an idea on Twitter about a trend I was observing at the intersection of compute chipsets, AI, and general vs. specific computing:

Full thread: https://twitter.com/xsaadimran/status/1034161713807904768

A mega-trend apparent in our world today is the increased use of software for machine learning and data processing, as billions of terabytes of data are collected and translated into valuable information. Or in other words:

“Software is eating the world” — Marc Andreessen.

However, if software really is eating the world, it’s going to need massive amounts of compute power to do it!

Increased demand for silicon is nothing new; in fact, it has been a trend for quite some time now, powering both our personal and handheld computers. Companies such as Intel, Qualcomm, AMD, and Nvidia have designed and supplied these chips as general-purpose computing powerhouses to customers around the world manufacturing both laptops and cell phones. Further, the advent of cloud computing has channeled this massive demand for compute power into more efficient infrastructure in the form of data centers hosted by the likes of Amazon, Google, and Microsoft, which have also been powered by general-purpose chips from the same group.

What is new, however, is an interesting trend in which companies are beginning to realize that general-purpose chips may no longer be sufficient for their massive, often specific, computational needs. Relying on an external supplier for what is essentially the heart of your products and services erodes your edge over the competition. As a result, many of these companies have begun designing their own versions of these chips in-house, targeting them at use cases specific to their products and services, resulting in performance advantages and cost efficiencies.

The first great example of specific computing through custom chipset development is at Apple. The iPhone switched to an in-house-designed system on a chip (SoC), the Apple A4, in 2010. Since then, the chip design team at Apple has delivered tremendous improvement in iPhone performance relative to its peers. In fact, the latest A12 chip is reportedly two years ahead of the competition, delivering close to desktop CPU performance in benchmarks.

Source: https://www.anandtech.com/show/13614/arm-delivers-on-cortex-a76-promises

Apple’s move highlighted to the market the competitive advantage of investing in an in-house chip design team. The harmony achieved by designing chips for their specific compute needs was reflected in both the performance and efficiency of their devices. Furthermore, for a market with short product cycles such as cell phones, it also highlighted the advantage of shipping new chipsets on your own timeline instead of being handcuffed to a supplier’s release schedule.

Since Apple’s move into designing its own chipsets, the competition has followed suit, with both Samsung (Exynos series) and Huawei (Kirin series) designing their own versions to outperform the general-purpose chipsets produced by Qualcomm (Snapdragon series), as illustrated in the chart above.

Another major move came from Tesla, which recently announced a custom-designed AI chip for full self-driving capabilities in its vehicles. The new chipset is reportedly 21x more powerful than its predecessor and significantly more efficient.

Source: https://twitter.com/Tesla/status/1120480482540630022

Similar to Apple, Tesla opted to design its own chipset to beat the competition in the autonomous vehicle market, with the chip touted as a crucial component of full self-driving capability. The general-purpose chips Nvidia produces for Tesla’s competitors must cater to multiple customers with a broad range of features, and they kept Tesla bound to the supplier’s timelines.

Lastly, this trend also extends to the cloud computing world, where all three market leaders (Amazon, Microsoft, and Google) deploy their own custom chipsets in their data centers alongside the general-purpose chips offered by Intel/AMD/Nvidia, building a competitive advantage on both cost and efficiency.

I think this trend is a net positive for the market. On the surface, it may appear inefficient for manufacturers and software teams to invest in chip design when plenty of suppliers are available to choose from. However, the investment can yield a tremendous return, as Apple has proven thus far.

As software demands increase, building specialized chipsets will improve the efficiency and output of the machines running that software. With software and hardware teams working together under one roof, catering to the specific needs outlined by the company’s mission, more cohesive products are built. As a result, I expect the net compute output of our world to continue growing exponentially, debunking claims that “Moore’s law is dead.” The most compute-intensive applications, such as AI-based software, will benefit tremendously from this trend, accelerating our ability to produce more of them.
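To make the compounding intuition concrete, here is a minimal sketch. The growth rates are hypothetical placeholders, not measured figures: the point is only that a specialization multiplier stacked on top of slowing process scaling keeps effective compute compounding much faster than scaling alone.

```python
def effective_compute(years, scaling_rate=1.2, specialization_rate=1.3):
    """Compound annual gains from process scaling and chip specialization.

    Both rates are illustrative assumptions: 20%/year from transistor
    scaling alone, plus 30%/year from custom, workload-specific designs.
    Returns the multiple over today's baseline after `years` years.
    """
    return (scaling_rate * specialization_rate) ** years

# Over five years, the combined curve pulls far ahead of scaling alone.
scaling_only = 1.2 ** 5
combined = effective_compute(5)
print(f"scaling only: {scaling_only:.1f}x, combined: {combined:.1f}x")
```

Even if the per-year numbers are debatable, the multiplicative structure is the argument: specialization gains do not need transistor density to keep doubling for aggregate compute to keep growing exponentially.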

This will likely hurt general-purpose chip designers such as Nvidia and Intel if they can’t adapt, and both companies have already guided down in their latest earnings reports to reflect the shift.

On the flip side, it also presents an opportunity in companies such as Cadence Design Systems and Synopsys, which both offer products (in the form of software) for custom chip design.

Bonus: Facebook, Baidu, Alibaba, and Amazon’s Alexa team are also joining the custom-chip ‘party’, specifically for AI applications.