The Coming Innovation Explosion That Will Be Driven by A.I. Hardware

Rob May
4 min read · Sep 23, 2017

This is republished from InsideAI, my weekly newsletter.

It is time to start placing your bets on A.I. hardware.

Adoption patterns are a huge part of the success of any technology. And, as a result, innovation in the tech sector often happens in these oscillating waves of consolidation followed by decentralization. Think mainframe->client/server->web->cloud->mobile. The whole idea of what parts of a system you should run where, and why, keeps shifting based on what part of the technology stack is seeing the most innovation and where the current bottlenecks are.

In the early days of the web, AOL was the thing. AOL had topical pages for things like business, news, and sports. Some users stayed on those pages in AOL’s walled garden and never realized there was a broader web. Google made that broader web more accessible, and you may remember a time, before the aggregators arose to re-centralize everything, when you actually had to bookmark sites, use blogrolls, or use an RSS reader. Then social media and aggregators changed that again.

This happens in part because centralization in one part of the technology stack enables an explosion of innovation around it: everyone can write for a common standard, so developers and users alike understand how to use the main thing. The iPhone/Android duopoly is a recent example that led to the explosion of mobile apps. If you tried to build a mobile app before those days, say in 2004, you probably had to port it to at least 5–6 platforms, which was a bunch of extra work, and they were platforms you’ve probably never heard of, like Symbian.

Of course, the canonical example is Windows: when it centralized PC operating systems, it similarly enabled an explosion of desktop applications. But the downside is always that the core centralization point becomes less innovative. It becomes a point of stability in the system, not of creativity. For several decades now we’ve been in a world where x86 and ARM architectures have driven almost everything in processing. That has been good because it enabled a multi-decade explosion in software. But we have hit some limits. And I don’t mean Moore’s Law. I mean there are just some things that are easier to do with a different hardware architecture. The rise of GPU programming has proven this.

To me, Microsoft using FPGAs in its data centers, IBM’s work on its TrueNorth chip, and Google developing its own TPU chips are evidence that we are at the beginning of a wave moving away from one centralized hardware architecture. This wave will focus on chips that target and optimize away the bottlenecks in current software. I expect to see more companies developing chips optimized for spiking neuron models, genetic algorithms, probabilistic programming, analog neural systems, and, of course, deep learning. This may fragment the software tool stack at some point, but the software industry is so much bigger than it was when the x86 architecture came along that I think there is enough development bandwidth for multiple major players to move forward simultaneously with multiple tool stacks on various chips.

Chip architectures can be optimized for several different things: speed, power consumption, footprint, reprogrammability, or cost. Chip development cycle times are much, much longer than software cycle times, so expect most of the innovation to come from big companies or very well funded startups. You can’t do this with a seed round.

It’s going to be really interesting to see how companies place their bets on which combination of chip parameters is most important to optimize for, and why. I don’t have any good thoughts on this yet, but I will try to formulate some as I watch and think it through. What I do think is that all of the new chips that come out in the next few years will lead to a performance explosion in A.I., along with a bunch of new ideas that will help keep the A.I. boom going.

Expect to see some of the more obscure hardware researchers who have been doing this work for decades suddenly become A.I. rockstars in 2019/2020, much the way Yann LeCun did: after toiling away on convolutional neural networks for years while everyone thought he was crazy, he was finally vindicated a few years ago when CNNs proved wildly successful in image-recognition competitions.

The fragmentation, though, will mean that software and application providers all have to pick sides and place bets, which is always a very exciting time in the industry. The biggest of the big companies can afford to be wrong and just buy the competition, but for everyone else, these could be boom-or-bust moments.

It’s an exciting time, and we are only in the first inning of A.I. hardware, but it is time to start paying attention. Oh, and if you have an A.I. hardware company, email me. I’d love to take a look at investing :)

Rob May

CTO/Founder at Dianthus, Author of a Machine Intelligence newsletter at inside.com/ai, former CEO at Talla and Backupify.