Intelligence is a Memory Game
Designing processors for AI is a lot about dealing with memory
Computer architecture might feel like an arcane subject, but it’s a hot field these days. It deals with how best to design a computer, and the answer depends on what you want to do with that computer. Intel dominates the CPU market because CPUs are a good fit for general-purpose PC workloads.
Nvidia dominates graphics because GPUs excel at the linear algebra that is key to working with images. Nvidia has also conquered the early machine learning market, because linear algebra happens to sit at the core of modern machine learning algorithms as well.
Given the market cap of Nvidia, which is roughly $100 billion, it’s not surprising that lots of startups and large competitors are trying to cut a piece of this pie for themselves. Intel is working on CPUs configured to behave more like GPUs, Qualcomm and AMD are doing work on AI chips, and so are others.
Google has designed a special-purpose chip they call the Tensor Processing Unit, which, according to Google, is much better at solving certain AI problems. Recently there was news about a Series B funding round of a British startup called Graphcore, which is working on a similar problem. They call their chip the Intelligence Processing Unit, or IPU.
It’s tempting to jump on the bandwagon and fund any engineer who comes up with a three-letter acronym that supposedly solves the future AI computing challenge. And like the art world, the technology world follows a simple rule: “If you missed a current trend, don’t dwell on it; find something new and declare it better than the status quo.” In other words, always talk about the next thing, don’t waste time with what is hot right now.
Right now, Nvidia is the hot thing. The company is seeing traction in many workloads dealing with AI and gaming. The AI portion is still small but growing fast. Here are our thoughts on how we see this space and what factors we’re looking at.
1. Memory management is key
The path to better chip design runs through better memory management, because memory is the bottleneck. Processors are super fast, but memory is slow, and so is the communication between processors and memory. To improve performance, you have to find better ways to deal with memory.
AI happens to be memory intensive: the processors constantly have to read and write data in memory, and while the compute units are fast, they spend much of their time waiting for memory. New chip designs therefore put memory management at their core. There are lots of ideas for how to improve it, some based on hardware and some on software, with most incorporating both, and companies like Graphcore are at the forefront of this.
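The bottleneck can be made concrete with the roofline model: how many floating-point operations a chip can actually sustain depends on how many operations it performs per byte it moves. The sketch below uses made-up hardware numbers (the peak compute and bandwidth figures are illustrative assumptions, not any specific chip):

```python
# Back-of-the-envelope roofline model of the memory bottleneck.
# Hardware numbers below are illustrative assumptions, not a real chip.

PEAK_FLOPS = 10e12  # assumed peak compute: 10 TFLOP/s
PEAK_BW = 500e9     # assumed memory bandwidth: 500 GB/s

def attainable_flops(arithmetic_intensity):
    """Performance is capped either by raw compute or by how fast
    memory can feed the processor.
    arithmetic_intensity = FLOPs performed per byte moved."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

# Elementwise vector add does ~1 FLOP per 12 bytes moved
# (two 4-byte reads plus one 4-byte write): heavily memory-bound.
add_intensity = 1 / 12
print(attainable_flops(add_intensity) / PEAK_FLOPS)

# Large matrix multiplies reuse each byte many times (high intensity),
# so they can run close to peak compute instead.
print(attainable_flops(1000) / PEAK_FLOPS)
```

Under these assumed numbers, the memory-bound workload reaches well under 1% of peak compute, which is exactly why faster memory handling, not faster arithmetic, is where the gains are.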
2. Adoption by developers
Nvidia is successful because they managed to spread their CUDA software framework amongst AI practitioners. In other words, practically anybody who works in academia and/or Silicon Valley uses CUDA to program GPUs. That is a huge advantage, since it creates network effects, which are crucial for the adoption of new technology.
It’s perfectly possible that new processors do much better work but get overlooked because nobody wants to invest the time to learn how to program them. For newcomers, compatibility with standard software packages will be key.
3. Economies of scale
Semiconductors is a fixed-cost business, which means scale brings down cost dramatically. That is probably the main reason there are so few large players in this space: startups have a hard time reaching the volumes needed to drive down cost.
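A quick amortization shows why volume dominates in a fixed-cost business. The fixed and variable cost figures below are invented purely for illustration, not real fab economics:

```python
# Toy illustration of fixed-cost amortization in chip making.
# All dollar figures are made-up assumptions for illustration.

def unit_cost(fixed_cost, variable_cost, volume):
    """Cost per chip = amortized fixed cost + marginal cost per unit."""
    return fixed_cost / volume + variable_cost

FIXED = 500e6   # assumed design, mask, and tooling cost ($)
VARIABLE = 50   # assumed marginal cost per chip ($)

for volume in (100_000, 1_000_000, 10_000_000):
    print(f"{volume:>10,} chips -> ${unit_cost(FIXED, VARIABLE, volume):,.0f} per chip")
```

With these assumed numbers, going from 100,000 to 10 million units cuts the per-chip cost by roughly 50x, which is the scale advantage incumbents enjoy and startups lack.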
Startups like Graphcore will have to overcome all three hurdles to succeed in the marketplace. Others, like Google, Microsoft or Amazon, might not care so much about competing with Nvidia by selling chips to other AI players; they just want better processing solutions for their specific problems. They probably also don’t want to depend solely on Nvidia and are trying to build an ecosystem. The best case for Google & Co. is a large number of startups competing fiercely on price and hence driving down the cost of AI processing chips.
We see Nvidia, Intel, Qualcomm and AMD as best positioned to capture the AI chip market. While all of them will probably find ways to improve performance through better memory management, only Nvidia enjoys the benefit of network effects through CUDA. Hence, that is the field where the others have to do more work.
Startups like Graphcore will most likely play for an acquisition. That means they will demonstrate superior performance in the lab and then prove it with real customers. But in order to drive down cost, they will have to sell to a large player. There is also the option to go the ARM way, that is, stay completely fabless and spread the design across the entire industry, but it took ARM roughly 30 years to do that with mobile chips. Hence, M&A might be more lucrative for startups in the short run.