Cerebras trounces Moore’s Law with first working wafer-scale chip

The race is on for other wafer-scale chips, smaller nanometer semiconductor nodes, and larger wafers. Is Moore’s Law accelerating and have we reached a tipping point?

Eric Martin
Predict

--

Eugenio Culurciello, a fellow at chipmaker Micron who has worked on chip designs for AI but was not involved in the project, calls the scale and ambition of Cerebras’ chip “crazy.”

The quote above is from the Wired article: “To Power AI, This Startup Built a Really, Really Big Chip” by Tom Simonite

Cerebras’ new artificial intelligence (AI) chip, with 1.2 trillion transistors, bears that assessment out. If the trend this chip sets continues, Moore’s Law will have accelerated substantially. For comparison, the next-largest transistor count for a microprocessor according to Wikipedia is 39.54 billion, from AMD’s Epyc Rome processor; the Cerebras chip packs roughly 30 times as many transistors. Note, however, that Wikipedia does not appear to classify the Cerebras chip as a microprocessor, though the feat is impressive nonetheless. Here’s a transistor count chart:
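As a quick sanity check, here is the arithmetic behind that roughly 30x figure, using the two transistor counts cited above:

```python
# Transistor counts cited in the text.
cerebras_wse = 1.2e12      # Cerebras chip: 1.2 trillion transistors
amd_epyc_rome = 39.54e9    # AMD Epyc Rome: 39.54 billion transistors

# How many times larger the Cerebras count is.
ratio = cerebras_wse / amd_epyc_rome
print(f"{ratio:.1f}x")     # prints "30.3x"
```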

Transistor Count per Microprocessor by Year, 1971–2018. Logarithmic scale. Data from Wikipedia. Created by Eric Martin.

Here’s the same chart with me adding an approximation of where the Cerebras chip is for 2019:

Transistor Count per Microprocessor by Year, 1971–2019 with the 2019 Cerebras chip manually added in an estimated position. Logarithmic scale. Data from Wikipedia. Created by Eric Martin.

This chip truly defies the odds, defies Moore’s Law, and blows the red trendline out of the water. It could be THE tipping point in AI. What do I mean by that? I mean that ten years from now, historians may look back on this as the chip that ushered in the era of AI, and with it a new AI Moore’s Law, in which transistors per chip quadruple every two years instead of doubling.

How could a quadrupling every two years happen? Because the Cerebras chip, a custom AI chip, is built on a small semiconductor process node and uses essentially the whole wafer available to it, our AIs can now learn at a speed never before achieved, and at lower power. Energy usage and the speed of training AI models are two of the biggest bottlenecks in AI right now. Cerebras’ chip leapfrogs those problems so that our AI can become more powerful: arguably powerful enough to start helping us design new manufacturing capabilities for chips, including new process nodes, new and larger wafers, and even the intricate software and hardware that comprise cutting-edge AI. With a chip this powerful helping us develop the next chips and the next AI, quadrupling transistor counts every two years doesn’t seem improbable.
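To make the difference between those two growth laws concrete, here is a small sketch projecting transistor counts a decade out under each regime, starting from the 2019 Cerebras count (the starting point and horizon are illustrative assumptions, not figures from the article):

```python
# Compare a doubling vs. a quadrupling of transistor counts
# every two years over a ten-year horizon.
start = 1.2e12            # assumed 2019 starting point: 1.2 trillion transistors
years = 10
periods = years // 2      # growth compounds once every two years

doubling = start * 2 ** periods
quadrupling = start * 4 ** periods

print(f"After {years} years, doubling:    {doubling:.3e}")
print(f"After {years} years, quadrupling: {quadrupling:.3e}")
```

Under the classic doubling, ten years yields a 32x increase; under a quadrupling, it yields 1024x, which is the gap the tipping-point argument hinges on.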

Ray Kurzweil expected the exponential growth of computing power to remain constant. Perhaps he missed the tipping-point effect, in which AI could substantially speed up Moore’s Law itself. The Cerebras chip will be sold as part of a server built around it. I bet Google, where Ray Kurzweil works, will buy one or more of these systems (or the whole company) and remain on top of the AI world.
