Cerebras

Eclipse Ventures
Aug 19

Pierre Lamond

Cerebras Systems has built the world’s biggest chip specifically to meet the demands of AI

In the late 1970s, I sat down with technologist and entrepreneur Gene Amdahl to discuss the prospect of building a “super-chip.” Gene, like myself and others with a history of chip design, knew that using a whole wafer, via a method known as wafer scale integration, or WSI, would vastly improve performance. Ultimately, despite Gene’s best efforts, his work on WSI was unsuccessful. The hypothesis was correct, but at that time there were simply too many uncharted, fundamental technical impediments to make wafer scale integration a reality.

Three decades later, I sat down with another entrepreneur, Andrew Feldman, and to my surprise, had a similar discussion. I’ve known Andrew for many years; I was an investor in his previous company, SeaMicro (which was bought by AMD), and have always admired his passion for tackling deeply technical problems and his ability to build world-class teams. Andrew told me he wanted to build a chip. A very big chip. A chip that could meet the needs of the AI community, which was, and still is, repurposing graphics processors to meet new compute needs. You see, big chips process information more quickly and produce answers in less time. This is incredibly important for researchers crunching enormous amounts of data: it enables them to train models faster, test new ideas, and ultimately solve previously unsolvable problems. A chip like that could fuel unprecedented discovery, and the only way to build it was through wafer scale integration. It was a huge endeavor.

Andrew and the Cerebras team have built that chip. Having successfully navigated issues of yield, power delivery, cross-reticle connectivity, packaging, and more, today they unveil the Cerebras Wafer Scale Engine (WSE): the largest chip ever built. And it’s remarkable. With a 1,000x performance improvement over what’s currently available, the Cerebras WSE comprises more than 1.2 trillion transistors and measures 46,225 square millimeters. It also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. For comparison, the first chips I created in the 1960s featured a few hundred transistors.

As an early employee at Fairchild Semiconductor and a co-founder of National Semiconductor, I saw the unprecedented impact of the industry’s shift from the transistor to the integrated circuit. Today, we’re at a similar inflection point for compute. The world is waiting for AI to fulfill its potential, and that can only happen with a dedicated chip. A chip designed from the ground up for AI work. Every once in a while, a technology company comes along and defines a generation:

Cerebras will define the AI generation.
