Hello everyone! We are overwhelmed by the interest and support we received for our flagship CS-1 product announcement at Supercomputing last week. And we remain grateful to Argonne National Lab for being our first customer and sharing the story of both the workloads they are accelerating with the CS-1 and their datacenter deployment experience.

We had a fantastic turnout at Supercomputing — the booth was bustling all day, every day! We enjoyed hosting visitors and answering their questions about the features of the CS-1, and the whys and hows of the engineering challenges we overcame in building it.


Greetings! Today I am proud and honored to announce that Argonne National Laboratory (ANL), one of the nation’s premier research centers, is the first customer to deploy the Cerebras CS-1 system. This is the result of nearly two years of deep collaboration, and it is extremely fulfilling to us in our mission as a company that the CS-1 is being used for such diverse purposes as understanding cancer drug response rates, traumatic brain injury, gravitational wave detection and parameter estimation, material science and physics — just to name a few.


We are fortunate to have great partners at ANL like Rick Stevens, the Associate Laboratory Director for Computing, Environment and Life Sciences, Tom Brettin, the Strategic Program Manager focusing on the development of projects at the intersection of genomics, artificial intelligence, and leadership scale computing, and Hyunseung (Harry) Yoo, the lead researcher for the very first customer model to run on the CS-1, which investigates tumor cells’ response to different drug treatments. The ultimate goal of this project is to develop personalized treatment plans for cancer patients based on their genomic make-up, thus improving survival rates. Check out the video below to see what we’ve been up to together. …

The Cerebras CS-1, the industry’s fastest AI computer

Back in August, Cerebras took the industry by storm by announcing the Wafer-Scale Engine (WSE). The WSE is the largest commercial chip ever manufactured, and the industry’s first wafer-scale processor, built from the ground up to solve the problem of deep learning compute. It consists of 1.2 trillion transistors, packed onto a single chip with 400,000 AI-optimized cores, connected by a 100Pbit/s interconnect. The cores are fed by 18 GB of super-fast, on-chip memory, with an unprecedented 9 PB/s of memory bandwidth.
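For a sense of scale, here is a quick per-core breakdown of those published numbers. The totals come from the announcement; dividing them across cores is our own back-of-the-envelope arithmetic, not an official specification:

```python
# Per-core breakdown of the published WSE figures.
# Totals are from the announcement above; the per-core split
# is our own back-of-the-envelope arithmetic.

CORES = 400_000                  # AI-optimized cores
ON_CHIP_MEMORY_BYTES = 18e9      # 18 GB of on-chip memory
MEMORY_BANDWIDTH_BYTES = 9e15    # 9 PB/s of memory bandwidth

mem_per_core_kb = ON_CHIP_MEMORY_BYTES / CORES / 1e3
bw_per_core_gbs = MEMORY_BANDWIDTH_BYTES / CORES / 1e9

print(f"Memory per core:    ~{mem_per_core_kb:.0f} KB")    # ~45 KB
print(f"Bandwidth per core: ~{bw_per_core_gbs:.1f} GB/s")  # ~22.5 GB/s
```

In other words, each core gets its own small slice of fast local memory with enormous bandwidth, which is the point of keeping all the memory on-chip.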

The WSE is but one part of our AI compute solution. Today, I am proud to unveil the Cerebras CS-1, the world’s fastest deep learning computer. Along with the WSE and the Cerebras software platform, the CS-1 is a comprehensive, high-performance AI compute solution, one of a kind in the industry. …


On October 31st at TensorFlow World, my colleague Manjunath Kudlur and I were honored to speak publicly for the first time about the Cerebras software stack. This stack links deep learning researchers to the massive compute capabilities of the Wafer Scale Engine (WSE).

The WSE (pronounced “wise”) is the largest commercial chip ever manufactured, built to solve the problem of deep learning compute. The WSE is 1.2 trillion transistors, packed onto a single chip with 400,000 AI-optimized cores, connected by a 100Pbit/s interconnect.

Dhiraj Mallick, VP Engineering and Business Development at Cerebras Systems


I had the pleasure of sharing the story of our collaboration with TSMC to develop innovative technologies that enabled the WSE, a wafer scale chip that is optimized for artificial intelligence workloads.

WSE is the world’s first commercial wafer scale chip, and we couldn’t have accomplished this without TSMC’s invaluable partnership and excellent engineering.

Here’s a link to the video of my talk; I hope you enjoy it.

-Dhiraj

We are always looking for extraordinary team members to join us on our journey to change compute forever. Take a look at our careers page for more details, and follow us on Medium and our blog for future updates.

Originally published at https://www.cerebras.net on October 11, 2019.

Andrew Feldman, CEO of Cerebras Systems


This month, we at Cerebras Systems continued to build on the momentum from our first public reveal of the world’s largest chip, the Wafer Scale Engine (WSE), at Hot Chips 2019.

I am proud to announce our first customers, Argonne National Laboratory and Lawrence Livermore National Laboratory. We’ve commenced a multi-year partnership with these U.S. Department of Energy national labs to advance deep learning for basic and applied science, and medicine.

The opportunity to incorporate the largest and fastest AI chip ever, the Cerebras WSE, into our advanced computing infrastructure will enable us to dramatically accelerate our deep learning research in science, engineering and health. …

What is wafer-scale integration?

Let’s begin with the concepts of wafers and integration — the basics of chip making.

Silicon chips are made in chip fabrication facilities, or fabs, owned by Intel, Taiwan Semiconductor (TSMC), Samsung, or a few other companies. A chip fab is a sort of printing press. The electronic circuits of a processor or memory or any other computer chip are printed onto a thin circular disk of silicon. The disk is called a wafer, and it plays the role of the paper in this printing process. (Fabs are uber-fancy printing presses costing billions, using photolithography, chemical deposition and etching to do the printing, in super-clean rooms run by employees who have to wear bunny suits. …

The Need for Speed

Deep learning has emerged as the most important computational workload of our generation. Tasks that historically were the sole domain of humans are now routinely performed by computers at human or superhuman levels.

Deep learning is also profoundly computationally intensive. A recent report by OpenAI showed that, between 2012 and 2018, the compute used to train the largest models increased by 300,000X. In other words, AI computing is growing 25,000X faster than Moore’s law at its peak.
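As a rough check on those figures (our arithmetic, not from the OpenAI report itself): a 300,000X increase over six years implies that AI training compute doubled roughly every four months, while Moore’s law yields only a single-digit multiple over the same window. The exact ratio depends on the doubling period you assume for Moore’s law; a two-year doubling is assumed below.

```python
import math

# Rough check of the growth comparison above; this arithmetic is
# ours, not taken from the OpenAI report.

growth = 300_000   # increase in training compute, 2012-2018
years = 6

doublings = math.log2(growth)              # ~18.2 doublings
doubling_months = years * 12 / doublings   # ~4.0 months

moore_growth = 2 ** (years / 2)            # ~8x, doubling every 2 years

print(f"AI compute doubling time: ~{doubling_months:.1f} months")
print(f"Moore's law over {years} years: ~{moore_growth:.0f}x")
print(f"AI growth vs. Moore's law: ~{growth / moore_growth:,.0f}x")
```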

To meet the growing computational requirements of AI, Cerebras has designed and manufactured the largest chip ever built. The Cerebras Wafer Scale Engine (WSE) measures 46,225 square millimeters and contains more than 1.2 …

This past Monday, August 19, I was proud and excited to reveal the Cerebras Wafer Scale Engine (WSE) in my Hot Chips talk.

The WSE (pronounced “wise”) is the largest commercial chip ever manufactured, built to solve the problem of deep learning compute. The WSE is 1.2 trillion transistors, packed onto a single 215mm x 215mm chip with 400,000 AI-optimized cores, connected by a 100Pbit/s interconnect. The cores are fed by 18 GB of super-fast, on-chip memory, with an unprecedented 9 PB/s of memory bandwidth.
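Those dimensions are easy to sanity-check: 215 mm on a side works out to the 46,225 mm² die area quoted earlier. The GPU comparison in the sketch below is our own, assuming a large contemporary GPU die of roughly 815 mm²:

```python
# Sanity check of the WSE die dimensions quoted above.

side_mm = 215               # die edge length, as stated above
area_mm2 = side_mm ** 2     # 215 mm x 215 mm

print(f"WSE die area: {area_mm2:,} mm^2")   # 46,225 mm^2

# Assumed for comparison: a large contemporary GPU die (~815 mm^2).
gpu_die_mm2 = 815
print(f"vs. a ~{gpu_die_mm2} mm^2 GPU die: ~{area_mm2 / gpu_die_mm2:.0f}x larger")
```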

Why does this matter? We believe that deep learning is the most important computational workload of our time. Its requirements are unique and demand is growing at an unprecedented rate. Large training tasks often require peta- or even exa-scale compute: it commonly takes days or even months to train large models with today’s processors. …


Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types who love doing fearless engineering. We have come together to build a new class of computer to accelerate deep learning.

Today, I’m excited to introduce the first element of the Cerebras solution — the Cerebras Wafer Scale Engine, the largest chip in the world and the heart of our deep learning system.

In the last few years, deep learning has risen as one of the most important workloads of our time. …
