High Performance Computing is More Parallel Than Ever

The end of Moore’s Law and the explosion of machine learning is driving growth in parallel computing, so what is it?

Tyler Elliot Bettilyon
Published in Teb’s Lab
9 min read · Dec 20, 2018


In 1965 Gordon Moore made the observation that became known as Moore’s Law. Specifically, he noted that between 1959 and 1965 the number of electronic components we could fit on a given area of a silicon chip had doubled every 18 months. Moore’s observation became a prediction, then an expectation, and from 1965 to 2012 it was perhaps the guiding principle of the computer hardware industry. Components shrank at an exponential rate, and the ultimate result was that CPU speeds doubled every 18 months.

During the era of Moore’s Law, a common mantra in the software world was, “the programmer’s time is more valuable than the computer’s time.” This mentality brought plenty of wonderful things with it, including dynamic and expressive languages such as Python and JavaScript. These languages produce programs that — compared to their equivalents written in C — are slow and use lots of memory. Those trade-offs were easy to make when, in a year and a half, computers would be twice as fast and have more RAM and larger CPU caches. These advances also brought down the cost of creating software and made it much easier for beginners to learn to program.
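As a rough illustration of that trade-off, here is a minimal sketch (the numbers and workload are illustrative, not a rigorous benchmark) comparing an explicit Python loop to the built-in `sum()`, which is implemented in C inside the interpreter:

```python
import timeit

def loop_sum(n):
    """Sum 0..n-1 with an explicit Python loop, executed by the interpreter."""
    total = 0
    for i in range(n):
        total += i
    return total

n = 1_000_000

# Time each approach over several runs; the C-backed built-in is
# typically several times faster than the interpreted loop.
loop_time = timeit.timeit(lambda: loop_sum(n), number=10)
builtin_time = timeit.timeit(lambda: sum(range(n)), number=10)

print(f"Python loop: {loop_time:.3f}s, built-in sum: {builtin_time:.3f}s")
```

Both compute the same result; the difference is purely in how much work is pushed down to compiled C code rather than interpreted Python bytecode — the same gap that separates hand-written C from equivalent pure-Python programs.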

But Moore’s Law is dead.
