The Second Computing Revolution — The Rise of Quantum Computing and Other Hardware Architectures

Dominik Andrzejczuk
Aug 13

The First Computing Revolution

Since the advent of the first computing revolution, with the invention of complementary metal–oxide–semiconductor (CMOS) technology and the integrated circuit, humanity has made strides in technological advancement that were previously in the realm of sheer imagination. These strides have helped us explore the deepest depths of our solar system, flattened our world through the invention of GPS and the internet, and helped us better understand genomics and its role in evolution and drug discovery. Moreover, the real power of the integrated circuit, developed at Fairchild Semiconductor in the early 1960s, was the phenomenon of Moore’s Law: the doubling of the number of transistors on a chip, through successive die shrinks, approximately every two years. This meant that a computer’s theoretical ability to solve computational problems grew exponentially as a function of linear time.
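
As a rough illustration of what a fixed doubling period implies, the sketch below projects transistor counts forward from the Intel 4004’s roughly 2,300 transistors in 1971. The numbers are for intuition only, not a precise history of any particular product line.

```python
# Back-of-the-envelope illustration of Moore's Law:
# transistor count doubling roughly every two years.
def projected_transistors(start_count, start_year, target_year, doubling_period=2):
    """Project transistor count assuming one doubling every `doubling_period` years."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# The Intel 4004 (1971) had roughly 2,300 transistors.
for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(2300, 1971, year):,.0f}")
# Counts grow from thousands to tens of billions over fifty years --
# the exponential payoff of a fixed doubling period.
```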

But Moore’s Law is not what it used to be. According to David Patterson, University of California professor, Google engineer, and RISC pioneer, “single program performance only grew 3 percent” in the most recent year. “We’re at the end of the performance scaling that we are used to. When performance doubled every 18 months, people would throw out their desktop computers that were working fine because a friend’s new computer was so much faster.” Patterson is calling for new hardware and software architectures if we want to preserve the prior trajectory of Moore’s Law.

As an example on the software side, Patterson noted that rewriting a Python program in C gets you roughly a 50x speedup in performance. Add in various optimization techniques and the speedup increases dramatically. It wouldn’t be too much of a stretch, he indicated, “to make an improvement of a factor of 1,000 in Python.”
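
Here is a minimal sketch of the kind of gap Patterson is describing: the same matrix multiply written as pure interpreted Python loops versus a vectorized call that dispatches to optimized compiled code. The exact speedup depends on the machine and the workload; the values below are purely illustrative.

```python
# Pure-Python loops vs. a BLAS-backed multiply implemented in optimized C/Fortran.
import time
import numpy as np

def matmul_pure_python(a, b):
    """Naive triple-loop matrix multiply in interpreted Python."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
matmul_pure_python(a.tolist(), b.tolist())
t_python = time.perf_counter() - t0

t0 = time.perf_counter()
c = a @ b  # vectorized multiply, executed in compiled code
t_numpy = time.perf_counter() - t0

print(f"pure Python: {t_python:.3f}s, numpy: {t_numpy:.4f}s, "
      f"speedup ~{t_python / t_numpy:.0f}x")
```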

Chip manufacturers today have focused mostly on increasing the total number of cores, giving CPUs more parallel processing power. Microprocessor trend data show that power, frequency, and single-thread performance began to taper off between 2000 and 2010, while the total number of cores increased to compensate for these restrictions.

The restrictions themselves are both economic and physical. According to Robert Colwell, director of the Microsystems Technology Office at the Defense Advanced Research Projects Agency, “The silicon business is incredibly expensive for folks like Intel, who have to pay huge amounts of money [$6–8 Billion] to develop the next-generation silicon technology.” These ever-growing constraints are tightening a noose around the market and slowing the pace of technological innovation.

The Second Computing Revolution

At a conference in 1981, renowned physicist Richard Feynman urged the world to build a Quantum Computer. He said “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

Quantum Computing

Dr. Feynman was absolutely correct: classical computers are based on “bits” of information, each represented as a 0 or a 1. If one were to design a computer that was fundamentally quantum mechanical, then its fundamental unit of information should exhibit quantum mechanical behavior. The limitation of the classical bit is that it can only hold a single value at a time (a 0 or a 1). A quantum mechanical particle, however, exhibits strange behaviors such as quantum superposition, the ability to exist in multiple states at the same time. If this sounds strange to you, watch a quick video on the famous double-slit experiment to better understand the laws of physics that govern the subatomic world. A quantum bit (or qubit) should therefore possess the properties of a quantum mechanical particle: it can represent a 0, a 1, or a weighted combination of both simultaneously, as the sketch below illustrates.
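
To make the idea concrete, here is a minimal sketch, using plain NumPy rather than any quantum SDK, of a qubit’s state as two complex amplitudes. A Hadamard gate puts the |0⟩ state into an equal superposition, and measurement probabilities come from the squared amplitudes.

```python
# A qubit as a vector of two complex amplitudes: |psi> = a|0> + b|1>,
# with |a|^2 + |b|^2 = 1.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)               # definite state |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # gate that creates superposition

psi = hadamard @ ket0                                 # equal superposition of |0> and |1>
probabilities = np.abs(psi) ** 2                      # Born rule: measurement probabilities

print(psi)            # [0.707+0j, 0.707+0j]
print(probabilities)  # [0.5, 0.5] -- a 50/50 chance of reading 0 or 1
```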

Today’s quantum computers express qubits in five different ways:

  • Superconducting
  • Trapped Ions
  • Neutral Atoms in an Optical Lattice
  • Topological
  • Photonics

The jury is still out as to which approach will be the standard, but researchers in both the public and private sectors are making advancements every day.

Optical Computing

Optical computers use photons of light as their unit of information, as opposed to electrons.

Photons have particularly interesting physical advantages over electrons: they are massless and interact only weakly with their surrounding environment. CPUs generate loads of heat due to resistive losses as electrons move through transistors and interconnects. This is also why CPU clock speeds have plateaued around the 5 GHz mark, given that overheating can damage and melt the chips.

Photons can also be encoded with additional information, increasing how much data a single photon can carry. A photon can vary in amplitude and frequency, and it carries quantum properties such as spin (polarization) and orbital angular momentum.

Optical computers today are built from complex networks of lasers, lenses, and sensors, arranged to act as a kind of analogue linear-algebra abacus: light passing through the system effectively performs matrix operations. A small sketch of the operation such hardware accelerates is shown below.
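
To show what that “abacus” computes, here is a minimal digital sketch of the matrix-vector product that such optical hardware would perform in analogue form. The matrix and signal values are illustrative only; in a real system the matrix would be encoded in the optical elements and the vector in light intensities.

```python
# The core operation an optical "linear-algebra abacus" implements.
import numpy as np

weights = np.array([[0.2, 0.8, 0.1],
                    [0.5, 0.3, 0.9]])   # stand-in for an optically encoded matrix
signal = np.array([1.0, 0.5, 0.25])     # stand-in for input light amplitudes

output = weights @ signal               # what the optics compute in analogue
print(output)                           # [0.625, 0.875]
```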

Fathom Computing, out of Palo Alto, California, has already built a first prototype that is powered entirely by light. Not only do these computers promise to increase computational capacity by orders of magnitude, but they also operate at a fraction of the power their silicon counterparts consume. This opens new doors in the realm of autonomous vehicles, which require massive amounts of local computation that is both extremely fast and anything but power hungry.

Application Specific Silicon

On May 23rd, 2018, Intel revealed its Nervana NNP-L1000 neural network processor, designed specifically for accelerated AI training. Cerebras Systems, out of Los Altos, California, has raised upwards of $100 million in financing to fund its own microprocessor designed specifically for artificial intelligence applications.

The rise of AI is quickly “decommoditizing” the silicon chip industry. New chip architectures can speed up certain types of algorithms by orders of magnitude. This, in turn, drives value, and new chip manufacturers are capitalizing on the arbitrage.

Companies attacking the chip market are making the case that AI workloads will run far faster on specialized silicon. The most likely candidates are ASICs (application-specific integrated circuits), which can be highly optimized to perform a specific task. If you think of chips as a progression from generic to specialized, the spectrum runs from CPUs on one side, through GPUs and FPGAs in the middle, to ASICs at the other extreme.

CPUs are very efficient at performing highly complex operations, essentially the opposite of the simple, repetitive math that underpins deep learning training and inference. The new entrants are betting on ASICs because they can be designed at the chip level to handle a high volume of simple tasks. The chip can be dedicated to a set of narrow functions, in this case matrix multiplication with a high degree of parallelism. Even FPGAs, which are designed to be programmable and thus somewhat more general, are hindered by that built-in versatility. The sketch below shows why this workload parallelizes so well.
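
To illustrate why this math suits massive parallelism, here is a minimal sketch, in ordinary Python and purely for intuition, of a matrix multiply broken into independent multiply-accumulate jobs. A deep-learning ASIC dedicates thousands of hardware units to exactly this pattern; the thread pool below only stands in for that idea.

```python
# A matrix multiply decomposes into many independent multiply-accumulate
# jobs, one per output element, with no shared state between them.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def output_element(args):
    """One independent unit of work: a single dot product (multiply-accumulate)."""
    row, col = args
    return float(np.dot(row, col))

a = np.random.rand(64, 128)
b = np.random.rand(128, 32)

# Every (i, j) output cell can be computed by a separate worker.
jobs = [(a[i], b[:, j]) for i in range(a.shape[0]) for j in range(b.shape[1])]
with ThreadPoolExecutor() as pool:
    flat = list(pool.map(output_element, jobs))

result = np.array(flat).reshape(a.shape[0], b.shape[1])
assert np.allclose(result, a @ b)  # same answer as the conventional multiply
```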

Conclusion

The demand for higher-performance hardware architectures is driving innovation across the hardware space. What used to be a treacherous vertical for investors is once again becoming attractive. The 2020s will surely be the decade in which new hardware platforms undergo a renaissance.
