Quantum Supremacy

A Nature Journal

Nicholas Teague
From the Diaries of John Henry
8 min read · Oct 26, 2019


Harvest — Neil Young

If you’ve been following ‘current’ events in the quantum computing space (sorry, bad obscure pun), you may be as excited as I am at an important announcement that a long sought milestone has finally been achieved in the field: the researchers on Google’s quantum computing team have just published a paper in the Nature journal (not my nature journal, the real one) demonstrating a validation of “quantum supremacy” for their hardware. (Of course this news was not a total shock, as an early draft of the paper had been leaked a few weeks ago by some folks at NASA; not sure why, perhaps they wanted a few more eyeballs for peer review.) Anyhoo, quantum supremacy is kind of a big deal, as the term refers to a real quantum computer demonstrating an experimentally validated speedup in the application of an algorithm that would otherwise be out of reach for classical computing resources.

There had actually been some ongoing debate about whether such a demonstration was even possible, and a failure here, who knows, could have been evidence of some unknown physical limit to computation. The Church-Turing thesis says that any real world computation can be translated into one performed by a Turing machine. Of course there are still plenty of problems that are outside the reach of Turing machines (the halting problem being the most infamous). This realization of quantum supremacy still falls within that constraint: for this demonstration the researchers could and did validate the same calculations via simulation on Turing machine equivalent hardware, in this case a supercomputer; the difference was in the efficiency. IBM researchers countered the announcement with a claimed estimate that they could accomplish a comparable simulation with classical resources on Summit, the world’s largest supercomputer, in just a few days of dedicated application (as opposed to the 10,000 years cited by the Nature paper for a supercomputer using a different simulation algorithm). Even so, such an implementation would still be validation of a quantum speedup, since the quantum hardware produced its samples in a matter of minutes. Not to mention the classical simulation would be expected to take hundreds of thousands of terabytes of storage on hardware with a footprint of about two basketball courts and, I’m sure, quite a large energy expenditure, whereas a quantum computer is generally about the size of a walk-in refrigerator with its primary energy draw being the refrigeration of qubits to a supercooled state approaching absolute zero (I haven’t seen a formal analysis of this energy efficiency comparison, but I would expect that it is material). There’s no violation of the Church-Turing thesis being demonstrated here, save for the violation of the polynomial time overhead requirement of the Extended Church-Turing thesis.
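As a quick sanity check on where those storage figures come from (my own back-of-the-envelope arithmetic, assuming double-precision complex amplitudes rather than anything quoted in the papers), a full classical representation of a 53 qubit state vector needs 2^53 amplitudes:

```python
# Back-of-the-envelope storage estimate for a full 53-qubit state vector.
amplitudes = 2 ** 53             # one complex amplitude per basis state
bytes_per_amplitude = 16         # double-precision complex (2 x 8 bytes), an assumption
total_bytes = amplitudes * bytes_per_amplitude
print(f"≈ {total_bytes / 1e15:.0f} petabytes, i.e. ≈ {total_bytes / 1e12:,.0f} terabytes")
```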

Diving into the weeds for a minute, my understanding is that part of the hardware advance that made this quantum supremacy demonstration possible was the refinement of interactions between adjacent qubits in this quantum computer chip architecture (dubbed by Google ‘Sycamore’). To give you a little context, my understanding is that the qubits themselves are built in a 2D grid arrangement, with interconnections only between adjacent qubits. Thus one of the challenges for implementing any of the more sophisticated algorithms is that multi-qubit gates intended for interactions between non-adjacent physical qubits must be channeled through intermediate qubits, by way of swap operations between neighboring pairs, each of which requires a corresponding intermediate time step to implement. In addition to the challenge of these overheads is the potential for “cross-talk” between adjacent qubits during gate application. It turns out one of the new innovations of this hardware, as opposed to the previous generation ‘Bristlecone’, was that couplings between adjacent physical qubits not needed for gate applications could be “turned off” during any given time step, thus opening the channel to a denser application of intermediate gates with fewer unintentional “bridges” of cross-talk between adjacent qubits.
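To make that routing overhead concrete, here’s a minimal sketch using the open-source Cirq library (my own toy illustration, not the team’s code): to apply a two-qubit gate between grid qubits that aren’t adjacent, the state has to be walked across an intermediate qubit with swap operations, each consuming extra time steps.

```python
import cirq

# hypothetical 3x3 grid of qubits with nearest-neighbor connectivity
qubits = {(r, c): cirq.GridQubit(r, c) for r in range(3) for c in range(3)}

circuit = cirq.Circuit()
# goal: a CZ between the (0, 0) and (0, 2) qubits, which are not adjacent
a, mid, b = qubits[(0, 0)], qubits[(0, 1)], qubits[(0, 2)]
circuit.append(cirq.SWAP(a, mid))   # walk qubit (0,0)'s state one step to the right
circuit.append(cirq.CZ(mid, b))     # now the interaction is between neighbors
circuit.append(cirq.SWAP(a, mid))   # walk it back, restoring the original layout
print(circuit)
```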

As for the target algorithm of the demonstration, well, let’s just say it wasn’t exactly intended to demonstrate usefulness, and I’ll clarify what that means. As a metaphor, consider validating the emissions performance of some internal combustion motor boat: you’re not going to just drive it up and down the river, instead you’re going to conduct an experiment in a controlled laboratory environment to facilitate precise validation of performance. (Of course the problem with such well defined experimental conditions is that they become easier targets for gaming the test, a certain incident with diesel cars comes to mind, but I’m just mentioning that in the hypothetical sense; I don’t think anyone suspects gaming of this quantum supremacy validation.) The test performed here was sort of an exercise in randomness, in that the algorithm applied was merely that: a random selection of quantum gates applied to the 53 qubit processor so as to prepare an unknown state that could then be extensively sampled to determine the distribution properties of the returned measurements. The validation came from simulating that same set of random gate applications with classical computing resources (with their considerable overhead for this application) so as to validate the returned distribution properties of qubit measurements. That is not to say these methods were without some usefulness, for example I believe a version of this experiment could serve as a resource for generating certifiable random numbers, hey it beats flipping a coin.
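For a feel of what that validation looks like, here’s a toy NumPy sketch of the sampling idea (my own illustration, nothing from the paper; a single dense random unitary stands in for the deep sequence of random gates): prepare a random state, sample bitstrings from it, and score the samples against the ideally simulated distribution with a linear cross-entropy style figure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                        # toy qubit count (the experiment used 53)
dim = 2 ** n

def random_unitary(d):
    # Haar-ish random unitary from the QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases        # fix column phases so the distribution is uniform

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                                # start in |00...0>
state = random_unitary(dim) @ state           # "random circuit" applied to the register

probs = np.abs(state) ** 2                    # ideal output distribution (classical simulation)
samples = rng.choice(dim, size=10_000, p=probs)   # simulated measurement samples

# linear cross-entropy style score: samples from the ideal device land on
# high-probability bitstrings, so dim * mean(p) ≈ 2, while uniform noise gives ≈ 1
score = dim * probs[samples].mean() - 1.0
print(f"XEB-style score: {score:.2f} (≈1 ideal, ≈0 for uniform noise)")
```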

One important thing to keep in mind is that the demonstrated quantum supremacy is not exactly the highest threshold for quantum computing performance. Even though the researchers have shown a quantum speedup on their hardware versus what is available with classical resources, the implementation still has a fundamental limitation in that it is part of the NISQ paradigm, an acronym standing for Noisy Intermediate-Scale Quantum technology. By definition the NISQ paradigm (which I believe generalizes across the various competing styles of gate-based qubit architectures) has limitations on the achievable depth of gate applications. With each gate applied, or each time step, a certain amount of decoherence is introduced, by which I mean a small amount of noise leaks into the qubit superpositions. Part of the challenge of achieving scalable implementations beyond the 50–100 qubit configurations of current NISQ realizations is that the qubit and gate implementations will need to exceed some accuracy threshold (whose value varies based on the style of qubit). Once a quantum computing architecture exceeds such an accuracy threshold, we will be able to implement an arbitrary number of qubits with decoherence recovered via quantum error correction, which will signal the graduation from the NISQ era to that of fault tolerant universal quantum computation, opening the door to wide applications such as all of those nifty algorithms we keep hearing about for search, factoring, quantum machine learning, etc.
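A back-of-the-envelope calculation shows why gate depth is so constrained in the NISQ regime (the error rate below is just an illustrative placeholder of mine, not a measured figure from the paper):

```python
# Rough sketch: per-gate errors compound multiplicatively, so overall circuit
# fidelity decays roughly exponentially with the number of gates applied.
two_qubit_error = 0.006   # illustrative per-gate error rate (~0.6%), an assumption
for num_gates in (100, 500, 1000, 5000):
    fidelity = (1 - two_qubit_error) ** num_gates
    print(f"{num_gates:>5} gates -> estimated circuit fidelity ≈ {fidelity:.3f}")
```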

Since I have touched on the differentiation in this blog previously, let’s quickly talk about the implications for adiabatic quantum computing in relation to this demonstration of NISQ technology, based on a generous Q&A session offered by the researcher Scott Aaronson on his never boring blog Shtetl-Optimized. (I owe a thank you to Scott: he was actually the one who inspired me to take on the writings of Roger Penrose a while back, which has been a not insignificant source of inspiration in my (somewhat haphazard) explorations of theoretical physics in the time since.) Anyhoo, as a quick rehash, adiabatic quantum computers (like those developed by D-Wave Systems for instance) differ from gate-based NISQ devices (like those developed by Rigetti Computing for instance) in that their primary application is a specialized type of optimization algorithm which makes use of a kind of quantum tunneling via quantum annealing, an alternative to the simulated annealing methods available with classical resources. (Just to give you an idea of potential applications for these optimization algorithms, and this is just one example, such sophisticated optimizations might be applied to select architectures and/or tune hyperparameters for a machine learning project.) Well, my understanding is that the quantum supremacy realized by the Google team does not directly imply quantum supremacy for the “stoquastic” adiabatic quantum computers (I’ll leave that term to the reader to research, I’m not an expert), which until very recently have been the basis of D-Wave architectures for instance. That being said, there is a theorem by Aharonov et al that any quantum computation can be performed adiabatically on a non-stoquastic adiabatic quantum computer. (Starting to get out of my depth here.) So don’t rule out adiabatic quantum computers in the race for fault tolerance just yet.
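For contrast with the quantum annealing approach, here’s a compact sketch of the classical counterpart it’s meant to improve upon, plain simulated annealing on a toy Ising-style problem (my own illustration, not D-Wave’s tooling or API):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
J = rng.normal(size=(n, n))          # random symmetric couplings between spins
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

def energy(s):
    return -0.5 * s @ J @ s          # Ising energy of spin configuration s

s = rng.choice([-1, 1], size=n)      # random initial spin configuration
steps = 5000
for step in range(steps):
    T = max(0.01, 2.0 * (1 - step / steps))                 # cooling schedule
    i = rng.integers(n)
    flipped = np.where(np.arange(n) == i, -s, s)            # candidate: flip spin i
    delta = energy(flipped) - energy(s)
    if delta < 0 or rng.random() < np.exp(-delta / T):      # Metropolis acceptance
        s = flipped
print("energy after annealing:", energy(s))
```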

It’s an exciting time for all of those quantum computing researchers who have been hard at work for a very long time trying to realize the until now only theoretical potential of quantum computing for applications beyond the reach of classical resources. When you hear stories of realizing quantum states that would otherwise take hundreds of thousands of terabytes to store, you might ask “well who cares if there’s a speedup, we just took a quantum state requiring the world’s largest supercomputer to store and implemented it on a single piece of hardware, that’s pretty supreme in itself, isn’t it?” (this was actually my initial thought when I saw those figures). Well, keep in mind that this same state could also be represented using just a pencil and a single sheet of paper by drawing a 53 qubit quantum circuit diagram. The supremacy is not just in the ability to store the state, it comes in the ability to perform computations on that basis. I mean, I’m sure some of you are pretty good at math and all, but at some point a pencil and paper probably just isn’t enough. :) Anyhoo, exciting times, I’m sure there’s more to come. Hopefully I’ll see some of you guys downstream.

Books that were referenced here or otherwise inspired this post:

Quantum Computation and Quantum Information — Michael Nielsen and Isaac Chuang

Quantum Computing Since Democritus — Scott Aaronson

(As an Amazon Associate I earn from qualifying purchases.)

Albums that were referenced here or otherwise inspired this post:

Harvest — Neil Young

(As an Amazon Associate I earn from qualifying purchases.)

For further readings please check out my Table of Contents, Book Recommendations, and Music Recommendations.

Oh yeah before you go, check out Automunge for automated preparation of tabular data for machine learning: automunge.com
