Scott Aaronson Says Complexity Theory is ‘Inextricable’ from Quantum Computing

Published in Qiskit · Aug 2, 2022

By Robert Davis, Technical Writer, IBM Quantum and Qiskit.

Researchers believe quantum computers will soon be able to solve certain problems more efficiently than classical computers. To measure and classify those efficiency gains, we rely on computational complexity theory, a branch of computer science that centers on measuring and comparing the computational effort that goes into solving different kinds of problems. Complexity theory gives us a useful shorthand for describing the various speedups and problem classes we encounter in computing, but it’s also much more than that. In fact, at least one leading expert argues that without complexity theory, we wouldn’t have quantum computing at all.

Scott Aaronson (Illustration by Joel Russell Huffman)

Scott Aaronson is a professor of computer science and director of the Quantum Information Center at the University of Texas at Austin, as well as one of the world’s foremost experts in quantum computing and computational complexity theory. Aaronson’s research has helped define some of the fundamental capabilities and limitations of quantum computers, and his work as a science communicator has helped popularize quantum computing and drawn more people into the quantum community via projects like the Complexity Zoo wiki and his Shtetl-Optimized blog. In an interview, Aaronson spoke about the foundational role computational complexity theory has played in the history of quantum computing.

“I think the centrality of complexity theory to the subject has been there since the beginning,” Aaronson said. “You need complexity theory — even just to articulate why a quantum computer would be better than a classical computer at all.”

A brief overview of computational complexity

To better understand Aaronson’s point, let’s take a brief look at Shor’s algorithm. We often say that a quantum computer running Shor’s algorithm could theoretically find the factors of large integers much faster than any known classical technique, but that’s pretty vague. Complexity theory helps us characterize that difference in greater detail: We can instead say “Shor’s algorithm finds the factors of large integers in polynomial time, while the best known classical algorithms require super-polynomial time.”
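To make that claim concrete, it helps to see where the quantum speedup actually enters. The short Python sketch below (the helper names are our own, and this is only an illustration) walks through the classical reduction Shor’s algorithm relies on: factoring N boils down to finding the period of the function aˣ mod N. Here the period is found by brute force, which scales exponentially with the number of digits of N; the quantum heart of Shor’s algorithm is a subroutine that finds that same period in polynomial time.

from math import gcd

def period_of(a, N):
    # Brute-force the period r of f(x) = a^x mod N. This is the step that
    # Shor's algorithm replaces with a polynomial-time quantum subroutine.
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_period(N, a=7):
    # Recover nontrivial factors of N from the period of a^x mod N,
    # using the classical number-theoretic reduction behind Shor's algorithm.
    g = gcd(a, N)
    if g != 1:
        return g, N // g                     # lucky guess: a already shares a factor with N
    r = period_of(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                          # this choice of a fails; retry with another a
    p = gcd(pow(a, r // 2, N) - 1, N)
    return p, N // p

print(factor_from_period(15))                # prints (3, 5)

Everything above runs instantly for small numbers; the point of Shor’s algorithm is that only the period-finding step needs a quantum computer, and that is exactly the step that becomes intractable classically as N grows.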

Polynomial time and exponential time are two examples of “time complexities,” which bound the number of elementary computational steps required to solve different kinds of problems. Time complexity allows us to see how the number of required steps increases as a problem grows in size.

For example, a polynomial-time algorithm might have a time complexity described by a function like n². If we feed that algorithm a problem of size n=5, we know we can complete the problem in just 25 steps (5²=25). If n=10, we need 100 steps. On the other hand, an exponential-time algorithm might have a time complexity like 2ⁿ. This means that if n=5 we’ll need 32 steps (2⁵=32), but if n=10 then we already need 1,024 computational steps to solve the problem. This is why polynomial-time algorithms are preferred over exponential-time algorithms: the number of steps they require scales much less quickly.
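To see how quickly that gap opens up, here’s a short, purely illustrative Python snippet that tabulates the two step counts side by side:

# Compare how polynomial (n^2) and exponential (2^n) step counts grow with problem size.
for n in (5, 10, 20, 40):
    print(f"n = {n:>2}:  n^2 = {n**2:>6,}   2^n = {2**n:>16,}")

By n=40, the polynomial-time algorithm needs 1,600 steps, while the exponential-time one needs more than a trillion.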

We use time complexities to organize the different kinds of problems we can solve with a computer — from basic multiplication to theoretical problems we can hardly imagine — into “complexity classes.”

There is an endless array of complexity classes, and many of them overlap. Some of the most famous include P, which is the class of all problems that a classical computer can solve in polynomial time, and NP, which is the class of all problems whose solutions can be checked by a classical computer in polynomial time, even if we don’t know how to find those solutions in polynomial time. Quantum complexity theory — the branch of complexity theory concerned with quantum computing — also gives us BQP, which is the class of all problems a quantum computer can solve in polynomial time with bounded error (conventionally, the algorithm must return the correct answer with probability at least 2/3). BQP includes all of P, as well as some special problems like integer factorization, which is known to lie in both BQP and NP but is widely believed to sit outside P.
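The asymmetry in NP’s definition (easy to check, apparently hard to solve) is easy to see in code. Here’s a small, illustrative Python sketch of our own built around subset sum, a classic NP problem: given a list of numbers, is there a subset that adds up to a target value? Verifying a proposed subset takes polynomial time, while the obvious way to find one tries up to 2ⁿ subsets.

from itertools import combinations

def check(numbers, target, subset):
    # Verification is fast: confirm the subset really comes from the list
    # and sums to the target. This is a polynomial-time check.
    pool = list(numbers)
    for x in subset:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(subset) == target

def solve(numbers, target):
    # Search is (apparently) hard: brute force tries up to 2^n subsets.
    for k in range(len(numbers) + 1):
        for subset in combinations(numbers, k):
            if sum(subset) == target:
                return subset
    return None

numbers = [3, 34, 4, 12, 5, 2]
print(solve(numbers, 9))           # (4, 5), found only after trying many subsets
print(check(numbers, 9, (4, 5)))   # True, verified almost instantly

If someone hands you a candidate subset, check answers in a flash no matter how long the list is; nobody knows a general way to make solve comparably fast, and that gap is exactly the question discussed below.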

For more background on complexity theory in quantum computing, and on complexity classes like “NP-complete” and “PSPACE” depicted in the image above, be sure to read our previous post, “What Can a Quantum Computer Actually Do?”

Complexity classes like these underlie some of the most intriguing mysteries in all of computer science. For example, the diagram above indicates that P is just a small portion of NP. Most theorists believe that to be the case, but it’s also possible that P and NP are the same — meaning that there are polynomial-time algorithms for solving any problem whose solution can be checked in polynomial time; we just haven’t found them all yet.

This question is known as P versus NP, and Scott Aaronson says it’s one of the things that first drew him to complexity theory.

“When I was 14 or 15, I learned about the P versus NP problem and it just completely entranced me,” he said. “I probably was not the first nor the last teenager to learn about P versus NP and have the fantasy that I’m going to solve this.” Aaronson says it didn’t take long before he was disabused of that notion.

Complexity theory gives us a framework for thinking systematically about which computational problems go into which complexity classes, and for investigating the relationships between those classes. But Aaronson says complexity theory doesn’t just impact the way we think about computational problems; it also affects the way we think about computing itself. And when it comes to quantum computing, he credits complexity theory for furnishing key insights that led theorists to establish a formal theory of quantum computation in the first place.

Bridging the gap between quantum theory and quantum computation

Quantum computers were arguably first proposed by physicist Richard Feynman over four decades ago, when he suggested them as a means of simulating physical systems. Some may be impressed to learn that researchers were thinking about quantum computing that long ago, but Aaronson thinks the birth of quantum computational theory was long overdue.

That’s because quantum mechanics — the backbone of quantum computation — had existed for more than 50 years prior to Feynman’s proposal, originating in the mid-1920s. Classical computability theory emerged soon after in the 1930s, with Alan Turing first describing his Turing machine in 1936.

“So…why did it take so long for anyone to come up with the idea of quantum computing?” Aaronson said. “Why did it take until the early 1980s, basically, for anyone to think of that?”

It’s possible this gap was caused in part by the early deaths of thinkers like Turing and John von Neumann — the latter of whom was crucial in developing a thorough mathematical framework for quantum mechanics. Both figures died before 1960, and Turing lived to be just 41 years old. “Maybe they would have gotten around to [quantum computing] if they had lived longer,” Aaronson said.

But Aaronson thinks there’s another reason no one proposed the concept of quantum computing prior to the 1980s.

“I think a key part of the answer to that question is that, until the 60s and 70s, people did not have a clear concept of complexity theory,” Aaronson said. “This distinction between what is solvable in polynomial time and what requires exponential time is a really fundamental distinction — at least as fundamental as the distinction between what is computable and what is uncomputable… You really need it to be second nature before you even ask the question of how does quantum mechanics help with computation.”

The most fundamental computer

The distinction between polynomial and exponential time complexities wasn’t the only concept researchers needed in order to develop a theory of quantum computation. Aaronson says they also needed to settle a fundamental question about the relationship between complexity theory and reality itself. Quantum computing harnesses the laws of quantum mechanics to perform computations, but it took a key insight in complexity theory to see why quantum mechanics should have anything to do with computation at all.

As evidence of this disconnect, Aaronson pointed to a 2011 New Yorker profile of physicist David Deutsch, widely regarded alongside Richard Feynman as one of the founders of quantum computational theory.

Speaking to the New Yorker, Deutsch recalled an exchange with IBM Research Fellow Charlie Bennett that played a crucial role in helping him establish a formal theory of quantum computation. At the time, Deutsch believed the then-nascent field of computational complexity offered little value. He argued that complexity theory could only describe a problem’s complexity relative to a particular computer system — not in any fundamental, universal way.

In other words, “Deutsch was pooh-poohing complexity theory, and saying that it all just depends on your model of computation,” Aaronson said.

Bennett countered that there was a fundamental, universal computer that complexity theory could describe: physics. After all, computers are physical systems, and their capabilities are governed by the laws of physics. This would indicate that complexity theory isn’t just limited to describing the fundamental properties of specific computer systems, but could also serve to describe the most fundamental properties of the universe, and of computation itself.

This led Deutsch to an important insight: The problem with complexity theory wasn’t that it was limited to describing particular computer systems; it was just that no one had yet described it in its most fundamental terms. Alan Turing had intended for the Turing machine to serve as a “universal” model of computation — a model that described all possible computing devices, and which could serve as a basis for disciplines like complexity theory. However, in truth, he had only arrived at an approximation of universal computation based on the laws of classical physics. A truly universal model of computation would be one based on the most fundamental laws of nature — quantum mechanics.

This insight inspired Deutsch to develop the quantum Turing machine, a formalized model of quantum computation he would outline in an influential 1985 paper. By establishing a truly universal model of computation, Deutsch helped to demonstrate that complexity theory had a much deeper meaning and ultimately much broader applications than anyone had yet realized.

“Deutsch cited that conversation [with Bennett] as his jumping off point for inventing the quantum Turing machine,” Aaronson said. “So why is complexity theory important for quantum computation? I mean, it’s been an inextricable part of it from the very beginning.”

Complexity theory and the future of quantum algorithms

Deutsch’s quantum Turing machine was the first precise mathematical model of a physical quantum computer. His work played a foundational role in the landmark 1993 paper by computer scientists Ethan Bernstein and Umesh Vazirani, “Quantum Complexity Theory,” which defined quantum complexity classes like BQP and several other key quantum computing concepts we still use today.

The Bernstein-Vazirani paper helped formalize quantum complexity theory as a branch of computational complexity in much the same way that Deutsch’s work helped formalize quantum computation as a branch of computer science. This set the stage for the very first quantum algorithms — including Shor’s algorithm, which was published in 1994, just a year after Bernstein and Vazirani’s quantum complexity theory paper.

“Shor’s algorithm builds on an earlier quantum algorithm called Simon’s algorithm,” Aaronson said, “Simon in turn is building on the paper by Bernstein and Vazirani… So, Shor’s algorithm came about a year after people defined quantum polynomial time as a complexity class and started really thinking about this systematically.”
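Readers who want something hands-on can try the hidden-string problem from that paper, now usually called the Bernstein-Vazirani problem, on a simulator. Below is a minimal sketch written against a recent version of Qiskit; it assumes the qiskit and qiskit-aer packages are installed, and exact import paths can differ between versions. The payoff is that a single oracle query reveals an n-bit hidden string that would take n classical queries to learn.

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

secret = "1011"                 # the hidden bit string the algorithm will recover
n = len(secret)

qc = QuantumCircuit(n + 1, n)
qc.x(n)                         # put the ancilla qubit in |1>
qc.h(range(n + 1))              # Hadamards on every qubit
for i, bit in enumerate(reversed(secret)):
    if bit == "1":
        qc.cx(i, n)             # oracle: CNOTs encode the secret string via phase kickback
qc.h(range(n))
qc.measure(range(n), range(n))

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)                   # expect {'1011': 1024}: one oracle call is enough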

Quantum computing began its long road toward practical application in 2001, when researchers at IBM’s Almaden Research Center became the first to run a simplified version of Shor’s algorithm on real quantum hardware, using a seven-qubit NMR device to factor the number 15. Since then, quantum computing technology has advanced tremendously, and the advent of accessible, cloud-based quantum computing has empowered researchers to go beyond pure theory and begin running interesting experiments on quantum hardware.

However, even as the quantum community shifts more and more of its attention to finding real-world applications for quantum computers, Aaronson says quantum complexity theory still has much to contribute to the field. For one thing, it can help us better understand just how much work we have to do before we achieve practical quantum advantage.

“To run anything on a fault tolerant quantum computer at all, you’re going to be paying an enormous overhead, right, at least with any of the currently known schemes,” Aaronson said. “So whatever quantum speed-up you’re getting, it has to be immense enough to cancel that overhead that you’re paying just to run a full-power quantum computation at all.”

Complexity theory helps us understand what we need to achieve in terms of error correction and other protocol overhead to realize different kinds of quantum speedups. “The ultimate goal of quantum computing is to use it for valuable, real-world applications,” Aaronson said. “But with all the hype that surrounds this field, we need complexity theory to keep us honest about how much work we still have to do before we get there.”
