# What are quantum computers good for?

*Spoiler: they’re not meant to compute “1+1=2”.*

In the last 42 years, quantum computing has gone from theoretical speculation to the implementation of machines that can solve problems beyond the reach of classical means. With recent attacks dismissing the whole field as mere science fiction, in particular the naïve criticism that today’s quantum computers “can’t even properly add 1 + 1”, a quick reminder of what quantum computers can actually do is in order. On the way, we’ll review more than 50 articles that made the history of quantum computing.

## From science fiction to experiment

In 1980, Benioff [1] takes the abstract definition of a computer and makes it physical: he designs a quantum mechanical system whose time evolution encodes the computation steps of a given Turing machine. In retrospect, this may be taken as the first proof that quantum mechanics can simulate classical computers. The same year, Manin [2] looks in the opposite direction: he argues that it should take exponential time for a classical computer to simulate a generic quantum system. Feynman [3, 4] comes to the same conclusion and suggests a way to simulate quantum mechanics much more efficiently: building a quantum computer!

So what are quantum computers good for? Feynman’s intuition gives us a first, trivial answer: at least quantum computers could simulate quantum mechanics efficiently. Deutsch [5] makes the question formal by defining quantum Turing machines and the circuit model. Deutsch and Jozsa [6] design the first quantum algorithm and prove that it solves *some* problem exponentially faster than any classical *deterministic* algorithm (a classical *randomised* algorithm solves the problem in constant time with high probability). Simon [7] improves on their result by designing a problem that a quantum computer can solve exponentially faster than any classical algorithm. Deutsch-Jozsa and Simon relied on oracles (i.e. black boxes that can solve a certain problem in one step) and on promises (the input is guaranteed to satisfy a certain property, which may be hard to check), and their problems have little practical use. However, they inspired Shor’s algorithm [8] for prime factorisation and discrete logarithm. These two problems are believed to require exponential time on a classical computer, and their hardness is at the basis of the public-key cryptography schemes currently used on the internet.

In 1997, Grover provides another application for quantum computers: “searching for a needle in a haystack” [9]. Given a function f : X → {0, 1} and the promise that there is a unique needle x ∈ X in the haystack with f(x) = 1, Grover’s algorithm finds x in a number of steps proportional to the square root of n = |X|, quadratically faster than the optimal linear-time classical algorithm. Grover’s algorithm may be used to brute-force symmetric cryptographic keys twice as big as what is possible classically [10]. It can also be used to obtain quadratic speedups for the exhaustive search involved in the solution of NP-hard problems such as constraint satisfaction [11]. Independently, Bennett et al. [12] prove that Grover’s algorithm is in fact optimal, adding evidence to the conjecture that quantum computers cannot solve these NP-hard problems in polynomial time. Chuang et al. [13] give the first experimental demonstration of a quantum algorithm, running Grover’s algorithm on two qubits.
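The quadratic speedup is easy to check on a classical state-vector simulation: start from the uniform superposition, and each Grover iteration flips the sign of the needle’s amplitude, then reflects the state about the uniform superposition; after roughly (π/4)√n iterations the needle dominates. A minimal numpy sketch (the qubit count and marked index below are arbitrary choices for illustration):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm on a state vector of size n = 2**n_qubits."""
    n = 2 ** n_qubits
    state = np.full(n, 1 / np.sqrt(n))        # uniform superposition |s>
    oracle = np.ones(n)
    oracle[marked] = -1                       # oracle flips the needle's sign
    n_iter = int(np.pi / 4 * np.sqrt(n))      # ~ (pi/4) * sqrt(n) iterations
    for _ in range(n_iter):
        state *= oracle                       # reflection about the needle
        state = 2 * state.mean() - state      # diffusion: reflection about |s>
    return int(np.argmax(state ** 2)), float(state[marked] ** 2)

idx, p = grover_search(8, marked=42)          # 12 iterations over 256 items
```

For 8 qubits the simulation needs only 12 iterations to drive the needle’s measurement probability above 99%, versus an average of 128 classical queries.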

## Reading the fine print of quantum speedups

Shor’s and Grover’s discovery of the first real-world applications sparked considerable interest in quantum computing. The core of these two algorithms has since been abstracted away in terms of two subroutines: phase estimation [14] and amplitude amplification [15], respectively. Making use of both these subroutines, the HHL algorithm [16] (named after its discoverers Harrow, Hassidim and Lloyd) tackles one of the most ubiquitous problems in scientific computing: solving systems of linear equations. Given a square n by n matrix A and an n-dimensional vector b, we want to find a vector x such that A x = b. Under some assumptions on the sparsity and the condition number of A, HHL finds (an approximation of) x in time logarithmic in n, when a classical algorithm would take quadratic time simply to read the entries of A. This initiated a new wave of enthusiasm for quantum computing with the promise of exponential speedups for machine learning tasks such as regression [17], clustering [18], classification [19], dimensionality reduction [20] and recommendation [21]. The narrative is appealing: machine learning is about finding patterns in large amounts of data represented as high-dimensional vectors and tensors, which is precisely what quantum computers are good at. The argument can be formalised in terms of complexity theory: the HHL problem is BQP-complete, i.e. it is among the hardest problems that a quantum computer can solve with bounded error in polynomial time. Hence if quantum algorithms admit an exponential advantage at all, there must be one for HHL.

However, the exponential speedup of HHL comes with some caveats, thoroughly analysed in Aaronson’s *Read the fine print* [22]. Two of these challenges are common to many quantum algorithms:

1) the efficient encoding of classical data into quantum states and

2) the efficient extraction of classical data via quantum measurements.

Indeed, what HHL really takes as input is not a vector but a quantum state called its amplitude encoding. Either the input vector has enough structure that this quantum state admits a simple, explicit formula, as in the calculation of electromagnetic scattering cross-sections [23], or we need to assume that our classical data has been loaded onto a quantum random-access memory (qRAM) [24] that can prepare the state in logarithmic time. Not only is qRAM a daunting challenge from an engineering point of view, in some cases it also requires too much error correction for the state preparation to be efficient [25]. Symmetrically, the output of HHL is not the solution vector itself, but a quantum state from which we can measure some observable. If preparing the input state requires a number of gates exponential in the number of qubits, or if we need exponentially many measurements of the output state to compute our classical result, then the quantum speedup disappears.
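Both caveats can be made concrete with a few lines of numpy (the 4-dimensional vector below is an arbitrary example): the amplitude encoding of b stores its entries as the amplitudes of a normalised state, but measurements only return samples distributed as |bᵢ|²/‖b‖², so recovering the entries to a fixed precision takes many repeated shots.

```python
import numpy as np

# Amplitude encoding: a 4-dimensional vector b fits on 2 qubits,
# with the state's amplitudes being the entries of b normalised to unit length.
b = np.array([3.0, 1.0, 2.0, 1.0])
state = b / np.linalg.norm(b)                 # the state |b> that HHL consumes

# Measuring |b> does not reveal the entries: it yields index i with
# probability |b_i|^2 / ||b||^2, so we can only estimate them from samples.
probs = state ** 2
rng = np.random.default_rng(0)
samples = rng.choice(len(b), size=10_000, p=probs)
estimate = np.sqrt(np.bincount(samples, minlength=len(b)) / len(samples))
```

Ten thousand shots pin each amplitude down to a few percent; halving the error costs four times as many shots, which is why an exponentially accurate readout would erase the speedup.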

Shor, Grover and HHL all assume *fault-tolerant* quantum computers [26]. Indeed, any machine we can build will be subject to noise when performing quantum operations: errors are inevitable, so we need an error-correcting code that can correct them faster than they appear. This is the content of the *quantum threshold theorem* [27], which proves that fault-tolerant quantum computing is possible given physical error rates below a certain threshold. One noteworthy example of such a quantum error correction scheme is Kitaev’s toric code [28], and the general idea of topological quantum computation [29] offers the long-term hope for a quantum computer that is fault-tolerant “by its physical nature”. However, this hope relies on the existence of quasi-particles called Majorana zero-modes, which as of 2021 had yet to be experimentally demonstrated [30].

## The NISQ era and beyond

The road to large-scale fault-tolerant quantum computing will most likely be a long one. So in the meantime, what can we do with the noisy intermediate-scale quantum machines we have available today, in the so-called NISQ era [31]? Most answers involve a hybrid classical-quantum approach where a classical algorithm is used to optimise the preparation of quantum states [32].

Prominent examples include the quantum approximate optimisation algorithm (QAOA [33]) for combinatorial problems such as maximum cut, and the variational quantum eigensolver (VQE [34]) for approximating the ground state of chemical systems. These variational algorithms depend on the choice of a parameterised quantum circuit called the *ansatz*, based on the structure of the problem and the resources available. Some families of ansätze, such as instantaneous quantum polynomial-time (IQP) circuits, are believed to be hard to simulate classically even at constant depth [35], opening the door to potential near-term NISQ speedups.
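The hybrid loop itself is easy to sketch classically. Below is a toy one-qubit VQE: the “quantum” part evaluates the energy ⟨ψ(θ)|H|ψ(θ)⟩ of a single-parameter ansatz, and a crude classical grid search plays the role of the optimiser. The Hamiltonian H = Z + 0.5 X and the Ry ansatz are arbitrary choices for illustration, not taken from the references:

```python
import numpy as np

# A toy one-qubit Hamiltonian, H = Z + 0.5 X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def ansatz(theta):
    """One-parameter ansatz |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """What the quantum device would estimate: <psi(theta)| H |psi(theta)>."""
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# Classical outer loop: here a crude grid search over the single parameter.
thetas = np.linspace(0, 2 * np.pi, 1000)
best = min(thetas, key=energy)
exact = float(np.linalg.eigvalsh(H)[0])   # exact ground energy, for comparison
```

On real hardware each call to `energy` would be replaced by repeated circuit executions and measurements, and the grid search by a smarter optimiser, but the division of labour is the same.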

Although the hybrid approach first appeared in the context of machine learning [36], the idea of using parameterised quantum circuits as machine learning models went mostly unnoticed for a decade [37]. It was rediscovered under the name of quantum neural networks [38], then implemented on two qubits [39], generating a new wave of attention for quantum machine learning. The idea is straightforward:

1) encode the input vector x as a quantum state |x> via the ansatz of your choice,

2) initialise a random vector of parameters θ and encode it as a measurement M(θ), again via some choice of ansatz and

3) take the probability y = < x | M(θ) | x > as the prediction of the model.

A classical algorithm then uses this quantum prediction as a subroutine to find the optimal parameters θ* in some data-driven task such as classification.
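Steps 1) to 3) can be mocked up in a few lines of numpy. The single-qubit Ry encoding below is a hypothetical choice of ansatz, and the measurement is the projector onto the state the same ansatz prepares at angle θ; nothing here is quantum, it just spells out the linear algebra:

```python
import numpy as np

def encode(x):
    """Step 1): a hypothetical one-qubit ansatz |x> = Ry(x)|0>."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def predict(x, theta):
    """Steps 2) and 3): the prediction y = <x| M(theta) |x>, where M(theta)
    is the projector onto the state prepared by the ansatz at angle theta."""
    m = encode(theta)
    M = np.outer(m, m)               # measurement M(theta) = |m><m|
    psi = encode(x)                  # amplitude-encoded input |x>
    return float(psi @ M @ psi)      # a probability in [0, 1]
```

A classical optimiser would then adjust θ to push `predict(x, theta)` towards the training labels, exactly as in the paragraph above.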

One of the many challenges on the way to solving real-world problems with parameterised quantum circuits is the existence of *barren plateaus* [40]: with random circuits as ansatz, the probability of non-zero gradients is exponentially small in the number of qubits, and our classical optimisation gets lost in a flat landscape. One cannot help but notice the striking similarity with the vanishing gradient problem for classical neural networks, formulated twenty years earlier [41]. Barren plateaus do not appear in circuits with enough structure, such as quantum convolutional networks [42]; they can also be mitigated by structured initialisation strategies [43]. Another direction is to avoid gradients altogether and use kernel methods [44]: instead of learning a measurement M(θ), we use our NISQ device to estimate the distance |< x’ | x >|² between pairs of input vectors x and x’ embedded in the high-dimensional Hilbert space of our ansatz. We then use a classical support vector machine to find the optimal hyperplane that separates our data, with theoretical guarantees to learn quantum models at least as good as the variational approach [45].
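The kernel variant is even simpler to sketch: the device’s only job is to estimate the overlap |⟨x’|x⟩|², which fills a Gram matrix for a classical SVM. Reusing the same hypothetical single-qubit Ry embedding as the feature map (any ansatz would do in its place):

```python
import numpy as np

def embed(x):
    """Hypothetical feature map: the single-qubit state Ry(x)|0>."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def kernel(x1, x2):
    """The overlap |<x2|x1>|^2 that the NISQ device would estimate."""
    return float(abs(embed(x1) @ embed(x2)) ** 2)

# The Gram matrix handed over to a classical support vector machine
xs = [0.0, 0.5, 1.0, np.pi]
K = np.array([[kernel(a, b) for b in xs] for a in xs])
```

For this embedding the entries work out to cos²((x₁ − x₂)/2), a valid kernel, and K is symmetric with unit diagonal; an SVM library can consume such a matrix directly as a precomputed kernel.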

Random quantum circuits may be unsuitable for machine learning, but they play a crucial role in the quest for *quantum advantage*: the experimental demonstration of a quantum computer solving a task that cannot be solved by classical means in any reasonable time. We are back to Feynman’s original intuition: sampling from a random quantum circuit is the perfect candidate for such a task. The end of 2019 saw the first claim of such an advantage, with a 53-qubit computer [46]. The claim was almost immediately contested by a classical simulation of 54 qubits running in two and a half days [47], then in five minutes [48]. Zhong et al. [49] made a new claim with a 76-photon linear optical quantum computer, followed by another with a 66-qubit computer [50]; the authors estimate that a classical simulation of the sampling task they completed in a couple of hours would take at least ten thousand years.

Now that quantum computers are being demonstrated to solve some (artificial) problem beyond what the best classical computers are capable of, the question remains: can we make them solve some *useful* problem? Answering this question can only be a collective endeavour: academics working on foundational research and its applications, startups and corporations building quantum machines and their software, and the private sector and public institutions that fund them. The whole quantum ecosystem will need to work together for quantum computing to become a reality.

We expect this race towards quantum advantage to keep accelerating in the years to come, with an ever-growing list of applications where quantum computing has the potential to be a game changer: computational chemistry [51] and drug discovery [52], portfolio optimisation [53] and natural language processing [54], energy optimisation [55] and carbon capture [56], to name but a few. The end goal is not so much to compute “1+1” faster, but rather to push the range of problems mankind can solve to the limits of what the laws of Nature allow.

*This article is adapted from the introduction of my PhD thesis “Category theory for quantum natural language processing”, which is now available on the arXiv.*