Everything you always wanted to know about quantum-inspired algorithms

Xanadu · May 28, 2019

By Juan Miguel Arrazola

Here’s an idea for the next Hollywood blockbuster.

The setting: A global arms race is underway. World governments and corporations have recruited an elite group of scientists, supported by millions of dollars in funding, to develop an advanced technology that will change the world as we know it. The stakes are high: the winners will gain immense wealth and power. Who will emerge victorious?

The plot: Somewhere in Texas, a university professor is suspicious: could world dominance be obtained using cleverness instead of these sophisticated machines? He meets an 18-year-old student interested in mathematics, and tasks her with this quest. After working alone for years, she sends global shockwaves by showing that what hundreds of scientists were aiming to achieve, she could do all along using only her laptop and her brain. Companies go bankrupt, governments topple, and researchers are left looking for new jobs.

Truth is stranger than fiction. The storyline above is inspired by a true story that captured the attention of the quantum computing community: the “dequantization” of quantum algorithms for linear algebra, ignited by Ewin Tang’s algorithm for recommendation systems. See Scott Aaronson’s blog post for a portrayal of the actual events. It certainly made for a good headline. But the aftermath left a state of relative confusion. How do these new quantum-inspired algorithms work? Are they any good in practice? Are quantum algorithms for linear algebra dead?

The aim of this post, which accompanies our technical paper, is to answer these questions. If you’re short on time, here’s the executive summary:

In practice, the performance of quantum-inspired algorithms is better than their complexity bounds would suggest. They can perform reasonably well compared to existing classical techniques, but only provided that stringent conditions are met: low rank, low condition number, and extremely large dimension of the input matrix. These conditions almost never occur simultaneously in practice. By contrast, there exist many practical datasets for linear algebra problems that are sparse and high-rank, precisely the type that can be handled by quantum algorithms. Whether quantum algorithms for linear algebra will ever be impactful in practice remains to be seen, but if they fail, it almost certainly won’t be because they are outperformed by these quantum-inspired algorithms.

If you’re curious to understand quantum-inspired algorithms better, don’t go anywhere! The rest of this blog post is for you. We’ll give a short and simple explanation of the algorithms; they turn out to be quite easy to understand. We’ll also comment on their performance: how they improve on previous classical methods, and when these improvements are relevant. At the end, we’ll give you a quick lesson on how to use our publicly available code to implement the algorithms yourself. Let’s start!

How do quantum-inspired algorithms work?

We’ll focus on the algorithm for solving linear systems of equations. A very similar description applies to the algorithm for recommendation systems. We are given an input matrix A and vector b, and the goal is to solve the linear system of equations Ax=b. Quantum-inspired algorithms tackle this problem in three main steps:

  1. Compute an approximate singular value decomposition (SVD) of A using the Frieze-Kannan-Vempala (FKV) algorithm. The outputs of the algorithm are approximations to the singular values and singular vectors of A.
  2. Use sampling techniques to estimate the coefficients of the solution vector x in the basis of singular vectors of A.
  3. Sample entries of the solution vector in such a way that large entries appear with high probability.

That’s really all there is to it. For recommendation systems and other linear algebra problems, the only differences are what we mean by solution vector and how its coefficients are defined. Let’s take a look at each step in more detail.
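Before diving into each step, it helps to see the exact computation that these three steps approximate. Below is a minimal numpy sketch (our illustration, not code from the paper): the coefficients in step 2 are the inner products of b with the left singular vectors, divided by the singular values, and the solution is the corresponding combination of right singular vectors.

```python
import numpy as np

# Minimal sketch of the exact, brute-force computation that the
# quantum-inspired algorithm approximates: solve Ax = b via the SVD of A.
rng = np.random.default_rng(42)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

# Full SVD: A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A)

# Coefficients of the solution in the basis of right singular vectors:
# lambda_i = (u_i . b) / sigma_i, so that x = sum_i lambda_i v_i.
lam = (U.T @ b) / s
x = Vt.T @ lam

print(np.allclose(A @ x, b))  # True: x solves the linear system
```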

The FKV algorithm

This is the core, the essence, the magic, of quantum-inspired algorithms. The idea is simple: performing a singular value decomposition of a large matrix is very hard work. Could we somehow get away with doing it for a much smaller matrix? The result of FKV shows that such laziness can indeed be rewarded if the input matrix has low rank. All we have to do is randomly select a small number of rows and columns of the large matrix and rescale them by an appropriate constant to build a smaller matrix. If we perform a singular value decomposition of this smaller matrix, it directly gives us an approximation of the singular values and a method to compute approximate singular vectors of the larger input matrix. The trick is that rows and columns should be sampled in such a way that the ones with the largest entries are selected with high probability.

If the rank of the input matrix is low, the matrix we build can be much, much smaller, so performing operations on it will take a lot less time. The price to pay is that we obtain approximations, not exact values. That’s often a worthwhile tradeoff. Crucially, this technique only works if the original matrix has low rank, otherwise too much information would be lost in shrinking to the smaller matrix.

Below is an illustration of the method. In this example, we select four rows at random, rescale them, and stack them together to form a new, fat matrix (more columns than rows). From this we select four columns at random and similarly rescale and stack them to form the final, smaller matrix.

Schematic illustration of the FKV algorithm for approximate SVDs. We randomly select rows of an input matrix A, rescale them, and stack them together to form a new matrix R. We then sample columns of R, rescale and join them together to form a smaller matrix C, whose singular values approximate those of A.
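For concreteness, here is a toy numpy sketch of this row-and-column sampling (our illustration, not the implementation from our paper). Rows and columns are drawn with probability proportional to their squared norms and rescaled so that, in expectation, the small matrix preserves the spectrum of the large one.

```python
import numpy as np

def sample_and_rescale_rows(M, num_rows, rng):
    """Sample rows of M with probability proportional to their squared
    norms, rescaling each sampled row by 1 / sqrt(num_rows * p_i)."""
    probs = np.linalg.norm(M, axis=1) ** 2 / np.linalg.norm(M) ** 2
    idx = rng.choice(M.shape[0], size=num_rows, p=probs)
    return M[idx] / np.sqrt(num_rows * probs[idx, None])

rng = np.random.default_rng(0)
# Low-rank test matrix: rank 3, dimension 500 x 500.
A = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 500))

R = sample_and_rescale_rows(A, 50, rng)      # sampled, rescaled rows of A
C = sample_and_rescale_rows(R.T, 50, rng).T  # sampled, rescaled columns of R

# The singular values of the small 50 x 50 matrix C approximate the
# (three nonzero) singular values of the 500 x 500 matrix A.
print(np.linalg.svd(A, compute_uv=False)[:3])
print(np.linalg.svd(C, compute_uv=False)[:3])
```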

The novelty of quantum-inspired algorithms lies in the two remaining steps. In the original FKV algorithm, the approximate singular values and vectors can be used to compute the coefficients and the entire solution vector in linear time, i.e., very quickly. However, for gigantic, truly enormous matrices, linear time is not fast enough. Quantum-inspired algorithms come to the rescue by providing a method to estimate coefficients and sample from the solution vector in sublinear time. This rescue, however, only helps for matrices of extremely large dimension (where linear time is too slow) that also have low rank.

Coefficient estimation

There are two ways to compute the average of many values. A first method is to add all of them together and divide the result by the total number of values. An alternative approach is to select a few values at random, then compute their average using the first method. Of course, in the second case we get only an approximation of the true average, but in many cases that is good enough and can be done in significantly less time. This is the strategy of quantum-inspired algorithms: the coefficients of the solution vector are inner products between vectors, which can be expressed as the average of a collection of numbers that depend on the entries of the vectors. Instead of directly computing this average (which takes linear time), we select a few of the values at random and compute their average instead. The question is, how many values do we need to sample to get a good approximation? It turns out that the number of samples grows with the desired precision of the approximation, the rank, and the condition number of the input matrix. In practice, this estimation method is only faster than a direct calculation of the coefficients if (i) the required precision is modest and the rank and condition number are small, and (ii) the input matrix has a colossal dimension.
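Here is a toy numpy sketch of this sampling trick for a single inner product (our illustration; the sample size here is arbitrary). Drawing indices with probability proportional to the squared entries of one vector makes the single-sample estimator unbiased for the inner product.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
a = rng.standard_normal(n)
b = a + 0.1 * rng.standard_normal(n)  # a vector correlated with a

# Draw indices with probability p_i = a_i^2 / ||a||^2 ...
norm_sq = np.dot(a, a)
idx = rng.choice(n, size=2000, p=a ** 2 / norm_sq)

# ... and average the unbiased estimator b_i * ||a||^2 / a_i.
# Its expectation is sum_i p_i * (b_i * ||a||^2 / a_i) = a . b.
estimate = np.mean(b[idx] * norm_sq / a[idx])

print(estimate / np.dot(a, b))  # close to 1: the estimate is accurate
```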

Sampling from the solution vector

Here’s a challenge. You are given a large list of numbers. The task is to sample some of them at random, with one condition: the larger the number, the more likely it is to be selected.

How would you do it?

Well, here’s a very simple strategy. First, choose some of the numbers according to whatever rule you like, for example selecting each with equal probability. Second, from the ones you have selected, keep the large ones and discard the small ones. That’s it!

In practice, it pays off to be slightly more sophisticated: we keep each number with likelihood proportional to its value, such that large numbers are kept with high probability and small numbers with low probability. This is the basis of a technique called rejection sampling, which is employed in quantum-inspired algorithms to sample from the solution vectors. Once the approximate singular value decomposition has been obtained and the coefficients have been estimated, it is possible to query any entry of the solution vector in sublinear time. We achieve the final goal of the algorithm — to sample from the solution vector — by selecting these entries at random and using the rejection sampling technique described above to sample large vector entries with high probability.
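As a toy numpy sketch (our illustration), here is rejection sampling applied to an explicit vector, returning index i with probability proportional to the squared entry; in the actual algorithm, the entries are queried from the implicitly defined solution vector instead.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
bound = np.max(x ** 2)  # an upper bound on the squared entries

samples = []
while len(samples) < 100:
    i = rng.integers(x.size)              # propose an index uniformly
    if rng.random() < x[i] ** 2 / bound:  # accept in proportion to x_i^2
        samples.append(i)

# Accepted indices are biased toward large-magnitude entries.
print(np.mean(np.abs(x[samples])), np.mean(np.abs(x)))
```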

This step of quantum-inspired algorithms scales better than coefficient estimation, so the conditions are less stringent for it to be worthwhile. However, the price is high: we typically prefer to have a full description of the entire vector instead of a sample of its large entries. As before, using sampling instead of a direct calculation only becomes worthwhile if the input matrix has an extremely large dimension.

Practical performance

In our technical paper, we implemented and tested quantum-inspired algorithms on a variety of problems. We found that they can give good answers for matrices of dimension 2^50 and rank 3; remarkable! We also observed the decay in performance as the rank and condition number are increased, and obtained reasonably low errors when applying them to portfolio optimization and movie recommendation problems. Overall, these tests support the theoretical analysis: quantum-inspired algorithms can perform well, but only provided that stringent conditions are met: low rank, low condition number, and extremely large dimension of the input matrix. These conditions seldom occur simultaneously in practice.

It is still insightful (and fun) to implement and test the algorithms. This is why we decided to make our code publicly available for anyone to use. You can find it here, on Xanadu’s GitHub page.

We designed the code with simplicity in mind. All you have to do is specify the inputs and algorithm parameters. The code will then return the requested number of samples from the solution vector. Here’s an example program:
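In rough outline, such a program could look like the sketch below. Note that the import, function, and parameter names here are illustrative placeholders, not the actual interface; see the GitHub repository for the real one.

```python
import numpy as np

# Placeholder names only -- `quantum_inspired` and `linear_system_sample`
# stand in for the repository's actual module and function.
from quantum_inspired import linear_system_sample

rng = np.random.default_rng(3)
A = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 500))  # low rank
b = A @ rng.standard_normal(500)

# Typical parameters: how many rows and columns to sample in the FKV
# step, the target rank, and how many solution-vector samples to return.
samples = linear_system_sample(A, b, rows=250, cols=250, rank=3, n_samples=50)
print(samples)
```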

Go ahead and give it a try!

Thanks for reading until the end!
