HHL Algorithm: Solving Systems of Linear Equations

Jaden Luo, Jacob McCarran, Angelina Zhang

History

Quantum computing is quickly becoming one of the most interesting applications of the intersection of mathematics, physics, and computer science. The interest in quantum computing stems from the potential of quantum computers to solve certain problems faster than our best current-day classical algorithms, and in some cases to solve problems that classical computers have never been able to solve efficiently.

Many quantum computing breakthroughs have happened here at MIT. The famous Shor factoring algorithm was developed in 1994 by Peter Shor, now an MIT professor, and today MIT researchers are constantly pushing the limits of quantum computing. One such breakthrough, which we will be discussing today, comes from Aram Harrow, Avinatan Hassidim, and Seth Lloyd. The algorithm these researchers developed is called the HHL algorithm, and it uses quantum mechanics to solve a problem that many students learn how to do in grade school: solving a system of linear equations.

Overview

Solving a system of linear equations, Ax = b, is a problem that arises often in science and engineering. In today’s world, the data sets that make up systems of equations are gargantuan: oftentimes, terabytes of data need to be processed in order to find a solution. Solving n linear equations in n unknowns on a classical computer takes time at least on the order of n, since even writing down the full solution vector requires n steps. Another important distinction: when solving systems of linear equations, you sometimes do not need all n components of the solution vector x. Rather, you might be interested in computing some function of that solution. This is where the HHL algorithm comes in. Using a quantum computer, the HHL algorithm can approximate a function of the solution vector x in time logarithmic in n, an exponential improvement over the classical dependence on n.

This exponential speedup has drastic implications for science and engineering. Systems of linear equations arise when discretizing partial differential equations and when solving the least-squares problem in machine learning. However, this promising algorithm has a few constraints, which we mention now. First, our matrix A must have a low condition number 𝜅, where the condition number measures how sensitive the solution is to perturbations or roundoff errors in the input. Second, the matrix must be s-sparse, meaning that there can be at most s non-zero entries per row. Additionally, matrix A needs to be Hermitian; if A is not Hermitian, we will need to convert the problem to one with a Hermitian matrix, as sketched below. Lastly, as previously mentioned, the HHL algorithm does not output the solution vector x itself, as outputting all n components would take O(n) time. Instead, the HHL algorithm approximates a function of the solution vector in the form ⟨x|M|x⟩, where M is a measurement operator.
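To make the Hermitian requirement concrete, below is a minimal NumPy sketch of the standard embedding trick: if A is not Hermitian, we instead solve with the Hermitian block matrix Ã = [[0, A], [A†, 0]], which satisfies Ã(0, x)ᵀ = (b, 0)ᵀ. The example matrix here is our own illustration, not one from the original presentation.

    import numpy as np

    # If A is not Hermitian, embed it in the Hermitian block matrix
    # A_tilde = [[0, A], [A^dagger, 0]] and solve the enlarged system.
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])                      # example non-Hermitian matrix
    b = np.array([3.0, 1.0])

    n = A.shape[0]
    A_tilde = np.block([[np.zeros((n, n)), A],
                        [A.conj().T, np.zeros((n, n))]])
    assert np.allclose(A_tilde, A_tilde.conj().T)   # A_tilde is Hermitian

    b_tilde = np.concatenate([b, np.zeros(n)])
    y = np.linalg.solve(A_tilde, b_tilde)           # y = (0, x)
    x = y[n:]                                       # recover the original solution
    assert np.allclose(A @ x, b)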

Finally, before we run through the algorithm’s steps, we show the HHL algorithm’s high-level circuit. As you can see, we have three quantum registers, all initialized to the |0⟩ state, and we also have an auxiliary qubit whose purpose will be explained a little later. The nₗ register will be used to store the eigenvalues of A (in binary), and the n_b register will contain the solution vector. With that, let’s move on to the HHL algorithm’s steps.

Steps

Preparation

  1. Represent b as a quantum state (|b⟩). This can be done by:

a. Using a linear operator B to efficiently prepare |b⟩ from an arbitrary initial quantum state, or

b. Inserting the HHL algorithm into a larger algorithm where |b⟩ is simply given as an input.

2. Use Hamiltonian simulation to transform the Hermitian matrix A into a unitary operator:

U = e^{iAt} = Σⱼ e^{iλⱼt} |uⱼ⟩⟨uⱼ|

(|uⱼ⟩: the jth eigenvector of A, λⱼ: the jth eigenvalue of A)

3. Load the quantum state |b⟩ into the n_b register; in the eigenbasis of A, we can write |b⟩ = Σⱼ bⱼ |uⱼ⟩.
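The preparation steps can be checked numerically. The sketch below is our own illustration (with an arbitrary choice of A, b, and evolution time t): it forms U = e^{iAt} with SciPy’s matrix exponential, verifies that U is unitary and shares A’s eigenvectors with eigenvalues e^{iλⱼt}, and normalizes b into the amplitudes of |b⟩.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, -1/3],
                  [-1/3, 1.0]])                     # small Hermitian example
    b = np.array([1.0, 0.0])
    t = np.pi / 2                                   # arbitrary evolution time

    U = expm(1j * A * t)                            # Hamiltonian simulation: U = e^{iAt}
    assert np.allclose(U @ U.conj().T, np.eye(2))   # U is unitary

    lam, u = np.linalg.eigh(A)                      # eigenvalues and eigenvectors of A
    # U has the same eigenvectors as A, with eigenvalues e^{i*lambda_j*t}:
    assert np.allclose(U @ u, u @ np.diag(np.exp(1j * lam * t)))

    b_state = b / np.linalg.norm(b)                 # amplitudes loaded into the n_b register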

Subroutine

4. Use Quantum Phase Estimation (QPE) to decompose |b⟩ in the eigenbasis of A. This will yield the state:

Σⱼ bⱼ |λⱼ⟩ |uⱼ⟩

(|λⱼ⟩: λⱼ represented in binary in the nₗ register)

5. Add an auxiliary qubit and rotate conditioned on |λⱼ⟩ to give:

Σⱼ bⱼ |λⱼ⟩ |uⱼ⟩ ( √(1 − C²/λⱼ²) |0⟩ + (C/λⱼ) |1⟩ )

(C = normalization constant)

6. Undo the Quantum Phase Estimation (using QPE†) to uncompute |λⱼ⟩ back to |0⟩.

7. Measure the auxiliary qubit.

a. If 1 is measured, we are left with a state that corresponds to the solution |x⟩, up to normalization:

√(1 / Σⱼ |bⱼ/λⱼ|²) Σⱼ (bⱼ/λⱼ) |uⱼ⟩

Since A⁻¹|b⟩ = Σⱼ (bⱼ/λⱼ) |uⱼ⟩, this is exactly |x⟩ up to its norm.

b. If 1 is not measured, repeat Steps 4 through 7.
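The linear algebra behind Steps 4 through 7 can be traced classically. The sketch below (our illustration, reusing the small example from above) decomposes b in A’s eigenbasis, applies the bⱼ → C·bⱼ/λⱼ weighting that the conditioned rotation and post-selection on the auxiliary qubit produce, and checks that the surviving state matches A⁻¹b up to normalization.

    import numpy as np

    A = np.array([[1.0, -1/3],
                  [-1/3, 1.0]])
    b = np.array([1.0, 0.0])
    b = b / np.linalg.norm(b)                       # the state |b>

    lam, u = np.linalg.eigh(A)
    b_j = u.conj().T @ b                            # Step 4: coefficients of |b> in the eigenbasis
    C = np.min(np.abs(lam))                         # Step 5: C <= min|lambda_j| keeps C/lambda_j <= 1
    p_one = np.sum(np.abs(C * b_j / lam) ** 2)      # Step 7: probability of measuring 1

    x_state = u @ (b_j / lam)                       # amplitudes that survive post-selection
    x_state = x_state / np.linalg.norm(x_state)     # state proportional to |x>

    x_classical = np.linalg.solve(A, b)
    assert np.allclose(x_state, x_classical / np.linalg.norm(x_classical))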

Measurement

8. With the state of the system proportional to |x⟩, perform the desired measurement with operator M to estimate ⟨x|M|x⟩.
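Numerically, this final estimate is just a quadratic form in the normalized solution. Below is a small self-contained check, with a hypothetical observable M of our choosing (here Pauli-Z):

    import numpy as np

    A = np.array([[1.0, -1/3],
                  [-1/3, 1.0]])
    b = np.array([1.0, 0.0])
    x = np.linalg.solve(A, b)
    x_state = x / np.linalg.norm(x)                 # |x>, as HHL prepares it
    M = np.array([[1.0, 0.0],
                  [0.0, -1.0]])                     # Pauli-Z observable (our choice)
    print(x_state.conj() @ M @ x_state)             # estimate of <x|M|x>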

Example

We will now run the HHL algorithm on a quantum simulator with the input matrices:

We apply the Qiskit HHL solver to these inputs, and can visualize the resulting quantum circuit below:

The squared norm is the probability of measuring 1 in the auxiliary qubit (the algorithm’s stopping condition):

To obtain the full solution vector, we first extract the relevant vector components from the simulator statevector (those in which the auxiliary qubit reads 1 and the work qubits have returned to |0⟩). To account for numerical inaccuracy, we disregard the near-zero imaginary parts. Finally, we normalize the vector and multiply by the Euclidean norm to obtain the solution.

The classical method, using Qiskit’s NumPyLinearSolver, yields a slightly different result due to estimation errors in the phase estimation step, which will be discussed in the following section.
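Since the circuit and numerical outputs are not reproduced here, the sketch below shows how such a run can be set up. It uses the since-deprecated qiskit.algorithms.linear_solvers API (removed in current Qiskit releases), and the 2×2 system is a stand-in example rather than necessarily the exact inputs above.

    import numpy as np
    from qiskit.algorithms.linear_solvers.hhl import HHL
    from qiskit.algorithms.linear_solvers.numpy_linear_solver import NumPyLinearSolver

    matrix = np.array([[1.0, -1/3],
                       [-1/3, 1.0]])                # stand-in Hermitian, well-conditioned input
    vector = np.array([1.0, 0.0])

    hhl_result = HHL().solve(matrix, vector)
    classical_result = NumPyLinearSolver().solve(matrix, vector / np.linalg.norm(vector))

    print(hhl_result.state)                         # the HHL circuit that prepares |x>
    print(hhl_result.euclidean_norm)                # ||x||, recovered from the circuit
    print(classical_result.euclidean_norm)          # classical reference value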

Error Analysis & Runtime

As mentioned above, we apply Quantum Phase Estimation to the simulated unitary e^{iAt} to estimate the eigenvalues λ of A. QPE estimates each λ with an additive error of O(1/t). If every eigenvalue satisfies λ ≥ 1/𝜅, we can bound the relative error by a constant ε by setting t = O(𝜅/ε), where 𝜅 is the condition number.

The original HHL algorithm has a runtime of O(s²𝜅² log(N)/ε) for an s-sparse N×N matrix, as we perform O(𝜅) iterations of the QPE subroutine. In 2017, Andrew Childs and collaborators improved the dependence on the precision from O(1/ε) to poly(log(1/ε)). In contrast, the traditional Gaussian elimination method has a runtime of O(N³). Thus, HHL provides an exponential speedup in N for sufficiently sparse matrices with low condition numbers.

Applications

HHL’s speedup for solving linear equations can be applied to a variety of scientific fields. Most prominently, this algorithm is impactful for machine learning and big data. Classical machine learning algorithms typically have runtimes polynomial in the size of the input data; using quantum computing and the HHL algorithm, we can manipulate high-dimensional vectors and quickly perform the algebraic operations needed to train models. Some examples of such operations include least-squares fitting, which fits a function to a set of points, and support vector machines, used for binary classification.
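As a concrete instance, least-squares fitting reduces to exactly the kind of linear system HHL targets: the minimizer of ||Ax − b||² satisfies the normal equations (AᵀA)x = Aᵀb, and AᵀA is Hermitian by construction. A small NumPy sketch (our illustration, with random data):

    import numpy as np

    # Least-squares fitting as a linear system: minimize ||Ax - b||^2 by
    # solving the normal equations (A^T A) x = A^T b.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(10, 3))                    # 10 data points, 3 fit parameters
    b = rng.normal(size=10)

    gram = A.T @ A                                  # Hermitian (symmetric) by construction
    x = np.linalg.solve(gram, A.T @ b)
    assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])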
