Unravelling quantum matter more efficiently with quantum computing

Quantinuum researchers put forward a more efficient quantum algorithm to study quantum systems at a finite temperature — with applications in machine learning, optimization, and material simulation.

Mattia Fiorentini
Cambridge Quantum
6 min read · Aug 15, 2022



By Mattia Fiorentini and Luuk Coopmans

Teaser

Quantum computers are expected to be naturally better than classical computers at studying quantum matter. However, when a quantum system is coupled to the environment at a finite temperature, things get complicated. It all boils down to how to handle Gibbs states and compute their properties efficiently. Scientific preprint: https://arxiv.org/abs/2206.05302

Introduction: why do Gibbs states matter?

Gibbs states describe quantum systems in thermal equilibrium with the environment.

Estimating the properties of a Gibbs state accurately and efficiently is not only important for quantum science but also for many applied computational problems: in industrial optimization, it is a pivotal step to achieving a quantum speedup in semi-definite programs; for generating high-quality synthetic data, it enables the scalable training of quantum Boltzmann machines; and for crafting high-performance materials, it provides new tools to study elusive but critical quantum effects at finite temperature — see for example the case of high-temperature superconductivity.

Preparing Gibbs states with current quantum computers is a daunting task. This is due to the dual nature of such states, at once classical and quantum: on the one hand, quantum computers naturally encode the exponentially growing space of pure quantum states, which classical computers cannot do; on the other hand, Gibbs states are a classical mixture of pure quantum states, due to the effects of temperature and the environment, so they are not pure states themselves.

Preparing mixed states requires additional qubits, called ancillae, which, roughly speaking, account for the effect of the environment on the quantum system. This qubit overhead can be substantial: known methods such as purification may double the number of qubits compared with what the same quantum system would need in a pure state, that is, at zero temperature.

While the additional qubit requirements may not matter that much for future generations of quantum computers, qubits are a precious resource today: current quantum computers could help to perform useful calculations with Gibbs states, but is there a better way of utilizing the scarce resources that we have available right now?

Avoiding overcomplications with pure thermal shadows

In recent work, Quantinuum’s quantum machine learning researchers introduced an efficient representation of the Gibbs state that we call pure thermal shadows: it allows a quantum computer to obtain many Gibbs expectation values with fewer measurements.

Pure thermal shadows avoid the explicit preparation of the Gibbs state, and consequently, hardware requirements are significantly reduced. For comparison, the purification method requires twice the number of qubits.

We prove that, for large system sizes, the classical shadows of a pure thermal state have the same expectation values as the Gibbs state: the pure state saves us qubits, and the classical shadows reduce the measurement shots.

This new algorithm combines quantum signal processing with classical shadow tomography and random state preparation: this combination conveniently leads to a further reduction of hardware requirements, such as the depth of some circuits and the number of shots, compared to the existing literature.

An algorithm that can work on upcoming quantum computers

The algorithm can be summarized as follows:

Step 1

Step 1: a random unitary is applied to our n-qubit system, preparing a random pure state. The unitary is a shallow 2-design.

The first step is the preparation of a random pure quantum state. We show that a 2-design with polynomial depth is sufficient for our purposes. This improves on previous proposals that required exponential depth.
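As a rough classical illustration of this step (not the paper's circuit construction), the random pure state can be mimicked by Haar sampling; in the algorithm itself, a shallow 2-design circuit plays this role far more cheaply. A minimal NumPy sketch, with the hypothetical helper name `random_pure_state`:

```python
import numpy as np

def random_pure_state(n_qubits, rng):
    # Classical stand-in for Step 1's shallow 2-design circuit:
    # sample a Haar-random pure state by QR-decomposing a complex
    # Gaussian matrix and taking the first column of the unitary.
    dim = 2 ** n_qubits
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases -> Haar
    return q[:, 0]                             # first column = U|0...0>

rng = np.random.default_rng(0)
psi = random_pure_state(3, rng)
print(np.vdot(psi, psi).real)  # normalized state: value is 1.0
```

On hardware, exact Haar sampling would need exponential depth; the point of the paper's construction is that a polynomial-depth 2-design suffices.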

Step 2

Step 2: the random pure state is imaginary time-evolved by the system Hamiltonian into a thermal pure quantum state with quantum signal processing.

The second step is the preparation of the thermal pure quantum (TPQ) state: it is obtained from the imaginary time evolution of the random pure state by the system Hamiltonian. Using quantum signal processing, we get a suitable quantum circuit implementation.
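A toy numerical check of this step, using exact linear algebra on a hypothetical 2-qubit Hamiltonian in place of the quantum signal processing circuit: averaged over random pure states, the imaginary-time-evolved state reproduces Gibbs expectation values.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-qubit Hamiltonian: a ZZ coupling plus transverse fields.
rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))
beta = 1.0
O = np.kron(X, I2)                       # observable: X on qubit 0

rho = expm(-beta * H)
rho /= np.trace(rho)                     # exact Gibbs state
gibbs_val = np.trace(rho @ O).real

K = expm(-beta * H / 2)                  # imaginary-time propagator
num = den = 0.0
for _ in range(4000):                    # average over random pure states
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    num += np.vdot(psi, K @ O @ K @ psi).real
    den += np.vdot(psi, K @ K @ psi).real
tpq_val = num / den                      # TPQ-based estimate
print(gibbs_val, tpq_val)                # the two values agree closely
```

Here `e^{-beta H / 2}` is applied exactly; in the algorithm, quantum signal processing implements (a polynomial approximation of) this non-unitary operator as a quantum circuit.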

Step 3

Step 3: a pure thermal shadow is constructed from a thermal pure quantum state by the application of a random Clifford circuit and measurement in the computational basis.

In the last step, we construct classical shadows of the TPQ state from outcomes of randomized measurements — we call them pure thermal shadows (PTS). This can be implemented with a shallow Clifford circuit, V, followed by a measurement in the computational basis and some classically efficient post-processing steps. Crucially, the PTS become equal to the shadows of the true Gibbs state as the system size increases.
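As a simplified classical illustration of the shadow step, the sketch below uses single-qubit random Pauli measurements (rather than the Clifford circuits of the paper): each shot measures in a random basis, and the classically inverted snapshot is an unbiased estimator of the state.

```python
import numpy as np

# Single-qubit random-Pauli classical shadows (a simplified stand-in for
# the paper's random Clifford circuit V). Snapshot inversion for this
# ensemble: M^{-1}(A) = 3A - Tr(A) I, so each snapshot is
# 3 U^dag |b><b| U - I.
rng = np.random.default_rng(2)
I2 = np.eye(2, dtype=complex)
Hg = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # X basis
S = np.diag([1.0, 1j])
bases = [Hg, Hg @ S.conj().T, I2]        # rotate into X, Y, Z bases

theta = 0.3
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
exact = np.vdot(psi, Z @ psi).real       # = cos(2*theta)

n_shots, acc = 20000, 0.0
for _ in range(n_shots):
    U = bases[rng.integers(3)]           # random measurement basis
    p0 = abs((U @ psi)[0]) ** 2          # Born rule in rotated basis
    b = 0 if rng.random() < p0 else 1
    eb = np.zeros((2, 1)); eb[b, 0] = 1.0
    snap = 3 * U.conj().T @ (eb @ eb.T) @ U - I2   # inverted snapshot
    acc += np.trace(Z @ snap).real       # estimate <Z> from snapshots
est = acc / n_shots
print(exact, est)                        # agree to within shot noise
```

The classical post-processing (the inversion and averaging) is exactly what makes shadows cheap: many expectation values can be extracted from the same pool of snapshots.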

The success of this algorithm is guaranteed by a mathematical proof of equivalence between the expectation values of Gibbs states and thermal pure quantum states, up to an error that falls off exponentially with system size. It is provided in the following theorem, which shows that only on the order of log(M) PTS are needed to predict M linear properties of the Gibbs state:
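The theorem statement itself appears in the preprint; as a rough guide only (not a verbatim statement of Theorem 1), sample-complexity bounds of this kind in classical shadow tomography take the form

$$
N \;=\; \mathcal{O}\!\left(\frac{\log M}{\epsilon^{2}}\,\max_{1\le i\le M}\lVert O_i\rVert_{\mathrm{shadow}}^{2}\right),
$$

where $N$ is the number of shadows needed to estimate the $M$ expectation values $\mathrm{Tr}[O_i\rho]$ to additive error $\epsilon$, and $\lVert\cdot\rVert_{\mathrm{shadow}}$ is the shadow norm of the measurement ensemble.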

This is the main theoretical contribution of the paper.

Implementation and results with quantum signal processing

Resorting to quantum signal processing to prepare the thermal pure state gives us a way forward towards implementing the algorithm in future gate-based quantum hardware — which will be less noisy and capable of executing deeper circuits.

For the time being, we can verify that our algorithm works by simulating all the circuits for a couple of relevant use cases.

The max absolute error between the expectation values obtained from the pure thermal shadows and the true expectation (y-axis) is plotted in panel (a), left, as a function of the QSP steps (x-axis) and inverse temperature (colours), and, in panel (b), right, against the number of shadows/number of shots (x-axis), for different state preparation methods (exact Gibbs state with classical shadows, blue; exact TPQ with PTS, orange; finite depth QSP TPQ with PTS).

Using a state-vector simulator, we first validate our framework on the well-known Heisenberg-XXZ model. In the figure above, we verify that the PTS yield expectation values in excellent agreement with what the shadows of the Gibbs state predict.

Then we consider an exciting use case, the training of a Quantum Boltzmann Machine (QBM): this task is intractable for classical computers and important for industry-relevant applications of quantum machine learning in generative modelling. It also serves as further proof of the applicability of our algorithm to very generic systems described by arbitrary fully-connected quantum Hamiltonians.

The learning curve during QBM training, where the required Gibbs state expectation values are computed with three different methods: exact (blue), TPQs (orange), and PTS (green). The loss measure (main panel), the quantum relative entropy, decreases monotonically during the training process. By virtue of Theorem 1, we expect the training curve for the PTS (green) to come closer to the training based on exact Gibbs state expectation values (blue) as the system size increases.

Here (figure above), we show that a fully-connected QBM can be efficiently trained to model a target XXZ Gibbs state: the training is more sample efficient, thanks to the PTS, compared to hybrid variational approaches, which are non-scalable with system size. In fact, thanks to Theorem 1, we expect that bigger system sizes will help convergence to better models.
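To make the training loop concrete, here is a toy NumPy sketch of a 2-qubit QBM with hypothetical couplings, using exact linear algebra; in the paper, the model-side expectation values in the gradient would instead be estimated with pure thermal shadows. It relies on the exact identity that the gradient of the quantum relative entropy with respect to a coupling is a difference of data and model expectation values.

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-qubit QBM: H(w) = sum_a w_a P_a, trained by gradient descent
# on the quantum relative entropy S(target || model). The gradient is
#   dS/dw_a = <P_a>_target - <P_a>_model,
# exact even for non-commuting terms (Duhamel's formula under the trace).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)
terms = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]

def gibbs(w):
    rho = expm(-sum(wa * P for wa, P in zip(w, terms)))
    return rho / np.trace(rho)

target = gibbs(np.array([1.0, 0.4, -0.3]))   # hypothetical "data" state
data_means = [np.trace(target @ P).real for P in terms]

w = np.zeros(3)
for step in range(500):
    model = gibbs(w)
    grad = np.array([m - np.trace(model @ P).real
                     for m, P in zip(data_means, terms)])
    w -= 0.2 * grad                          # gradient descent step
print(w)   # approaches the target couplings [1.0, 0.4, -0.3]
```

Since the loss is convex in the couplings, the toy model recovers the target; the scalability question addressed in the paper is how to estimate the model-side terms on a quantum computer without preparing the Gibbs state explicitly.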

Output of the QBM trained with our algorithm on the salamander retina dataset. S are the possible configurations of 8 neurons (on/off); we show the first 21 for relevance. q(S) are the relative frequencies of each state: empirically measured (green), trained with the exact gradient (blue), which is intractable for larger systems, and trained with the PTS (orange), which can scale to large systems.

In addition, here (figure above), we show results where a QBM is trained to model a classical salamander retina dataset: the learned quantum model generates samples that closely match the empirical data distribution.

We envision it might be possible to work towards implementing the new algorithm on early-stage quantum hardware capable of quantum signal processing, which, before our work, seemed a much more distant possibility.

We congratulate Luuk Coopmans, Yuta Kikuchi, and Marcello Benedetti on the work done, and Mattia Fiorentini for overseeing this project.

Link to the scientific preprint: https://arxiv.org/abs/2206.05302
