A Hardware-Aware Approach to Improving Quantum State Tomography

Published in Qiskit · Feb 15, 2022 · 5 min read

By Sanjaya Lohani¹²*, Joseph M. Lukens³*, Daniel E. Jones⁴, Thomas A. Searles⁵, Ryan T. Glasser², and Brian T. Kirby²⁴

Perhaps the first thing we’re told when learning quantum mechanics is that you can’t directly observe quantum states. Once you attempt to measure the quantum information stored in qubits, all you get back is classical information: strings of zeroes and ones. How do we ever determine or confirm what state a quantum system produces if we can’t directly observe it? We use a technique called quantum state tomography.

Quantum state tomography (QST) attempts to recreate the whole picture using lots of smaller pieces. We can perform measurements on many identical quantum states to generate a probability distribution describing each outcome, and then we can reconstruct the original quantum state using this distribution.
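
To make that concrete, here’s a minimal single-qubit sketch in plain NumPy (not the reconstruction methods discussed below, just the basic idea): simulate repeated measurements of the Pauli X, Y, and Z observables on many copies of a state, then rebuild the state by linear inversion. All of the numbers below are illustrative.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def estimate_expectation(rho, pauli, shots, rng):
    """Simulate `shots` two-outcome measurements of a Pauli observable on rho
    and return the sample mean of the +/-1 outcomes."""
    p_plus = np.real(np.trace(rho @ (I2 + pauli) / 2))  # probability of the +1 outcome
    plus_counts = rng.binomial(shots, p_plus)
    return (2 * plus_counts - shots) / shots

# A hypothetical "unknown" state: |+> with a little depolarizing noise mixed in
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_true = 0.95 * np.outer(psi, psi.conj()) + 0.05 * I2 / 2

rng = np.random.default_rng(seed=1234)
shots = 2000
expvals = {name: estimate_expectation(rho_true, P, shots, rng)
           for name, P in [("X", X), ("Y", Y), ("Z", Z)]}

# Linear-inversion reconstruction: rho = (I + <X> X + <Y> Y + <Z> Z) / 2
rho_est = (I2 + expvals["X"] * X + expvals["Y"] * Y + expvals["Z"] * Z) / 2
print("estimated expectation values:", expvals)
print("reconstructed density matrix:\n", np.round(rho_est, 3))
```

With a finite number of shots, the statistical noise means a simple inversion like this can even return a slightly nonphysical matrix, which is one reason the reconstruction step deserves more careful treatment.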

QST is crucial for understanding how well today’s quantum computers function. We can generate a state on quantum hardware and then perform QST to see how close the output of the quantum device is to what we expected. However, QST takes a lot of computational work. Not only do we have to perform all of the required measurements, but we must then reconstruct the system’s state from those measurements. The reconstruction process scales poorly as the size of the quantum system increases. At the same time, attempting to speed this step up can compromise its accuracy. That’s why our team set out to design a quicker way to carry out QST, and we think that our method will be valuable to anyone trying to characterize IBM Quantum hardware.

There’s no getting around the fact that QST requires lots of measurements, but we saw an opportunity to make the reconstruction step more efficient. In our paper, we considered two kinds of approaches for tackling this step: quicker but potentially less accurate approaches rooted in machine learning, and slower but more accurate Bayesian approaches. Machine learning approaches front-load the most expensive computation: you train a model on many combinations of measurement distributions and the quantum states that created them. After training, the model predicts a density matrix (the mathematical object that fully describes a quantum state) for any measurement distribution you feed it. Bayesian approaches, by contrast, begin with a distribution over quantum states called the prior distribution, then use the measurement data to iteratively nudge the estimated state closer to the actual one.
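
On the machine learning side, one standard way to guarantee that whatever the model outputs is a valid quantum state is to have it predict the entries of a lower-triangular matrix L and form the density matrix as ρ = LL†/Tr(LL†). The sketch below shows only that mapping in NumPy; the network architecture itself, and whether any particular implementation uses exactly this parameterization, are omitted here.

```python
import numpy as np

def params_to_density_matrix(params, dim):
    """Map a real parameter vector (e.g., a neural network's raw output) to a
    valid density matrix via rho = L L^dagger / Tr(L L^dagger), where L is
    lower triangular. `params` holds `dim` real diagonal entries followed by
    the real and imaginary parts of the dim*(dim-1)/2 off-diagonal entries."""
    n_off = dim * (dim - 1) // 2
    diag, re_off, im_off = np.split(params, [dim, dim + n_off])
    L = np.diag(diag.astype(complex))
    rows, cols = np.tril_indices(dim, k=-1)
    L[rows, cols] = re_off + 1j * im_off
    rho = L @ L.conj().T          # positive semidefinite by construction
    return rho / np.trace(rho)    # unit trace by construction

# Example: an arbitrary parameter vector for a two-qubit (4-dimensional) state
dim = 4
rng = np.random.default_rng(0)
params = rng.normal(size=dim * dim)   # dim + 2 * dim*(dim-1)/2 = dim**2 real numbers
rho = params_to_density_matrix(params, dim)
print("trace:", np.round(np.trace(rho).real, 6))
print("smallest eigenvalue:", np.round(np.linalg.eigvalsh(rho).min(), 6))
```

Because LL† is automatically positive semidefinite and the trace is normalized away, every parameter vector maps to a legitimate density matrix, so the model never has to learn the physicality constraints explicitly.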

If we want to speed up the process of reconstructing the quantum state, maybe our best bet is to start from a better distribution of quantum states and their measurements. In other words, there may be real value in improving the dataset we use to build our machine learning training set, or the prior distribution we use to start our Bayesian method, so that it better matches the hardware we’re characterizing. Imagine trying to carve a sculpture out of wood: you might try many different types of wood before you find one with the right softness and durability to create a successful sculpture. That trial-and-error process is like performing your reconstruction without an appropriate starting distribution. If someone tells you where to start, letting you know ahead of time that a certain kind of wood makes for the best carving, you’ll finish your masterpiece more quickly. That is like beginning with a better distribution.

Some of the most popular distributions for QST are based on the Hilbert–Schmidt (HS) and Bures measures. However, the distributions that these measures create don’t mimic the properties of IBM Quantum processors. In part, that’s because these two measures generate lots of mixed states — i.e., states that can be written as a mixture of other quantum states. When executing low-depth circuits, IBM Quantum processors primarily generate nearly pure states, which can’t be decomposed into a mixture of other states. Recently, physicists The Tien Mai and Pierre Alquier introduced a new kind of distribution that we call the Mai-Alquier (MA) distribution, which can, depending on input parameters, generate a higher percentage of pure or nearly pure states. This distribution is easier to construct than the others and gives us more control over the kinds of random states it generates. For example, if we can estimate the percentage of pure states generated by a piece of IBM Quantum hardware, we can use this to concentrate the states generated by the MA distribution around that purity.

The purity of states drawn from the MA distribution compared with states drawn from the HS and Bures distributions. The left two plots compare the three distributions to the states produced by a four-qubit circuit with no gates; the right two plots compare them to a distribution of states created by an arbitrary four-qubit circuit.
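
As a rough sketch of how such an ensemble can be sampled, the snippet below mixes a handful of Haar-random pure states with Dirichlet-distributed weights; the concentration parameter alpha controls how pure the samples tend to be. The specific values of alpha and the number of pure states in the mixture are illustrative choices, not necessarily the settings used in the paper.

```python
import numpy as np
from qiskit.quantum_info import DensityMatrix, random_statevector

def sample_ma_state(num_qubits, num_pure, alpha, rng):
    """Sample a mixed state as a Dirichlet-weighted mixture of Haar-random
    pure states. Smaller alpha concentrates the weight on a few pure states
    (higher-purity samples); larger alpha pushes the samples toward the
    maximally mixed state."""
    dim = 2 ** num_qubits
    weights = rng.dirichlet([alpha] * num_pure)
    rho = np.zeros((dim, dim), dtype=complex)
    for w in weights:
        psi = random_statevector(dim).data          # Haar-random pure state
        rho += w * np.outer(psi, psi.conj())
    return DensityMatrix(rho)

# Draw a few four-qubit samples and look at their purity Tr(rho^2)
rng = np.random.default_rng(7)
for alpha in (0.1, 1.0):   # illustrative values only
    purities = [sample_ma_state(4, num_pure=16, alpha=alpha, rng=rng).purity().real
                for _ in range(5)]
    print(f"alpha = {alpha}: purities = {np.round(purities, 3)}")
```

Tuning alpha (together with the number of pure states in the mixture) is how you would concentrate the sampled ensemble around the purity you expect from the hardware you’re characterizing.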

We put the MA distribution to the test against distributions built from the HS and Bures measures, using all three to drive both the machine learning and Bayesian approaches to quantum state tomography. For the machine learning approach, we found that an MA distribution tuned to the average purity of the device returned higher-accuracy results than the other two distributions, which were not well matched to the device’s purity. For the Bayesian approach, using MA got us closer to the correct answer, faster. You can see more details in our paper, linked here, and in some of the plots above.
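
For reference, the “accuracy” of a reconstruction is commonly quantified by the quantum fidelity between the reconstructed state and the target state, which Qiskit’s quantum_info module computes directly. The states in the snippet below are placeholders, just to show the comparison step.

```python
import numpy as np
from qiskit.quantum_info import DensityMatrix, Statevector, state_fidelity

# Placeholder states, just to show the comparison step
target = Statevector.from_label("++")                     # ideal two-qubit |+>|+>
noisy = 0.97 * np.outer(target.data, target.data.conj()) + 0.03 * np.eye(4) / 4
reconstructed = DensityMatrix(noisy)                      # stand-in for a QST output

print("fidelity:", state_fidelity(target, reconstructed))
```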

Quantum computing is an emerging field, and QST has plenty of room for improvement and change. It’s entirely possible that we will figure out how to get even better at tuning the MA distribution to our problem, allowing us to arrive at a result faster or with higher accuracy. Or perhaps other custom distributions will make the task even easier.

We’ve put together a Jupyter notebook for you to try out machine-learning-based QST on your own using the MA distribution to generate your training set. We hope you’ll try it out and help us to push this field forward.

This research was completed as part of the IBM-HBCU Quantum Center and the C2QA | Co-design Center for Quantum Advantage.

Author affiliations:

  1. IBM-HBCU Quantum Center, Howard University, Washington, D.C. 20059, USA
  2. Tulane University, New Orleans, Louisiana 70118, USA
  3. Quantum Information Science Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
  4. United States Army Research Laboratory, Adelphi, Maryland 20783, USA
  5. University of Illinois Chicago, Chicago, Illinois 60607, USA

*These authors contributed equally to this work.
