Quantum Computing & Machine Learning

Siddharth Narayanan · Published in Analytics Vidhya · 10 min read · Jan 8, 2020


Heralding a whole new era of computing, quantum computers have the potential to revolutionize a number of industries, from pharmaceutical drug discovery to financial modelling to cryptography. Quantum computing is beginning to inspire a new generation of scientists, researchers and engineers to reform information technology and computing as we know it. This article provides an overview of Google’s recent experiment with the Sycamore processor, some of the new research applying quantum computing to machine learning and, finally, an example implementation of the QSVM classifier from IBM’s Qiskit.

The power of Qubits

Traditional computers perform calculations by processing “bits” of information, with each bit holding one of two values: a 1 or a 0. A collection of eight bits, known as a byte, can store a single character, like the letter A. A quantum computer, on the other hand, processes quantum bits, or qubits, which can exist as both a 1 and a 0 simultaneously. More accurately, a qubit is a bit that carries some amplitude for being 0 and some other amplitude for being 1, rather than a state that is definitely one or the other.

This is one of the strengths of qubits: they do not have to be the all-or-nothing 1 or 0 of binary bits but can occupy states in between. This quality of “superposition” allows each qubit to take part in more than one computation path at a time, speeding up computation in a manner that seems almost miraculous. Although the final readout from a qubit is a 1 or a 0, the existence of all of those intermediate states means it can be difficult or impossible for a classical computer to perform the same calculation.
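As a minimal illustration (assuming a pre-1.0 Qiskit installation with its bundled local simulator), a single Hadamard gate puts a qubit into an equal superposition of 0 and 1; sampling it many times then returns each outcome roughly half the time:

```python
# A qubit prepared with a Hadamard gate has equal amplitude for |0> and |1>;
# measuring it many times yields roughly 50/50 counts of '0' and '1'.
from qiskit import QuantumCircuit, BasicAer, execute

qc = QuantumCircuit(1, 1)
qc.h(0)           # put the qubit into superposition
qc.measure(0, 0)  # readout collapses it to a classical 0 or 1

backend = BasicAer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1000).result().get_counts()
print(counts)     # e.g. {'0': 503, '1': 497}
```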

Google Sycamore

Google’s engineers recently demonstrated massive advances in superconducting quantum computing with state-of-the-art gate fidelities on a 53-qubit device. In their published findings from the Sycamore system, the team hailed its ability to solve a particularly difficult problem in 200 seconds, a task that, according to their researchers, would take the world’s current fastest classical computer, IBM’s Summit, 10,000 years. The team accomplished this feat by giving Sycamore a very specific problem to solve, called random circuit sampling.

Sycamore processor

Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Owing to quantum interference, some bitstrings are much more likely to occur than others when the experiment is repeated many times. However, finding the most likely bitstrings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and the number of gate cycles (depth) grow. Randomness is an important resource in computer science, and a random quantum circuit with certifiable quantum randomness is the gold standard, especially if the numbers can be self-checked (certified) to come from a quantum computer.
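To make the task concrete, here is a hedged toy version of circuit sampling on a handful of simulated qubits (nothing like Sycamore’s 53): generate a random circuit with Qiskit’s random_circuit helper, sample it, and tally which bitstrings appear most often. The qubit count, depth and shot count below are arbitrary choices for illustration.

```python
# Toy random-circuit sampling on a local simulator: because of quantum
# interference, some bitstrings occur far more often than others.
from qiskit import BasicAer, execute
from qiskit.circuit.random import random_circuit

n_qubits, depth, shots = 5, 4, 2000
qc = random_circuit(n_qubits, depth, measure=True, seed=7)

backend = BasicAer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=shots).result().get_counts()

# Show the five most frequent bitstrings and their empirical probabilities
for bits, c in sorted(counts.items(), key=lambda kv: -kv[1])[:5]:
    print(bits, c / shots)
```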

Current research with quantum computers & Machine Learning

Quantum machine learning algorithms translate machine learning methods into algorithms built from the building blocks of quantum information processing. Quantum algorithms for k-nearest neighbors and clustering, for example, are based on amplitude amplification. Quantum kernel methods such as support vector machines and Gaussian processes are based on routines for quantum matrix inversion or density matrix exponentiation.

Current development in quantum computing can be thought of as a collection of different paradigms on a spectrum of how far they are abstracted from the underlying physics; the closer a paradigm sits to the physics, the steeper the learning curve for developers and software engineers. Discrete variable gate-model quantum computing is the most common paradigm, where qubits supplant bits and logical transformations are replaced by a finite set of unitary gates that can approximate any arbitrary unitary operation.

Continuous variable gate-model quantum computing is another major paradigm that is closer to the physics way of thinking about quantum mechanics. It shares a significant portion of its terminology with quantum optics.

There are many curated lists of open-source quantum software projects available on GitHub, along with thriving developer communities for major projects like Qiskit, Cirq and Strawberry Fields. Each provides comprehensive documentation on the types of software and applications the tool was designed for.

Quantum Control through Deep Reinforcement Learning

In their paper on Universal Quantum Control through Deep Reinforcement Learning, Google’s researchers propose that quantum control via deep reinforcement learning (RL) could be used in broader applications such as quantum simulation, quantum chemistry, and quantum milestone tests.

A significant challenge in present-day quantum computing is the development of a physical model for a realistic quantum control process, so that error rates can be reliably predicted. This is important because the quantum information lost during computation, known as “leakage,” not only introduces errors that destroy useful quantum information but also eventually degrades the quantum computer’s performance. To tackle this, Google’s team introduced a quantum control cost function covering leakage errors, control constraints, total run-time and gate infidelity, so that leaked information could be accurately accounted for. This enables reinforcement learning techniques to optimize such soft penalty terms without compromising control over the system.
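Purely to illustrate the shape of such a cost function (the terms and weights below are invented for this sketch and are not the paper’s actual formulation), the penalties could be combined roughly like this:

```python
# Illustrative only: a composite control cost that adds soft penalties for
# leakage, control-constraint violations and run-time to the gate infidelity.
# The weights and penalty forms here are hypothetical, not Google's.
def control_cost(infidelity, leakage, constraint_violation, runtime,
                 w_leak=1.0, w_con=0.1, w_time=0.01):
    return (infidelity
            + w_leak * leakage
            + w_con * constraint_violation
            + w_time * runtime)

# An RL agent would receive (some transform of) the negative cost as its reward.
print(control_cost(infidelity=0.02, leakage=0.001,
                   constraint_violation=0.0, runtime=20.0))
```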

An efficient optimization tool was developed to make use of the new quantum control cost function. The team chose trust-region reinforcement learning, an on-policy deep reinforcement learning method, for the task. In quantum systems the control landscape is often high-dimensional and inevitably crowded with a large number of non-global solutions, and on-policy reinforcement learning is advantageous in such cases because it can exploit non-local features in control trajectories. The method showed good performance on all benchmark problems and robustness against sample noise.

Google sees high potential in reinforcement learning techniques using deep neural networks for qubit control optimization. Their abilities to harness non-local regularities of noisy control trajectories and to facilitate transfer learning between tasks have inspired researchers to adopt control methods built on deep reinforcement learning.

Classification

Experiments have also focused on using the quantum computer itself as a discriminator. Training data is mapped into a quantum state, analogous to turning color images into 0s and 1s; in this case, the output is a set of qubits in superposition. This data is then fed into a short-depth quantum circuit, which can mostly maintain its quantum properties until the end of the calculation. The quantum variational approach uses a (possibly random) starting set of parameters that are then optimized during training. This is somewhat similar to neural networks, which also use parameterized quantities (weights) and start from a random state. Because of their quantum properties, the particles that drive a quantum computer abstractly inhabit a very large quantum state space, full of possibilities. In theory, this makes separating out the features far easier and faster than on a traditional computer.
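Structurally, such a variational classifier is a data-encoding circuit followed by a short parameterized circuit whose angles act like trainable weights. The sketch below uses illustrative layer choices, not the circuit from any particular paper, and assumes a pre-1.0 Qiskit install:

```python
# Sketch of a variational classifier circuit: a data-encoding layer followed
# by a short parameterized ansatz (the trainable part).
from qiskit.circuit import QuantumCircuit, ParameterVector

n_qubits = 2
x = ParameterVector('x', n_qubits)          # placeholders for the data features
theta = ParameterVector('theta', n_qubits)  # trainable weights

qc = QuantumCircuit(n_qubits)
for i in range(n_qubits):
    qc.ry(x[i], i)       # data encoding
qc.cx(0, 1)              # entangle the qubits
for i in range(n_qubits):
    qc.ry(theta[i], i)   # variational layer, optimized during training
qc.measure_all()

# Binding concrete values produces a runnable circuit for one data point
bound = qc.bind_parameters({x[0]: 0.1, x[1]: 0.7, theta[0]: 0.3, theta[1]: -0.2})
print(bound)
```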

Collecting meaningful data from quantum computing in the cloud, however, is notoriously difficult due to the high levels of experimental noise in the computation. Researchers at IBM and MIT took a big step in the right direction using a two-qubit quantum computing system, showing that it is possible to classify artificial, lab-generated data even in the presence of noise. The goal was to see whether a machine learning classifier could be implemented inside quantum hardware. The data was therefore generated artificially so that it can, in principle, be classified with 100 percent success, which makes it possible to verify the method. Even with the inherent noise of current-generation quantum computers, the team was able to achieve near-perfect classification (you can play with the demo here).

While neural networks are immensely popular these days, and efforts have been made to bring extremely simple neural nets into quantum computing, the two are not a great fit. More fundamental techniques like the SVM, on the other hand, have proved more adaptable. The mathematical theory of kernel methods has a lot in common with quantum mechanics, whereas quantum theory and the theory of neural networks are very unlike each other. This insight is extremely useful, since quantum computing can help speed up kernel-based classifiers.

A quantum computer can also be used to figure out the best kernel: that is, how to best map all that input data into a high-dimensional space so that the features separate in a meaningful way. The classical, silicon-based computer can then take the kernel from its quantum companion and learn the decision rules that distinguish the classes.
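In code this division of labour is straightforward: once the quantum device (or a simulator) has estimated the kernel matrices, a purely classical SVM consumes them. The snippet below assumes hypothetical files holding kernel matrices K_train and K_test already evaluated from a quantum feature map; only the classical scikit-learn side is shown.

```python
# The quantum computer's only job is to estimate the kernel matrices; a
# standard classical SVM then trains and predicts on them.
import numpy as np
from sklearn.svm import SVC

# Assumed to be estimated on quantum hardware / a simulator (hypothetical files):
#   K_train[i, j] = k(x_train[i], x_train[j])
#   K_test[i, j]  = k(x_test[i],  x_train[j])
K_train = np.load('quantum_kernel_train.npy')
K_test = np.load('quantum_kernel_test.npy')
y_train = np.load('y_train.npy')
y_test = np.load('y_test.npy')

clf = SVC(kernel='precomputed')
clf.fit(K_train, y_train)
print('test accuracy:', clf.score(K_test, y_test))
```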

Example Implementation

We will try implementing QSVM, a variational quantum algorithm from this paper. The variational method in quantum theory is a classical technique for finding low-energy states of a quantum system. The rough idea is to define a trial wave function (an ansatz) as a function of some parameters, and then find the values of those parameters that minimize the expectation value of the energy.

To obtain the value of the objective function:

  1. Prepare the ansatz state.
  2. Make a measurement, which samples from the observable onto which the classical problem has been mapped (minimizing its expectation value is the quantum version of the problem).
  3. Repeat.

One always needs to repeat the measurements to obtain an estimate of the expectation value. A quantum computer can provide estimates of the objective function for a given ansatz, which are plugged into an outer classical loop that searches for the parameters yielding the lowest value of the objective function. With those parameters, the best ansatz can then be used to produce sample solutions to the problem that hopefully give a good approximation of the lowest possible value of the objective function.
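A hedged toy version of this loop, using a one-parameter single-qubit ansatz and a sampled ⟨Z⟩ expectation value as the objective (choices made only for illustration), might look like this:

```python
# Toy variational loop: a one-parameter ansatz, a sampled <Z> expectation value
# as the objective, and a classical optimizer as the outer loop.
from qiskit import QuantumCircuit, BasicAer, execute
from scipy.optimize import minimize

backend = BasicAer.get_backend('qasm_simulator')
shots = 1024

def objective(params):
    theta = params[0]
    qc = QuantumCircuit(1, 1)
    qc.ry(theta, 0)   # prepare the ansatz state
    qc.measure(0, 0)  # sample in the computational basis
    counts = execute(qc, backend, shots=shots).result().get_counts()
    # Estimate <Z> from the measurement statistics
    return (counts.get('0', 0) - counts.get('1', 0)) / shots

# The outer classical loop searches for parameters minimizing the objective
result = minimize(objective, x0=[0.1], method='COBYLA')
print('optimal theta:', result.x, 'estimated <Z>:', result.fun)
```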

SVM

In a classical SVM, we have a set of points, each belonging to one of two groups, and we want to find a line that separates the two groups. This boundary can be linear, but it can also be much more complex, which is achieved through the use of kernels.

In the case of a quantum SVM, quantum feature maps are used to translate the classical data into quantum states, and the kernel of the SVM is built out of these quantum states. After calculating the kernel matrix on the quantum computer, we can train the quantum SVM in the same way as a classical SVM.

First we need to install the required libraries. I tried several different libraries and found IBM’s Qiskit the easiest to get up and running for machine learning experiments. I have placed the full experiment code in this notebook.

Qiskit quantum circuits can be executed on a local simulator or on a real quantum computer via IBMQ. The code here does not use IBMQ and is executed on a local QASM simulator. Qiskit Aqua provides a predefined class to train the whole QSVM: we only have to provide a feature map, a training set and a test set, and Qiskit does the rest of the work for us.
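The sketch below outlines that workflow under the Qiskit Aqua API of that era (since deprecated; exact imports vary by Qiskit version). The dataset, the PCA reduction to two features and all size parameters are illustrative choices, not necessarily those of the linked notebook.

```python
# Sketch of the QSVM workflow with Qiskit Aqua: reduce a classical dataset to
# two features, wrap it in the class -> samples dict format Aqua expects,
# pick a quantum feature map, and run on the local QASM simulator.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

from qiskit import BasicAer
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import QSVM
from qiskit.circuit.library import ZZFeatureMap

# Classical preprocessing: 2 features = 2 qubits in the feature map
data = load_breast_cancer()
X_pca = PCA(n_components=2).fit_transform(data.data)
X = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X_pca)
X_train, X_test, y_train, y_test = train_test_split(
    X, data.target, test_size=0.3, random_state=42)

training_input = {'benign': X_train[y_train == 1][:20],
                  'malignant': X_train[y_train == 0][:20]}
test_input = {'benign': X_test[y_test == 1][:10],
              'malignant': X_test[y_test == 0][:10]}

# Quantum feature map encoding each data point, and the QSVM built on it
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
qsvm = QSVM(feature_map, training_input, test_input)

backend = BasicAer.get_backend('qasm_simulator')
result = qsvm.run(QuantumInstance(backend, shots=1024))
print('testing success ratio:', result['testing_accuracy'])
```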

Let’s plot and observe the original data set.
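For example, reusing X and data from the sketch above, a quick scatter plot coloured by the true labels shows how separable the two classes look before any training:

```python
# Visualize the 2-D, PCA-reduced dataset coloured by its true class label
import matplotlib.pyplot as plt

plt.scatter(X[:, 0], X[:, 1], c=data.target, cmap='coolwarm', s=15)
plt.xlabel('PCA component 1')
plt.ylabel('PCA component 2')
plt.title('Original data set')
plt.show()
```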

The success ratio shows how accurately the QSVM predicts the labels. We got a testing success ratio of 0.9 in this experiment. Let’s plot the results. The first result plot shows the label predictions of the QSVM.

The second plot shows the test labels.

We can repeat the steps above on other datasets, using load_iris(), load_digits() or load_wine() from sklearn.datasets, to observe how the results change.
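For instance, swapping in the wine data only changes the loading and preprocessing step; the rest of the pipeline sketched above stays the same (note that wine has three classes, so QSVM would also need a multiclass extension):

```python
# Same pipeline, different dataset: reduce the wine data to two features.
# Wine has three classes, so QSVM would also need its multiclass_extension
# argument in addition to the steps shown earlier.
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

wine = load_wine()
X_wine = MinMaxScaler(feature_range=(-1, 1)).fit_transform(
    PCA(n_components=2).fit_transform(wine.data))
# ...then rebuild training_input / test_input from X_wine and wine.target as before
```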

Note that, apart from estimating the quantum kernel, the QSVM algorithm performs only classical optimization. In the end there is no difference from a classical SVM, except that the kernel comes from a quantum feature map.

Conclusion

A fault-tolerant quantum computer promises a number of valuable applications, and researchers are finding practical ways to introduce quantum computing into the real world. With the arrival of quantum processors, collaborators and academic researchers, along with a large number of companies, can contribute to the development of algorithms.

Notable research on the application of quantum computers includes the design of new materials: lightweight batteries for cars and airplanes, new catalysts that can produce fertilizer more efficiently (a process that today produces over 2% of the world’s carbon emissions), and more effective medicines. Creative researchers are also an important resource for innovation and can test the limits and applications of the newly available computational resources. For quantum computing to positively impact society in the near future, the quantum community must focus on building, and making accessible, powerful programmable quantum computing systems that can implement and reliably reproduce a broad array of quantum demonstrations, algorithms and programs.

References

  1. https://youtu.be/1lIfbqfoGMo
  2. On “Quantum Supremacy” | IBM Research Blog
  3. What is “quantum supremacy” and why is Google’s breakthrough such a big deal? — Vox
  4. Hands-On with Google’s Quantum Computer — Scientific American
  5. Google AI Blog: Quantum Supremacy Using a Programmable Superconducting Processor
  6. Google Accelerates Quantum Computation with Classical Machine Learning
  7. Quantum Computing Explained (in Mere Minutes!) — The New York Times
  8. Universal quantum control through deep reinforcement learning
  9. A quantum machine learning algorithm based on generative models | Science Advances
  10. IBM Q for AI — IBM Research
  11. IBM casts doubt on Google’s claims of quantum supremacy | Science | AAAS
  12. Quantum Information and AI — Towards Data Science
  13. Opinion | Why Google’s Quantum Supremacy Milestone Matters — The New York Times
  14. Beyond Weird: Decoherence, Quantum Weirdness, and Schrödinger’s Cat — The Atlantic
  15. Finally, Proof That Quantum Computing Can Boost Machine Learning
  16. Revolt! Scientists Say They’re Sick of Quantum Computing’s Hype | WIRED
  17. Quantum supremacy is coming. It won’t change the world | Technology | The Guardian
  18. Quantum Machine Learning Toolbox — Quantum Machine Learning Toolbox 0.7.1 documentation
  19. Open source software in quantum computing
  20. Exploring Quantum Programming from “Hello World” to “Hello Quantum World”
  21. Supervised learning with quantum-enhanced feature spaces (arXiv:1804.11326)
