How to train your QGAN

Quantum machine learning with PennyLane

Nov 20, 2018

By Nathan Killoran, Josh Izaac, and Christian Gogolin

Machine learning is frequently presented as one of the ‘killer apps’ of quantum computers. Unlike conventional digital computers, which employ classical bits, quantum computers manipulate physical systems at their most fundamental level, opening up a much richer structure for computation. Quantum computers natively process quantum information, which corresponds to vectors in very high-dimensional vector spaces.

Intuitively, this makes quantum computers very well suited to machine learning tasks. Indeed, sufficiently large fault-tolerant quantum computers will eventually give computational speedups for the basic linear algebra calculations prevalent in many machine learning algorithms.

The first generation of quantum hardware is now here. It is also accessible — often for free — over the cloud. This emerging quantum hardware has inspired quantum computing experts to rethink their previous mindsets. Instead of designing algorithms for perfect large-scale devices, we have begun to explore the awesome things that can already be done with the devices we have.

This has led to a flurry of new algorithms: the quantum approximate optimization algorithm (QAOA), variational quantum eigensolvers (VQEs), quantum classifiers, quantum generative adversarial networks (QGANs), quantum neural networks (QNNs), and — more generally — the notion of hybrid quantum-classical models. These cutting-edge ideas are exciting, yet their potential is still largely unexplored. Quantum machine learning and quantum optimization have become very hot research areas, with a whirlwind of new work appearing in the past few years.

Number of research papers released each year containing the term ‘quantum machine learning’ (source: Google Scholar)

At Xanadu, we have one of the best quantum machine learning research teams in the world, regularly contributing to the cutting edge of the field. Yet we recognized early on the need to open up this hot area to a broader audience, taking it beyond the current circle of insiders and experts. We do this to accelerate the exploration of new ideas, to expand the search for near-term quantum machine learning algorithms, and to establish best practices for building quantum and hybrid models.

Imagine the ideas that will emerge when anyone can train quantum computers as easily as they would train a neural network. With this in mind, we created a dedicated software library for quantum machine learning — PennyLane.

Enter PennyLane

In designing PennyLane, we took the ideas we like best from classical machine learning and ported them to work natively on quantum computers.

For instance, the past half decade has seen enormous growth in deep learning, the subfield of machine learning dealing with many-layered neural networks. One of the key drivers of this expansion is dedicated software libraries such as Theano, TensorFlow, PyTorch, and others. The ability to rapidly implement and train models with these high-level libraries has given the field a huge boost, since heuristic methods that work well in practice are often discovered through trial and error.

These libraries have two main features in common:

(i) the ability to compute on special-purpose hardware (GPUs, TPUs); and

(ii) automatic differentiation, commonly implemented using the famous backpropagation algorithm.

For PennyLane, the special-purpose hardware is — obviously — quantum computing devices. However, there was no automatic differentiation software for quantum computations before PennyLane.

Backpropagating through PennyLane

Without getting too technical, automatic differentiation is the ability of software to automatically compute derivatives (or gradients) of computer code with respect to its free parameters.

When you hear about “learning” or “training” in deep learning, the key ingredient is automatic differentiation. The derivative of a function tells us how that function changes if we adjust its parameter a tiny bit. With access to this derivative information, we can progressively modify and optimize a machine learning model to suit our needs. Commonly, automatic differentiation is implemented using the backpropagation algorithm, which builds up the gradient using the chain rule from calculus, looking piece-by-piece at all subroutines of the overall computation.
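To make this concrete, here is a tiny classical sketch using Python's autograd library (the same library PennyLane builds on, as described below); the function f is invented purely for illustration:

```python
# Minimal classical automatic differentiation with autograd.
import autograd.numpy as np
from autograd import grad

def f(x):
    return np.sin(x) ** 2      # some differentiable computation

df = grad(f)                   # df(x) = 2*sin(x)*cos(x) = sin(2x)

print(f(0.5))                  # ~0.2298
print(df(0.5))                 # ~0.8415, i.e. sin(1.0)
```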

There are two paths for applying automatic differentiation techniques to quantum computations. The first is to simulate a quantum computation using a classical machine learning library. Earlier this year, we released Strawberry Fields, which does exactly that via its TensorFlow simulator backend. This strategy, however, can never give a quantum advantage, since it is inherently limited by the inefficiency of simulating quantum physics on classical computers.

The second strategy is to build a version of automatic differentiation that is naturally compatible with quantum hardware, and which will continue to work and become increasingly useful as quantum computers get progressively more advanced. But how can we compute gradients of quantum computing circuits when even simulating their output is classically intractable? The key insight is to use the same quantum hardware for both evaluating the quantum circuit and for computing its gradients.

Every gate in a quantum circuit carries out some transformation. It turns out that in most cases of interest, we can reuse the same gate, with only small modifications, to evaluate the derivative with respect to the parameter of this transformation. This per-gate derivative information can then be fed into the backpropagation algorithm. The backpropagation algorithm is still implemented on a classical computer, and it cannot see the inner workings of a quantum circuit (i.e., the intermediate quantum states of the circuit).

However, since the quantum devices can evaluate their own gradients efficiently, backpropagation never needs to penetrate the quantum circuit. PennyLane makes use of Python’s “autograd” library to perform automatic differentiation, providing the key additions that allow quantum computations to be differentiated. This means that PennyLane has full automatic differentiation support for classical, quantum, and hybrid computations.
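As a rough sketch of how this looks in code (using PennyLane's Python interface; the single-qubit circuit below is invented for illustration), the derivative of a rotation gate can be obtained by running the very same circuit at two shifted parameter values, which is exactly what qml.grad arranges for us behind the scenes:

```python
import pennylane as qml
from pennylane import numpy as np   # autograd-compatible NumPy

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))   # analytically equal to cos(theta)

theta = np.array(0.7, requires_grad=True)

# Derivative from the same circuit evaluated at shifted parameters
# (no access to the intermediate quantum state is needed).
shifted = 0.5 * (circuit(theta + np.pi / 2) - circuit(theta - np.pi / 2))
print(shifted)                      # ~ -sin(0.7) ≈ -0.644

# PennyLane's automatic differentiation returns the same value.
print(qml.grad(circuit)(theta))
```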

Example sketch of the kind of hybrid classical-quantum model that is possible with PennyLane

Taking a stroll down PennyLane

With PennyLane’s easy-to-use interface (inspired by NumPy and TensorFlow), you can code up quantum computational circuits and bind them together with classical processing steps to build a full-blown hybrid computation.
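For a taste of the interface, a minimal hybrid model might look like this (the circuit, the tanh post-processing, and the cost function are all invented for illustration):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def quantum_layer(weights, x):
    qml.RX(x, wires=0)              # encode a classical input
    qml.RY(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

def hybrid_model(weights, x):
    # classical post-processing wrapped around the quantum node
    return np.tanh(quantum_layer(weights, x)) + weights[2]

def cost(weights, x, target):
    return (hybrid_model(weights, x) - target) ** 2
```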

PennyLane also includes a suite of gradient-descent based optimizers, which means that variational quantum circuits can be optimized in the same way as deep neural networks.
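Continuing the sketch above, a training loop could be as short as this (GradientDescentOptimizer is one of the built-in optimizers; the data point is made up):

```python
opt = qml.GradientDescentOptimizer(stepsize=0.1)
weights = np.array([0.1, 0.2, 0.3], requires_grad=True)

for step in range(100):
    # one gradient-descent step; gradients of the quantum part are
    # evaluated on the quantum device itself
    weights = opt.step(lambda w: cost(w, 0.5, 0.0), weights)
```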

What if you want to move code you tested on a simulator to real quantum hardware, or even compare different hardware offerings? That’s also easy; PennyLane is fully hardware-agnostic. With a one-line change, you can ‘hot-swap’ the device running the quantum portion of your computational model, or even construct a model that utilizes multiple different quantum devices together.
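For example, only the line that creates the device needs to change; the plugin device names below are indicative and depend on which plugins you have installed:

```python
dev = qml.device("default.qubit", wires=2)           # built-in simulator
# dev = qml.device("projectq.simulator", wires=2)    # ProjectQ backend (plugin)
# dev = qml.device("projectq.ibm", wires=2)          # IBMQ hardware via the ProjectQ plugin
```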

Our aim is to support the entire growing ecosystem of near-term quantum hardware. For that, PennyLane provides a plugin API with which new quantum devices can easily be made QML-ready. Already, PennyLane has plugins for our home-grown Strawberry Fields quantum photonics library, the open-source ProjectQ framework for quantum computing, and hardware support for IBMQ. Coming soon: support for even more quantum hardware, including PennyLane plugins for Rigetti’s PyQuil and IBM’s Qiskit.

For more details on PennyLane, check out our documentation, and explore the various tutorials available. You can also check out the source code at our GitHub repository, and join the discussion at the PennyLane forums. We welcome code contributions — all users who contribute significantly to the code will be offered the opportunity to be listed as an author on the PennyLane whitepaper.

We are incredibly excited to share PennyLane with you. Join us in developing the next cutting-edge quantum machine learning algorithms!
