Training quantum neural networks with PennyLane, PyTorch, and TensorFlow

Quantum machine learning in the NISQ era and beyond

Feb 13, 2019

By Nathan Killoran and Josh Izaac

Last year, we wrote about QML 1.0, our vision of the road ahead for quantum machine learning. In the months since then, we’ve been working furiously to bring that vision to fruition. Today, we can finally announce that PennyLane, our general-purpose library for quantum computing and machine learning, now integrates with PyTorch, TensorFlow, Strawberry Fields, Forest (Rigetti), and Qiskit (IBM). PennyLane is the first software in the world to bring together the best of quantum computing with the best of machine learning. Now anyone can build and train hybrid computations across CPUs, GPUs, and QPUs¹!

The links between quantum computing and machine learning go back several years. However, many of the “textbook” QML algorithms require fault-tolerant quantum computers. In the current era of noisy intermediate-scale quantum (NISQ) devices, a different set of algorithms, tools, and strategies has begun to take shape. With this in mind, it’s worthwhile to step back, present the key ideas for machine learning in the NISQ era, and survey the current state of play.

[If you are too excited to wait, feel free to skip ahead for examples of hybrid GPU-QPU computation using PennyLane and PyTorch!]

What we talk about when we talk about Quantum Neural Networks

Taken most literally, the term ‘quantum neural network’, or QNN, refers to quantum circuits or algorithms that closely mimic the structure of classical neurons and neural networks, while extending and generalizing them with powerful quantum properties. We recently proposed a QNN which naturally takes advantage of photonics, and there are a number of other proposals in the literature.

However, deep learning has moved beyond the original “vanilla” neural network, with specialized architectures such as ConvNets, LSTMs, ResNets, and GANs rising to prominence. What links these different models together is not the (perhaps dated) notion of an artificial neuron, but rather a broader idea which we will call differentiable computing.

From this perspective, computation is carried out using continuous functions with known derivatives (or gradients). These functions can have adjustable parameters, and the gradients tell us how to adjust (or train) these parameters so that the functions output some desired values. Modern deep learning software libraries, like TensorFlow or PyTorch, are capable of automatic differentiation, making gradient-based optimization and training of deep networks near-effortless for the user.
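For a toy picture of what this looks like in practice, here is a minimal PyTorch sketch: one adjustable parameter, a gradient computed automatically, and a single training step toward a desired output (the numbers are arbitrary, chosen purely for illustration):

```python
import torch

# One adjustable parameter of a continuous function.
w = torch.tensor(0.5, requires_grad=True)
target = 2.0

loss = (3.0 * w - target) ** 2   # a differentiable function of w
loss.backward()                  # automatic differentiation fills in w.grad

with torch.no_grad():
    w -= 0.1 * w.grad            # adjust the parameter so the output moves toward the target
```

Deep learning frameworks scale this same recipe up to millions of parameters without any extra effort from the user.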

“A quantum neural network is any quantum circuit with trainable continuous parameters.”

In the NISQ era, quantum computing is increasingly — and productively — being viewed as a form of differentiable computing. After all, quantum computing is just linear algebra in very high-dimensional vector spaces², with quantum gates having the form of matrix multiplications. The parameters of these gates, associated with the strength of the gate or the length of time it is active, are analogous to the parameters of a deep learning model. With this in mind, we can extend our earlier definition. A ‘quantum neural network’ is any quantum circuit with trainable continuous parameters.

A quantum circuit whose gates have free parameters. These can be trained the same way as a deep neural network.
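To make the matrix picture concrete, here is a standard single-qubit rotation written out in NumPy; its derivative with respect to the gate parameter is known in closed form, which is exactly what makes the parameter trainable (the choice of the R_Y gate here is ours, purely for illustration):

```python
import numpy as np

def ry(theta):
    # A single-qubit R_Y rotation: a 2x2 matrix depending smoothly on theta.
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def d_ry(theta):
    # The derivative of the gate with respect to its parameter, in closed form.
    return 0.5 * np.array([[-np.sin(theta / 2), -np.cos(theta / 2)],
                           [ np.cos(theta / 2), -np.sin(theta / 2)]])

state = ry(0.3) @ np.array([1.0, 0.0])   # apply the gate to the |0> state
```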

This viewpoint of quantum computation also goes by a more technical name in the scientific literature: variational (quantum) circuits³. The name comes from the fact that such circuits were initially proposed for chemistry problems under the name variational quantum eigensolvers, or VQEs. However, variational circuits have lately been extended more broadly into full-fledged, general-purpose quantum machine learning algorithms.

Unlike the more familiar quantum algorithms due to pioneers like Shor and Grover, the variational circuits paradigm has only been established in the past few years. This is largely because the NISQ era forced a rethink of quantum algorithms, but also because of the increasing cross-fertilization between quantum computing and machine learning.

QML 0.1: Porting quantum computing to machine learning

The contemporary paradigm of quantum machine learning introduced above, i.e., quantum circuits as differentiable computations, is hugely promising. But how can we put it into practice? The quickest and easiest thing to do is simply to build a (simulated) quantum computer inside an existing machine learning library.

This is the route we followed with our first software offering, Strawberry Fields, which features a photonic quantum simulator written entirely using TensorFlow. It was the first quantum simulator to offer all the machine learning goodies that TensorFlow provides, in particular the automatic differentiation and optimization features. QuantumFlow, another toolkit released late last year, pursued the same idea.

With these simulators, users can now design and optimize quantum circuits with minimal effort. The value of these tools is evident in the speed and volume of new scientific research they have enabled⁴.
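In miniature, the idea looks something like the following: a one-qubit “simulator” written directly in PyTorch, so that the framework’s automatic differentiation reaches straight through the quantum state (this toy snippet is ours and is not taken from Strawberry Fields or QuantumFlow):

```python
import torch

theta = torch.tensor(0.5, requires_grad=True)

def ry(theta):
    # The same R_Y gate as before, built from PyTorch tensors.
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

state = ry(theta) @ torch.tensor([1.0, 0.0])   # apply the gate to |0>
expval_z = state[0] ** 2 - state[1] ** 2       # <Z> = |amp_0|^2 - |amp_1|^2

expval_z.backward()                            # differentiate through the "simulator"
print(theta.grad)                              # equals -sin(theta) analytically
```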

“While the simulator route is useful for quantum machine learning in the short term, it is fundamentally not scalable.”

Unfortunately, this approach is limited. The whole reason to build quantum computers is that they will be able to do things a classical computer can’t match. A classical simulator, written in TensorFlow, NumPy, C++, or any other framework, will only ever be able to simulate small, limited quantum computations. While the simulator route is useful for quantum machine learning in the short term, it is fundamentally not scalable.

QML 1.0: Porting machine learning to quantum computing

During the NISQ era, new quantum devices are regularly coming online. While imperfect, they are expected to be powerful enough to show quantum advantage or supremacy. With this hardware now available, the question becomes: what should we do with it?

Given the emerging paradigm outlined above, we thought: wouldn’t it be great to train a quantum computer as easily as you would train a deep neural network? And to use the same tools — like PyTorch and TensorFlow — that are so powerful and so popular in deep learning?

“Take all the best features from deep learning software and make them truly native to quantum devices.”

Comparison of some available quantum software libraries.

This is the vision we had in building PennyLane. Instead of simulating a quantum computer within conventional machine learning software, we would take all the best features from deep learning software and make them truly native to quantum devices. From day one, PennyLane has provided two key features which we believe will be crucial for near-term QML⁵:

i) automatic differentiation of quantum circuits; and

ii) a QNode abstraction for building hybrid quantum-classical and multi-device computations (both features are illustrated in the sketch below).
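A minimal sketch of both features together, written with current PennyLane syntax (the circuit, cost function, and parameter values below are ours, chosen only for illustration):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)                        # the QNode: a quantum circuit bound to a device
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=0)
    return qml.expval(qml.PauliZ(0))

def hybrid_cost(weights):
    # Classical post-processing of the quantum output; the pipeline stays
    # differentiable end to end.
    return (circuit(weights) - 0.5) ** 2

weights = np.array([0.1, 0.2], requires_grad=True)
print(qml.grad(hybrid_cost)(weights))  # automatic differentiation through the circuit
```

Swapping “default.qubit” for a hardware device provided by one of the plugins leaves the rest of the code unchanged.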

The initial release of PennyLane leveraged the autograd library to provide these features, via a NumPy-like interface. In the latest release, we complete the vision, providing seamless integration of PennyLane with PyTorch and TensorFlow. You can now place tensor objects from these libraries on real-world quantum hardware, extending the multi-device paradigm to encompass CPUs, GPUs, and now QPUs. Complex multi-stage models can now be built and trained, combining quantum circuits smoothly with deep nets.
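For TensorFlow, the same pattern applies: a QNode bound to the “tf” interface returns TensorFlow tensors and can be differentiated with a GradientTape. A rough sketch (the interface name follows recent PennyLane releases, and the gate choice is arbitrary):

```python
import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="tf")        # QNode outputs become TensorFlow tensors
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = tf.Variable(0.3)
with tf.GradientTape() as tape:
    loss = (circuit(theta) - 0.5) ** 2
print(tape.gradient(loss, theta))      # the gradient flows through the quantum circuit
```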

At the same time, we’ve further expanded the number of quantum devices accessible via PennyLane. PennyLane now interfaces with Xanadu’s Strawberry Fields, Rigetti’s Forest/Quantum Cloud Service, IBM’s Qiskit, and ProjectQ from ETH Zürich. Big thanks to Keri McKiernan and Carsten Blank for contributing to this effort.

Example: Training a quantum circuit with PennyLane and PyTorch

Time to see everything in action. In the simple code stub below, we connect a single-qubit quantum circuit (running on the Forest device) with PyTorch, which processes the circuit’s output.

Combining quantum computations and classical machine learning with PennyLane and PyTorch.

The cost function will try to match the qubit’s state — the direction it points on the Bloch sphere — to a target value, initially at the south pole. Using PennyLane’s automatic differentiation features and the built-in PyTorch optimizers, we can adjust the circuit’s parameters until the qubit matches the target.
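The original post embeds the full script as a gist; the sketch below reconstructs its general shape using the PyTorch interface (the simulator device, gate choices, optimizer, and hyperparameters here are our assumptions; the original runs on a Forest device via the PennyLane-Forest plugin):

```python
import torch
import pennylane as qml

# A single-qubit device; "default.qubit" keeps the sketch self-contained,
# but a Forest backend could be substituted with the appropriate plugin.
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def circuit(phi, theta):
    qml.RX(phi, wires=0)
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

def cost(phi, theta, target):
    # Match the qubit's Pauli-Z expectation to the target value;
    # target = -1 corresponds to the south pole of the Bloch sphere.
    return (circuit(phi, theta) - target) ** 2

phi = torch.tensor(0.011, requires_grad=True)
theta = torch.tensor(0.012, requires_grad=True)
opt = torch.optim.Adam([phi, theta], lr=0.1)
target = -1.0

for step in range(100):
    opt.zero_grad()
    loss = cost(phi, theta, target)
    loss.backward()
    opt.step()
```

Because the quantum circuit behaves like any other differentiable PyTorch component here, it can just as easily be chained with the layers of a deep network.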

As an added challenge, every 100 iterations the target switches poles, and the qubit state has to adjust in real time. Here’s what the whole process looks like in action:

Training a qubit state (coloured arrow) to match a target point (red x).

This is just the tip of the iceberg for combining quantum computing with machine learning. Try it out yourself! If you create something cool with PennyLane, be sure to enter it in our software competition, for a chance to win $1000!

Footnotes:

[1] And coming soon, support for TPUs.

[2] Exactly which vector space depends on the type of quantum computer. For example, the state of a single qubit lives in a vector space of dimension 2. Photonic quantum computers, on the other hand, work in L², the infinite-dimensional space of square-integrable functions.

[3] Sometimes called parametric (quantum) circuits.

[4] See, for example, the following works: [a], [b], [c], [d], [e], [f], [g].

[5] The key trick is to use the quantum device itself to evaluate gradients of quantum circuits. For more technical details on how PennyLane accomplishes this, check out the documentation and these papers: [h], [i].
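As a rough illustration of that trick: for gates like RX, the exact gradient of an expectation value can be obtained from two extra circuit evaluations at shifted parameter values (the parameter-shift rule), which a quantum device can perform natively. A minimal sketch with an arbitrary parameter value:

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta, shift = 0.4, np.pi / 2

# Two additional circuit runs give the exact gradient; no finite differences needed.
grad = (circuit(theta + shift) - circuit(theta - shift)) / 2
print(grad, -np.sin(theta))   # for this circuit <Z> = cos(theta), so both equal -sin(0.4)
```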

Attribution: “TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.”
