Making a Neural Network, Quantum

Hello world, we are in Xanadu.

Tom Bromley
XanaduAI
7 min read · Feb 20, 2018


We are working to manufacture the world’s first all on-chip photonic quantum processor, using cutting-edge techniques to harness the powerful properties of light. The purpose of this blog is to keep you updated on our progress. From exciting new findings to testing challenges, and everything in between, we will keep you in tune with the latest in the world of quantum tech.

Quantum machine learning is one of the primary focuses at Xanadu. Our machine learning team is strengthening the connections between artificial intelligence and quantum technology. In this blog post, we discuss how a neural network can be made quantum, potentially giving huge increases in operating speed and network capacity. No prior scientific or mathematical background is required; even if you’ve never heard of a neural network, read on! For more details, a paper explaining these findings is available here.

Neural networks

You have probably benefited from machine learning today. And yesterday. As well as the day before. Machine learning is becoming increasingly embedded in our daily routine. If you have checked a social media account, performed an online search, or even commuted to work, a remote server may have shaped your experience using a wide range of learning algorithms. The objective of machine learning is to give computers the power to make predictions and generalizations from data without explicitly telling them how to do so. It is an extremely exciting and fast-evolving area; take a look at a beginner’s introduction.

A very successful approach in machine learning is to design an artificial neural network, inspired by the structure of neurons in the brain. Imagine a collection of points, each of which can be in one of two states: “on” or “off”. These points are interconnected by wires of variable strength, as shown in the diagram below. The network operates by letting each neuron decide its state based upon the states of the neurons connected to it, weighted by the strength of those connections. One of the advantages of neural networks is the ability to choose the structure based upon the problem; see here to appreciate the neural network zoo! Neural networks have been used for a variety of applications, including voice recognition and cancer detection.
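
To make this concrete, here is a minimal sketch of the update rule for a single neuron, using +1 and -1 for “on” and “off”. The weight values and the update_neuron helper are made up for illustration; this is a toy, not code from our paper.

```python
import numpy as np

# States of four neurons, each either "on" (+1) or "off" (-1).
x = np.array([1, -1, -1, 1])

# Symmetric connection strengths between the neurons (illustrative values);
# w[i, j] is the strength of the wire joining neurons i and j.
w = np.array([
    [0.0,  0.5, -0.3,  0.2],
    [0.5,  0.0,  0.1, -0.4],
    [-0.3, 0.1,  0.0,  0.6],
    [0.2, -0.4,  0.6,  0.0],
])

def update_neuron(x, w, i):
    """Set neuron i "on" or "off" from the weighted states of its neighbours."""
    x[i] = 1 if w[i] @ x >= 0 else -1
    return x

update_neuron(x, w, 0)
```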

Quantum neural networks

So where can quantum technology help? At Xanadu, we have been looking at how to embed a type of neural network into a quantum system. Our first step is to use a property called quantum coherence, where a system can concurrently exist in a combination of states, known as a coherent superposition. The trick is then to associate each neuron with a state of the system: if the neuron is “on” then its corresponding state appears with a positive sign in the superposition, while if the neuron is “off” then it appears with a negative sign. We have focused on systems of multiple quantum bits (qubits), each of which can be either “up” or “down”. By considering all of the possible combinations of “up” and “down” in our collection of qubits, you can see that the number of neurons we can store grows exponentially with the number of qubits. For example, the diagram below shows that we can store any configuration of 4 neurons in only 2 qubits!
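
As a rough illustration of the encoding, the four amplitudes of a two-qubit state can hold a configuration of four neurons, with the sign of each amplitude recording “on” or “off”. The encode helper below is hypothetical and glosses over details of the embedding used in our paper:

```python
import numpy as np

def encode(neurons):
    """Store 2**n neuron states (+1 or -1) in the amplitudes of n qubits.

    Neuron j maps to the computational basis state |j>; "on" gives a
    positive amplitude and "off" a negative one. Dividing by the norm
    makes the result a valid quantum state.
    """
    neurons = np.asarray(neurons, dtype=float)
    return neurons / np.linalg.norm(neurons)

# Four neurons (on, off, off, on) stored in just two qubits:
print(encode([1, -1, -1, 1]))  # [ 0.5 -0.5 -0.5  0.5]
```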

This embedding of neurons within qubits not only increases storage capacity; it also unlocks a huge number of quantum algorithms that can help us speed up processing of the network. The first step is to choose the structure of the neural network, so that we know which quantum algorithm is best suited to give us a performance advantage. This post focuses on the Hopfield network, a structure in which every neuron is connected to every other with a variable weight (forming a complete graph). The Hopfield network can be used as a content-addressable memory system: configurations of the neurons are associated with patterns (for example, images), which are stored by altering the weights of the connections between neurons. This is known as Hebbian learning. New patterns can then be loaded into the Hopfield network, which is processed with the objective of recovering the most similar pattern stored in memory.
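
The Hebbian rule itself (“neurons that fire together, wire together”) can be sketched in a few lines. Assuming the textbook formulation, where the weights are W = (1/d) * sum_k x_k x_k^T with a zeroed diagonal (our paper may differ in normalization details):

```python
import numpy as np

def hebbian_weights(patterns):
    """Build Hopfield weights from +/-1 patterns via the Hebbian rule."""
    patterns = np.asarray(patterns, dtype=float)
    d = patterns.shape[1]                # number of neurons
    w = patterns.T @ patterns / d        # each pattern imprints on the weights
    np.fill_diagonal(w, 0.0)             # no neuron connects to itself
    return w

# Store two 4-neuron memories in the connection weights:
W = hebbian_weights([[1, -1, 1, -1],
                     [1, 1, -1, -1]])
```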

The conventional way of operating the Hopfield network is to keep picking neurons at random and updating them based on the connected neurons, along with their weights. One of our insights is to realize that the Hopfield network can instead be run in a single step by inverting a matrix containing information on the weights between all neurons. Then, using the embedding into qubits discussed above, we can turn to the famous quantum HHL algorithm to process the Hopfield network. The HHL algorithm can solve such a matrix-inversion problem exponentially faster than the best known algorithms running on standard computers. However, to exploit the HHL algorithm we need to be able to do something called Hamiltonian simulation of our matrix.
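
For contrast, the conventional random-update operation is easy to sketch classically. The toy below stores a single memory and recovers it from a corrupted probe; it is purely illustrative and is not the quantum routine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights storing the single memory (1, -1, 1, -1) via the Hebbian rule.
memory = np.array([1, -1, 1, -1], dtype=float)
w = np.outer(memory, memory) / memory.size
np.fill_diagonal(w, 0.0)

# Start from a corrupted probe: the last neuron is flipped.
x = np.array([1, -1, 1, 1], dtype=float)

# Conventional operation: keep picking neurons at random and updating
# each one from the weighted states of the neurons connected to it.
for _ in range(50):
    i = rng.integers(len(x))
    x[i] = 1.0 if w[i] @ x >= 0 else -1.0

print(x)  # converges to the stored memory [ 1. -1.  1. -1.]
```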

The Hamiltonian of a quantum system governs how it naturally evolves in time. Hamiltonian simulation is therefore the art of making a quantum system evolve in a controlled way so that its evolution follows a given Hamiltonian as closely as possible. One of the novel techniques that we have developed is a method of Hamiltonian simulation for the Hopfield network matrix. This is achieved by repeatedly “partially swapping” in batches of the memory patterns to be stored. By “partial swapping,” we mean that our qubits are partially swapped with another bank of qubits holding sequences of the memory patterns. This construct can be thought of as the quantum analogue of Hebbian learning (qHeb), and we have released a more detailed paper focused on this topic. A diagram summarizing our quantum approach for the Hopfield network is given below. We call our quantum routine qHop, and it uses the quantum subroutine qHeb.
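
To give a flavour of the partial-swap idea, here is a toy simulation in the spirit of density matrix exponentiation: small partial swaps with fresh copies of a “memory” state make a system evolve approximately as if that state were its Hamiltonian. The two-qubit example below is an illustration only, not the qHeb construction itself:

```python
import numpy as np
from scipy.linalg import expm

# SWAP operator on two qubits.
S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=complex)

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # "memory" state
sigma = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)    # system state

t, n_steps = 1.0, 200
U = expm(-1j * S * (t / n_steps))  # a partial swap: close to the identity

for _ in range(n_steps):
    joint = np.kron(sigma, rho)  # bring in a fresh copy of the memory state
    joint = U @ joint @ U.conj().T
    # Trace out the memory qubit, keeping the system qubit.
    sigma = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# The repeated partial swaps approximate evolution under H = rho:
V = expm(-1j * rho * t)
exact = V @ np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex) @ V.conj().T
print(np.round(sigma, 2))  # close to 'exact'; error shrinks as n_steps grows
print(np.round(exact, 2))
```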

So, how can this help?

Encoding a neural network within qubits gives an exponential advantage in storage capacity, while the algorithms qHop and qHeb team up to give an exponential increase in processing speed. This means that we expect to run larger neural networks faster on a quantum processor than we could using a standard computer. The Hopfield network itself has applications as a pattern recognition system, as well as for solving the travelling salesman problem; read this book for a very clear explanation.

We have highlighted in particular the application of the Hopfield network within genetics as a recognizer of infectious diseases. Imagine that an outbreak of flu has occurred and scientists have partially sequenced the genetic code of the virus. Their goal is to match the genetic sequence with one of the known strains of flu, such as H1N1 or H5N1. By loading the partial sequence into the Hopfield network, which has already stored all the known strains within the neuron connection weightings, the scientists can work out which strain of flu has caused the outbreak. In the image below, we show how the genetic data, in terms of the RNA bases A, C, G, and U, can be stored in the neurons of the network. The plot shows a comparison between simulated results of operating the Hopfield network using the conventional approach and our new matrix-inversion based approach. Running this algorithm on a quantum processor would also give improvements in storage capacity and operating speed.
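
One simple way to picture such an encoding is to map each RNA base to a pair of +/-1 neurons, so a sequence of length L uses 2L neurons. The scheme, strain names, and snippets below are illustrative placeholders, not the exact encoding or data from our paper:

```python
import numpy as np

# Each RNA base becomes two +/-1 neurons (one of several possible codes).
BASE_TO_NEURONS = {"A": (1, 1), "C": (1, -1), "G": (-1, 1), "U": (-1, -1)}

def encode_sequence(seq):
    """Map an RNA sequence to a +/-1 neuron configuration."""
    return np.array([v for base in seq for v in BASE_TO_NEURONS[base]])

# Hypothetical snippets standing in for known flu strains; these patterns
# would be imprinted on the Hopfield weights via Hebbian learning, and a
# partially sequenced sample would then relax to the closest stored strain.
strains = {"H1N1-like": "ACGUAC", "H5N1-like": "GGUACA"}
patterns = {name: encode_sequence(s) for name, s in strains.items()}
```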

What’s next?

We are very excited to uncover improvements to the Hopfield network through quantum mechanics. Yet, there is still more work to be done! The question of how to quickly read in and out data from our quantum device still needs to be addressed.

At the same time, the experimental team here at Xanadu has been working on innovative chip designs and implementations of photonic quantum processors. One of our main objectives is to combine new insights in quantum machine learning with real-world photonic quantum processors. We hope to use the power of laser light within our chip, which can go beyond even the power of qubits, to make a disruptive impact on machine learning.

Stay posted for more breakthroughs!

Tom @ Xanadu HQ
