Artificial Intelligence and Machine Learning
What comes to mind when you hear the words Artificial Intelligence (AI)? Not too long ago, this phrase was reserved for an imagined distant future in which humans had robot servants and self-driving cars. Sound familiar? This is the world we live in today. We have personal assistants like Siri to answer our questions, Teslas that can drive us from point A to point B, and endless Snapchat filters that can transform our appearance instantly. The age of AI is here.
Machine Learning (ML) is a subset of AI that leverages algorithms to teach computers to make decisions like humans. One particular ML algorithm that aims to mimic the human brain is the Artificial Neural Network. Neural networks model the way our brains work by taking in information, processing it through a sequence of artificial neurons, and producing an output (Figure 1). Neural networks power many of today’s most prominent AI technologies.
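The "sequence of artificial neurons" idea can be sketched in a few lines of Python; the layer sizes, weights, and input below are arbitrary placeholders chosen for illustration:

```python
import numpy as np

def sigmoid(x):
    """Squash a value into (0, 1) -- a common artificial-neuron activation."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_out):
    """One forward pass: input -> hidden neurons -> output."""
    hidden = sigmoid(w_hidden @ x)   # each hidden neuron: weighted sum + activation
    return sigmoid(w_out @ hidden)   # the output neuron does the same

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])      # example input vector
w_hidden = rng.normal(size=(4, 3))  # 4 hidden neurons, 3 inputs each
w_out = rng.normal(size=(1, 4))     # 1 output neuron
y = forward(x, w_hidden, w_out)
print(y)  # a single value between 0 and 1
```

Training a network means adjusting those weight matrices until the outputs match the desired answers, which is exactly the step that is so computationally expensive at scale.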
Introduction to Quantum Computing
Quantum computing is another innovation that has the potential to take AI to the next level. Quantum computers use the properties of quantum mechanics to process information. A traditional computer encodes information in bits, which can take a value of 0 or 1. In contrast, quantum computers encode information in qubits. Like a bit, a qubit can take a value of 0 or 1. Unlike a bit, however, a qubit can also exist in a combination of both states at once, a quantum property called superposition. Two qubits can therefore represent a superposition of 4 possible states: 00, 01, 10, and 11. In general, n qubits can represent a superposition of 2^n different states. This (very simplified) picture of superposition is what makes quantum computers so much more powerful than traditional computers: they can represent far more information with far fewer physical resources.
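To see how the state space grows, here is a small NumPy simulation of an n-qubit state vector. The Hadamard gate used to create the superposition is standard, but the simulation itself is just an illustration of the 2^n scaling, not real quantum hardware:

```python
import numpy as np

# A qubit is a length-2 vector of amplitudes; n qubits live in a 2**n-dimensional space.
zero = np.array([1.0, 0.0])                    # the state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition

n = 3
state = zero
for _ in range(n - 1):
    state = np.kron(state, zero)               # |000>: a 2**n-entry vector

# Apply H to every qubit: an equal superposition over all 2**n basis states.
H_n = H
for _ in range(n - 1):
    H_n = np.kron(H_n, H)
state = H_n @ state

print(len(state))                              # 8 = 2**3 amplitudes
print(np.allclose(state, 1 / np.sqrt(2**n)))   # every basis state equally weighted
```

Note the catch hidden in this sketch: simulating n qubits classically takes 2^n numbers, which is exactly why a real quantum processor can hold states no classical machine can.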
A major roadblock for Neural Networks is the time it takes to train them to make decisions. It is not uncommon to spend weeks, even months, training a neural network due to lack of computing power. What if there was a way to harness the power of quantum computing to accelerate the training process, making these complex networks feasible? Enter Quantum Machine Learning.
Quantum ML is exactly as it sounds — the intersection of ML and quantum computing. Quantum ML aims to leverage the power of quantum computers to process information at speeds significantly faster than traditional computers. However, it is not as simple as transferring existing code from a CPU to a quantum processor. The code needs to be able to speak the quantum language of qubits first. Much of today’s work on quantum ML attempts to solve that exact problem.
Quantum Neural Networks (QNNs)
Functioning neural networks were a huge step forward for AI. However, existing neural networks are not yet able to harness the power of quantum computers. The first step towards creating a working QNN is modelling an individual quantum neuron.
Let’s examine the way quantum neurons are represented, and how QNNs compare to traditional neural networks. Since there are differing interpretations of quantum mechanics, there are different ways to represent a quantum neuron. One such interpretation is Hugh Everett’s Many-worlds Interpretation. In short, this theory states that there are many parallel universes, each playing out every possible history and future at once.
It sounds highly complicated and abstract because it is. Still, the Many-worlds Interpretation provides insight into how a QNN should behave. Just as a traditional neural network mimics the human brain, a QNN can mimic quantum physics. Researchers at Penn State University used this interpretation to develop a methodology for building QNNs.
Traditional neural networks use a single network to store many patterns. What if QNNs used many networks to store many patterns, just like how there may be many universes that contain many realities? Quantum superposition could make this possible. Remember, superposition means that it is possible for a qubit to be in multiple states at once. Extending this analogy to neural networks, in theory, a QNN would be able to store all possible patterns in superposition at once. Each pattern in the network thus represents its own parallel universe.
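To make the "many patterns in one superposition" idea concrete, here is a hedged NumPy sketch that encodes a set of bit patterns into a single state vector. The encoding scheme is an illustration of the concept, not the Penn State construction:

```python
import numpy as np

def patterns_to_state(patterns, n):
    """Encode a set of n-bit patterns as one equal superposition.

    Each pattern gets a nonzero amplitude at its basis-state index, so a
    single 2**n-dimensional vector 'stores' every pattern at once.
    """
    state = np.zeros(2**n)
    for p in patterns:
        state[int(p, 2)] = 1.0
    return state / np.linalg.norm(state)   # normalize so probabilities sum to 1

state = patterns_to_state(["101", "010", "111"], n=3)

# Measuring this state yields each stored pattern with equal probability.
probs = state**2
print(probs[0b101], probs[0b010], probs[0b111])  # each 1/3
```

In this picture, each pattern with nonzero amplitude is one "branch" of the superposition, which is the loose analogy to a parallel universe in the Many-worlds framing.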
This is one of many theorized frameworks for QNNs. If you are interested in the nuanced details of this theory, please check out the full paper. The actual implementation of a QNN that represents multiple parallel universes is not yet feasible. However, it is possible to model a single quantum neuron.
Per MIT Technology Review, a research team at the University of Pavia in Italy implemented the world’s first single-layer neural network on a quantum computer in 2018 (Figure 2).
In a classical neural network with a single neuron (a), the output is a weighted sum of the input vector passed through an activation function that maps it to a binary output. At an abstract level, the QNN functions in the same way, but the implementation on a quantum processor is different. The first layer of the quantum network encodes the input vector into quantum states. The second layer then performs unitary transformations on that encoded input, playing the role that the weight vector plays in the classical neural network; you can think of a unitary transformation as the quantum analogue of multiplying by the weights. Finally, the result is written onto an ancilla qubit, which produces the final output.
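The classical side of the comparison, a single neuron producing a binary output, fits in a few lines of Python; the input and weights here are arbitrary examples:

```python
import numpy as np

def neuron(x, w, b=0.0):
    """Classical single neuron: weighted sum of inputs -> binary threshold output."""
    return 1 if np.dot(w, x) + b > 0 else 0

x = np.array([1.0, -1.0, 1.0])   # input vector
w = np.array([0.5, 0.5, 0.5])    # weight vector
print(neuron(x, w))              # 1: the weighted sum (0.5) is above the threshold
```

The quantum version replaces this one dot product with state encoding, a unitary layer, and a measurement, but the input-weights-output shape of the computation is the same.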
The implementation of the unitary transformations on a quantum processor is complex (Figure 2). At a high level, the input passes through a series of gates that form a quantum circuit. These gates, denoted Z, H⊗N, and X⊗N, play the role of the weight vectors in a traditional neural network.
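The gates named above have simple matrix forms, and the ⊗N notation (the same gate applied to each of N qubits) can be built with Kronecker products. A minimal NumPy sketch of one such gate layer, as an illustration rather than the Pavia team's actual circuit:

```python
import numpy as np

# Standard single-qubit gate matrices.
Z = np.array([[1, 0], [0, -1]])                # phase flip
X = np.array([[0, 1], [1, 0]])                 # bit flip (quantum NOT)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard

def tensor_power(gate, n):
    """G ⊗ G ⊗ ... ⊗ G: the same gate acting on each of n qubits."""
    out = gate
    for _ in range(n - 1):
        out = np.kron(out, gate)
    return out

n = 2
U = tensor_power(X, n) @ tensor_power(H, n)    # one example layer of gates
state = np.zeros(2**n)
state[0] = 1.0                                 # start in |00>

new_state = U @ state
# Gate layers are unitary, so the state stays normalized after each one.
print(np.isclose(np.linalg.norm(new_state), 1.0))  # True
```

Unitarity is the key constraint that separates quantum "weights" from classical ones: a classical weight matrix can be anything, while a quantum layer must preserve the length of the state vector.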
This model is able to accurately mimic the behavior of a single neuron. However, it has not yet been scaled up to a deep neural network consisting of many layers of multiple neurons; a single-layer model like this can identify simple patterns, but nothing more complex. Still, it is a first step towards efficiently training QNNs on quantum hardware, and towards realizing the Many-worlds Interpretation of neural networks.
Benefits of QNNs
QNNs seem extremely complicated and borderline incomprehensible. But there is a good reason why they are being explored. Per the Penn State research team, QNNs offer many advantages compared to traditional neural networks, including:
- exponential memory capacity
- higher performance for a lower number of hidden neurons
- faster learning
- processing speed (10^10 bits/s)
- small scale (10^11 neurons/mm³)
- higher stability and reliability
These benefits address most, if not all, of the limitations of traditional neural networks. This also means there is an extremely high incentive to be a first mover in the quantum ML space and exploit these advantages. Currently, many efforts are being made to implement a fully functional QNN.
Current & Future Work
The Quantum AI team at Google is one of the forerunners of quantum ML. The team constructed a theoretical model of a deep neural network that could be trained on a quantum computer. While the hardware needed to actually implement the model does not yet exist, their results were encouraging. The framework they created should allow for quick adoption of quantum ML once the hardware becomes available.
Additionally, the Google AI team examined how neural network training will work on a quantum processor. A traditional approach to training a network is to randomly initialize the weights prior to training. However, they found that this approach does not work well when transferred to the quantum space: problems such as vanishing gradients arise when training quantum models this way. With their research, the Google AI team is laying the groundwork for the future of quantum ML.
Focusing on getting traditional neural networks to train at quantum speed is a natural starting point and unquestionably highly important work. But the beauty of quantum computing will be the ability to solve quantum problems. These types of problems are far too complex for traditional computers to effectively model, let alone for human brains to understand. So what can quantum ML do that we can’t do currently?
The answer to that question is not as satisfying as one would hope. Given that the frontier of quantum ML has only recently been explored, it is premature to expect any significant quantum problems to be solved. One cool area of exploration is using QNNs to discover new musical instruments. In short, the neural network can generate instruments that play entirely new sounds, such as a quantum instrument playing the famous “Hey Jude” by The Beatles.
Another exciting application of QNNs is modelling black holes and the human brain. The two share a common feature: an incredible capacity for memory storage. In theory, the additional capacity of a quantum computer compared to a traditional computer may be able to capture this feature. The QNN that was created was able to store and retrieve an exponentially large number of patterns. While it is hard to draw any concrete conclusions, these results are encouraging and suggest future applications within quantum physics.
At this point in time, QNNs are closer to infancy than adolescence. The research being done by Google AI and others is helping to lay the foundation for future work in the quantum ML space. The future applications of QNNs are uncertain, and perhaps too abstract for most to comprehend. But what is certain is that ubiquitous quantum ML is on the horizon, and it is easy to get excited about what it will make possible.