Using quantum machine learning to analyze data in infinite-dimensional spaces

Maria Schuld
XanaduAI
Mar 28, 2018

The latest Xanadu research paper proposes a novel perspective on quantum machine learning that sounds crazy at first sight. The core idea is to use the Hilbert space of a quantum system to analyze data. The Hilbert space is the place where the states that describe a quantum system live, and it is a very large place indeed. For a 50-qubit quantum computer, we are talking about a 2⁵⁰-dimensional space, that is, 1,125,899,906,842,624 dimensions, and for a single mode of a continuous-variable quantum computer, the Hilbert space has an infinite number of dimensions. So how can we analyze data in such a Hilbert space if we have no chance to ever visit it, let alone perform computations in it?

Kernel methods implicitly embed data into a higher-dimensional feature space, where we can hope that it becomes easier to analyze.

In fact, machine learning practitioners have been doing this kind of thing for decades, using the beautiful mathematical theory of kernel methods [1]. Kernels are functions that compute a distance measure between two data points, for example between two images or text documents. We can build machine learning models from kernels, the most famous being support vector machines and Gaussian processes. It turns out that every kernel corresponds to a large, sometimes infinite-dimensional, feature space: computing the kernel of two data points is equivalent to embedding these data points into the feature space and taking the inner product of the embedded vectors. In a sense, this is the opposite of neural networks, where we compress the data to extract a few features. Here, we effectively ‘blow up’ the data to make it potentially easier to analyze.
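The classical version of this trick fits in a few lines of code. The sketch below (plain NumPy, with a quadratic polynomial kernel chosen purely for illustration) checks that the implicit kernel value equals an explicit inner product in a six-dimensional feature space:

```python
import numpy as np

def feature_map(x):
    """Explicitly embed a 2D point into the 6D feature space
    of the quadratic polynomial kernel (x.y + 1)^2."""
    x1, x2 = x
    return np.array([
        1.0,
        np.sqrt(2) * x1,
        np.sqrt(2) * x2,
        x1 ** 2,
        x2 ** 2,
        np.sqrt(2) * x1 * x2,
    ])

def poly_kernel(x, y):
    """The same quantity computed implicitly, without ever
    constructing the feature space."""
    return (np.dot(x, y) + 1.0) ** 2

x, y = np.array([0.3, -1.2]), np.array([0.7, 0.5])
assert np.isclose(np.dot(feature_map(x), feature_map(y)), poly_kernel(x, y))
```

The point is that poly_kernel never builds the six-dimensional vectors; for kernels with infinite-dimensional feature spaces, such as the Gaussian kernel, it never could.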

Mapping inputs into a large space and computing inner products is something that quantum computers can do rather easily. Any device that can encode a data point into a quantum state (which is really almost any quantum device) and estimate the overlap of two quantum states can compute a kernel. Kernel methods are therefore a strikingly elegant approach to quantum machine learning. What is more, if the data encoding strategy is complex enough, we might even find cases where no classical computer could ever compute that same kernel. If we can show that our “quantum kernel” is useful for learning, we have a recipe for a quantum-assisted machine learning algorithm that cannot be run classically: use the quantum device as a special-purpose estimator for kernel functions, and feed these estimates into a classical computer, where a kernel method is trained and used for predictions. Voilà!
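How would the device estimate an overlap? One textbook protocol is the SWAP test, where an ancilla qubit reads 0 with probability (1 + |⟨ψ|ϕ⟩|²)/2. The sketch below simulates only that sampling step in NumPy (the circuit itself is not modelled), to show how the kernel estimate emerges from measurement statistics:

```python
import numpy as np

def swap_test_overlap(psi, phi, shots=10_000, seed=42):
    """Estimate the kernel value |<psi|phi>|^2 from simulated SWAP-test
    measurements: the ancilla yields 0 with probability (1 + |<psi|phi>|^2) / 2."""
    rng = np.random.default_rng(seed)
    p0 = 0.5 * (1 + np.abs(np.vdot(psi, phi)) ** 2)  # exact ancilla statistics
    zeros = rng.random(shots) < p0                   # simulated shot noise
    return 2 * zeros.mean() - 1                      # invert p0 = (1 + k) / 2

# two example states (normalized 2-dimensional vectors)
psi = np.array([1.0, 0.0])
phi = np.array([1.0, 1.0]) / np.sqrt(2)
print(swap_test_overlap(psi, phi))  # ~0.5, the exact value of |<psi|phi>|^2
```

More shots sharpen the estimate; on hardware, this shot noise is the price of reading the kernel out of the quantum device.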

A quantum-assisted support vector machine finds useful decision boundaries for small datasets. The kernel of the support vector machine is the inner product of two-mode squeezed states, where the phase of the squeezing encodes the input data.
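This particular kernel can also be computed in closed form, which makes it easy to prototype the whole pipeline classically. The sketch below is my own toy reconstruction, not the paper’s exact experiment: the squeezing magnitude r, the scalar inputs, and the choice of the squared overlap as the kernel are all assumptions. It uses the known overlap of two two-mode squeezed vacuum states and feeds it to scikit-learn’s SVC as a custom kernel:

```python
import numpy as np
from sklearn.svm import SVC

R = 1.0  # fixed squeezing magnitude (a hyperparameter of the feature map)

def overlap(x1, x2, r=R):
    """Inner product of two two-mode squeezed vacuum states whose
    squeezing phases encode the scalar data points x1 and x2."""
    t = np.tanh(r) ** 2
    return (1 - t) / (1 - t * np.exp(1j * (x2 - x1)))

def squeezing_kernel(A, B):
    """Gram matrix of squared overlaps, a positive-semidefinite kernel."""
    return np.abs(overlap(A[:, 0][:, None], B[:, 0][None, :])) ** 2

# toy dataset: two classes of phases clustered around 0 and pi
rng = np.random.default_rng(7)
X = np.concatenate([rng.normal(0.0, 0.3, 20),
                    rng.normal(np.pi, 0.3, 20)]).reshape(-1, 1)
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel=squeezing_kernel).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

On a real continuous-variable device, the only change would be to replace squeezing_kernel with overlap estimates measured on hardware, for example as a Gram matrix passed to SVC(kernel='precomputed').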

But the story does not end there. Quantum computing can actually be used to analyze data directly in feature space, without relying on the convenient detour via kernels. This idea has been successfully used for quantum-inspired machine learning with tensor networks (check out this great paper [2] and its successors), and now we want real quantum systems to do the job. For this, we use a variational circuit to define a linear model in Hilbert space.

To explain this in more detail, consider as an example the binary classification problem of the figure above, where we have to draw a line, a decision boundary, between two classes of data. We can encode a data point x into a quantum state |ϕ(x)⟩, which effectively maps it to a vector in Hilbert space. In a continuous-variable system, this is a vector in the infinite-dimensional Fock space. A unitary transformation W applied to the quantum state is nothing other than a linear model with respect to that vector. With a bit of post-processing, W defines a linear decision boundary, or hyperplane, that separates the data in Hilbert space. From support vector machines, we know that a linear model is very well suited to analyzing data in a feature space.
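To spell out why this is a linear model (a sketch of the standard argument, where M stands for whatever observable is measured after the circuit):

```latex
f(x) \;=\; \langle \phi(x) | \, W^\dagger M W \, | \phi(x) \rangle
     \;=\; \mathrm{Tr}\!\left[ \left( W^\dagger M W \right) \, |\phi(x)\rangle\langle\phi(x)| \right]
```

The output is linear in the embedded data |ϕ(x)⟩⟨ϕ(x)|, with the circuit and measurement together playing the role of the weight vector; training the circuit means moving the hyperplane.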

We can make the circuit depend on a set of parameters, W = W(θ), and train it to find the best linear decision boundary. These variational circuits have recently become a booming area of research in quantum machine learning [3, 4, 5]. With the theory of kernel methods, the approach of training circuits is enriched by a theoretical interpretation that can guide our attempts to build powerful classifiers.
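Here is a minimal NumPy sketch of such a training loop. Everything in it is a toy stand-in rather than the paper’s construction: the feature map is a truncated coherent-state encoding, the circuit W(θ) is generated by fixed random Hermitian matrices, and gradients are taken by finite differences:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(0)
D, K = 4, 6  # truncated Fock dimension and number of parameters (toy choices)

# fixed random Hermitian generators for the variational circuit
G = [rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)) for _ in range(K)]
G = [(g + g.conj().T) / 2 for g in G]

def phi(x):
    """Toy feature map: truncated coherent-state amplitudes x^n / sqrt(n!)."""
    v = np.array([x ** n / np.sqrt(factorial(n)) for n in range(D)])
    return v / np.linalg.norm(v)

def W(theta):
    """Variational circuit W(theta) = exp(-i * sum_k theta_k G_k)."""
    return expm(-1j * sum(t * g for t, g in zip(theta, G)))

def prob(theta, x):
    """Probability of finding the first basis state after the circuit."""
    return np.abs(W(theta) @ phi(x))[0] ** 2

def loss(theta, X, y):
    """Squared error between measured probabilities and class labels."""
    return np.mean([(prob(theta, x) - t) ** 2 for x, t in zip(X, y)])

# toy one-dimensional dataset
X = np.array([0.1, 0.2, 0.3, 1.5, 1.8, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

theta, eps, lr = np.zeros(K), 1e-4, 0.5
for _ in range(200):  # plain gradient descent with finite-difference gradients
    grad = np.array([(loss(theta + eps * np.eye(K)[k], X, y) -
                      loss(theta - eps * np.eye(K)[k], X, y)) / (2 * eps)
                     for k in range(K)])
    theta -= lr * grad

print("loss:", loss(theta, X, y))
print("predictions:", [int(prob(theta, x) > 0.5) for x in X])
```

On hardware, the finite-difference step would be replaced by estimating each loss value from measurement statistics, but the structure of the loop stays the same.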

A quantum circuit (top) and its graphical representation as a neural network (bottom). Encoding a data point into optical modes maps it to an infinite-dimensional vector, which can be interpreted as the hidden layer of a neural network. A variational quantum circuit together with measurements can then be used to extract two outputs from this layer, which are further processed into a binary prediction.

To summarize, using the Hilbert space of a quantum system for data analysis gives us a theoretical framework that can guide the development of quantum machine learning algorithms. It defines a potential route to demonstrating so-called “quantum supremacy” for real-life applications. Whether we can find cases in which this approach leads to useful classifiers is an exciting open question.

[1] B. Schölkopf and A. Smola, Learning with Kernels, MIT Press, Cambridge, MA (2002).

[2] E. M. Stoudenmire and D. J. Schwab, Advances in Neural Information Processing Systems 29, pp. 4799–4807 (2016).

[3] G. Verdon, M. Broughton, and J. Biamonte, arXiv:1712.05304 (2017).

[4] E. Farhi and H. Neven, arXiv:1802.06002 (2018).

[5] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, arXiv:1803.00745 (2018).
