Towards Quantum Machine Learning

Kartik Anand
5 min read · Aug 11, 2022


Quantum Computing: when and how it all started, and how it is slowly conquering the Turing world:


Quantum theory has proved remarkably successful at explaining the true nature of the universe, and this has motivated researchers to rethink many aspects of their disciplines from a new, rather complicated, and interesting perspective. Still, it wasn't until the 1980s that quantum information became an independent research field.

In 1979, Paul Benioff published a paper arguing that a computing machine with quantum behavior could be built, followed shortly by Yuri Manin's book "Computable and Uncomputable".

In 1981, Richard P. Feynman gave an interesting talk, "Simulating Physics with Computers", in which he argued that nature, being quantum mechanical, cannot be efficiently simulated by classical computers, and then laid out some features a quantum computer must have in order to be useful. Reading that talk gives a sense of just how visionary Feynman was.

With this push by Feynman, Manin, and Benioff, researchers started investigating the algorithms that would run on these machines. In a 1985 paper (a good read), David Deutsch introduced an algorithm demonstrating the advantage such a machine might possess over a classical one. He later generalized this algorithm with Richard Jozsa.

But it was soon found that, if small errors were tolerated, the same problem could be solved efficiently by a classical computer as well. A new algorithm introduced in a 1993 paper by Bernstein and Vazirani showed a clear separation between classical and quantum computation even when small errors were allowed; in the same paper, they also described a quantum version of the Fourier transform. Shortly after came Daniel Simon's problem, which a quantum computer solves with a linear number of queries while any classical computer requires exponentially many. Just prior to this, Seth Lloyd had laid out how a quantum computer could actually be built.

Then came Peter Shor. He studied the previous work on quantum algorithms and realized he could combine these ideas into a factorization algorithm, which he published in 1994 (the security of RSA cryptography rests on the hardness of factoring, so an efficient quantum factoring algorithm would break it). This major breakthrough, having shown just how powerful quantum computers can be, motivated researchers all over the world to look for more quantum algorithms.

Fast forward twenty years, and a new field came into existence with a name fancier than probably anything before it: Quantum Machine Learning. Although a lot of foundational work had been laid since around 1995, the name was first used in the literature in a paper by Lloyd, Mohseni, and Rebentrost: "Quantum Principal Component Analysis". Since 2013, interest in this literature has increased significantly, and it has become a subdiscipline of quantum computing in its own right; below is a timeline graph showing this growth:

Taken from “Supervised Learning with Quantum Computers” — Schuld, Petruccione

Four Approaches to Quantum Machine Learning:

A taxonomy was introduced by Aimeur, Brassard, and Gambs in their paper "Machine Learning in a Quantum World" (published in Advances in Artificial Intelligence), distinguishing whether the data is assumed to be generated by a quantum (Q) or a classical (C) system, and whether the learning device is quantum (Q) or classical (C).

CC (Classical data generation, Classical learning device)

This is obviously just the conventional approach to machine learning, but in this context it refers to machine learning based on methods borrowed from quantum information, one example being applications of tensor networks. There are other such 'quantum-inspired' machine learning models as well, with varying degrees of foundation in rigorous quantum theory; a toy sketch is given below.
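To make the 'quantum-inspired' idea slightly more concrete, here is a minimal sketch of a tensor-network-style model in plain NumPy: each feature is mapped to a small local vector, and these vectors are contracted through a chain of tensors, loosely following the matrix-product-state models used in this literature. The shapes, the feature map, and all names here are illustrative assumptions, not a reference implementation.

```python
import numpy as np

# Toy 'quantum-inspired' model: a matrix-product-state-style contraction.
# Each feature is embedded into a 2-dim local vector, then contracted
# through a chain of (2, D, D) tensors to produce a scalar score.

def feature_map(x):
    # Map a scalar feature in [0, 1] to a 2-dim vector (a common choice
    # in the tensor-network ML literature).
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def mps_score(features, tensors):
    # tensors: one array of shape (2, D, D) per feature, contracted
    # left to right starting from a uniform boundary vector.
    bond_dim = tensors[0].shape[1]
    message = np.ones(bond_dim) / np.sqrt(bond_dim)
    for x_i, site_tensor in zip(features, tensors):
        site_matrix = np.einsum('p,pij->ij', feature_map(x_i), site_tensor)
        message = message @ site_matrix
    return message.sum()

rng = np.random.default_rng(0)
n_features, bond_dim = 4, 3
tensors = [rng.normal(size=(2, bond_dim, bond_dim)) for _ in range(n_features)]
print(mps_score(rng.random(n_features), tensors))  # scalar 'score' for one sample
```

In practice the site tensors would be trained (e.g. by gradient descent) against a loss; the sketch only shows the forward contraction that makes such a model 'tensor-network-like'.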

QC (Quantum data generation, Classical learning device)

Again, pretty straightforward from the name: this approach targets how classical machine learning can help with quantum computing.

To state some examples:

i) Studying the internal state of a quantum computer with the minimum number of measurements possible; the measurement data can then be analyzed with conventional machine learning methods.

ii) Discriminating between quantum states emitted by a source, or between transformations executed by an experimental setup, both of which have a lot of applications (a toy sketch of state discrimination from measurement data follows below).
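As a toy illustration of the QC setting, the sketch below has a classical learner discriminate between two single-qubit sources purely from their measurement statistics. The measurement probabilities, shot counts, and the choice of logistic regression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# QC toy example: classical ML discriminating two quantum 'sources'
# from computational-basis measurement statistics alone. Source A and
# source B are single-qubit states with different |1> probabilities.

rng = np.random.default_rng(42)

def measurement_features(p_one, shots, n_samples):
    # Each sample is the observed fraction of '1' outcomes in one batch.
    return rng.binomial(shots, p_one, size=(n_samples, 1)) / shots

p_a, p_b = 0.40, 0.55                    # |<1|psi>|^2 for the two sources
X = np.vstack([measurement_features(p_a, shots=100, n_samples=500),
               measurement_features(p_b, shots=100, n_samples=500)])
y = np.array([0] * 500 + [1] * 500)      # which source produced each batch

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```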

CQ (Classical data generation, Quantum learning device)

This is the most heavily investigated and most actively researched of the four approaches. It aims at achieving significant speed-ups in machine learning tasks by running quantum versions of traditional machine learning algorithms on classical data, which is encoded into a quantum computer using certain techniques (an active area of research) before the algorithm is executed. This requires a quantum-classical interface, a challenge that is being actively researched and will be discussed later in this series; a minimal sketch of one encoding technique is shown below.
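As one concrete example of such an encoding technique, here is a minimal sketch of 'angle encoding' written in plain NumPy rather than a quantum SDK: each classical feature sets the rotation angle of one qubit, and the resulting state vector is the tensor product of the rotated qubits. The feature values and function names are illustrative assumptions; a real CQ pipeline would build the same circuit in a framework such as Qiskit or PennyLane.

```python
import numpy as np

# Minimal sketch of 'angle encoding': each classical feature x_i becomes
# the rotation angle of one qubit, RY(x_i)|0>, and the full register is
# the tensor product of those single-qubit states.

def ry_state(theta):
    # Single-qubit state RY(theta)|0> = [cos(theta/2), sin(theta/2)].
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def angle_encode(features):
    # Tensor product over qubits -> 2**n amplitudes for n features.
    state = np.array([1.0])
    for x in features:
        state = np.kron(state, ry_state(x))
    return state

x = np.array([0.1, 0.7, 1.3])        # three classical features (illustrative)
psi = angle_encode(x)
print(psi.shape)                      # (8,): amplitudes of a 3-qubit state
print(np.sum(np.abs(psi) ** 2))       # normalized, sums to 1
```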

The strategies for data mining with quantum algorithms range from translating classical machine learning models into quantum algorithms to genuinely new models derived from the working principles of quantum computers.

I will mostly be concerned with supervised learning in this series, but will also publish some posts on unsupervised learning in parallel (Quantum Generative Adversarial Networks, Quantum Autoencoders, Quantum Boltzmann Machines, and much more to come).

QQ (Quantum data generation, Quantum Learning device)

You get the idea — quantum data is processed on a quantum device.

This can play out in two scenarios:

  • Quantum data is derived from measurements of a quantum state in a physical experiment and then passed to a separate quantum processing/learning device.
  • A quantum device is used to simulate a quantum system, and the state of that system is fed as input to a quantum machine learning algorithm executed on the same device. This way the quantum algorithm has immediate access to the quantum state, resulting in an exponential speed-up by design.

This direction is indeed quite interesting; however, very few results have been published in it so far. There are a number of interesting open problems in this approach that will not be discussed in this series, but some side posts will keep coming (wink, wink).

I think that's it for now; this was just an intro to give a little motivation for the kind of black magic we will be dealing with in this series. I will wrap up by quoting something on the QQ approach from a 2016 paper by Dunjko, Taylor, and Briegel, "Quantum-Enhanced Machine Learning":

…Here, the interaction can be fully quantum, and even the question of what it means “to learn” becomes problematic as, for instance, the agent and environment may become entangled…

— Kartik Anand

Leave a comment below if you have any questions, feedback, suggestions, or corrections. I will soon be back with another blog in the series, or perhaps a research paper review as a side blog. Until then, keep learning and stay hydrated, folks!
