Structural risk minimization for quantum linear classifiers

Quantumsumm
3 min read · Jun 27, 2023

By Casper Gyurik, Dyon van Vreumingen, and Vedran Dunjko

Published in Quantum on 2023-01-13.

Check out my podcast!

https://podcasters.spotify.com/pod/show/quantumsumm/episodes/Structural-risk-minimization-for-quantum-linear-classifiers-e268df0

This paper studies how to balance training accuracy against generalization performance (a balance also called structural risk minimization) for two prominent QML models: the quantum linear classifiers introduced by Havlíček et al. (Nature, 2019) and by Schuld and Killoran (PRL, 2019).

The authors first use relationships to well-understood classical models to prove that two model parameters (the dimension of the sum of the images of the data under the quantum feature map, and the Frobenius norm of the observables used by the model) closely control the models' complexity and therefore their generalization performance. They then use ideas inspired by process tomography to prove that these same parameters also closely control the models' ability to capture correlations in sets of training examples.
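
For concreteness, a quantum linear classifier of this kind embeds a data point x into a quantum feature state and labels it by the sign of an expectation value. In the shorthand used here (my notation for illustration, not necessarily the paper's), with feature state ρ(x) and Hermitian observable O:

```latex
h(x) \;=\; \operatorname{sign}\!\big(\operatorname{Tr}\!\left[\rho(x)\,O\right]\big)
```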

Frobenius Norm

The Frobenius norm of a matrix is the square root of the sum of the squared absolute values of its entries. Equivalently, it is the Euclidean vector norm applied to the matrix's entries, which is why it is sometimes described as a vector norm for matrices. It is used throughout numerical linear algebra and machine learning.
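
In symbols (the standard definition, stated here for reference):

```latex
\|A\|_F \;=\; \sqrt{\sum_{i,j} |a_{ij}|^2} \;=\; \sqrt{\operatorname{Tr}\!\left(A^\dagger A\right)}
```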

The results give rise to new options for structural risk minimization in QML. By controlling the complexity of the model, one can systematically navigate the trade-off between fitting the training data and generalizing to unseen examples. This is an important step towards developing QML models that can be used in real-world applications.

The authors then use this result to show how the Frobenius norm can be used to control the complexity of quantum linear classifiers. They propose two methods (sketched in code after this list):

  • Method 1: choose the observable used by the classifier so that its Frobenius norm is small, i.e., restrict training to a norm-constrained family of observables. A smaller norm means a less complex hypothesis class and, by the paper's bounds, better generalization.
  • Method 2: add a regularization term to the classifier's loss function that penalizes the Frobenius norm of the observable, so that training itself balances fit against complexity.
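
Below is a minimal numerical sketch of both options, assuming a toy setting where the feature states are given as density matrices and the observable is a trainable Hermitian matrix. The function names, the squared-hinge loss, and the hyperparameters lam and max_norm are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def predict(O, rho):
    """Quantum linear classifier: label = sign of the expectation value Tr[rho O]."""
    return np.sign(np.real(np.trace(rho @ O)))

def frobenius(O):
    """Frobenius norm ||O||_F = sqrt(Tr(O^dagger O))."""
    return np.linalg.norm(O, "fro")

# Method 1 (hard constraint): rescale the observable onto a Frobenius-norm ball.
def project_to_norm_ball(O, max_norm):
    n = frobenius(O)
    return O if n <= max_norm else O * (max_norm / n)

# Method 2 (soft constraint): penalize the Frobenius norm in the training loss.
def regularized_loss(O, rhos, labels, lam):
    """Mean squared-hinge loss on the margins y * Tr[rho O], plus lam * ||O||_F^2."""
    margins = np.array([y * np.real(np.trace(rho @ O)) for rho, y in zip(rhos, labels)])
    hinge = np.mean(np.maximum(0.0, 1.0 - margins) ** 2)
    return hinge + lam * frobenius(O) ** 2
```

Training would then minimize regularized_loss over the entries of O (e.g., by gradient descent), or alternate gradient steps with the projection of Method 1; a larger lam or a smaller max_norm yields a simpler, more strongly regularized classifier.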

The authors then validate their theoretical results with numerical simulations, showing that the Frobenius norm can indeed be used to control the complexity of quantum linear classifiers in practice, and that doing so can improve generalization performance.

The main findings are:

  • The Frobenius norm of the observable used by a quantum linear classifier is closely related to the complexity of the classifier.
  • That complexity can therefore be controlled by constraining or minimizing the observable's Frobenius norm.
  • Doing so can lead to improved generalization performance.

The paper’s findings have important implications for the development of QML models. By understanding how to control the complexity of these models, it is possible to improve their generalization performance and make them more suitable for real-world applications.

https://quantum-journal.org/papers/q-2023-01-13-893/

Originally published at http://quantumsumm.wordpress.com on June 27, 2023.

Quantumsumm

Quantum computing paper summaries: I try to give more detail than an abstract while keeping it in plain English.