QML Day-3 : How does Mathematics help in the field of Machine Learning and Quantum Computing?

Shoaib Attar
5 min read · Aug 3, 2023


Machine learning and quantum computing are two of the most interesting and fast-moving fields of the 21st century, used to solve extremely complex problems that were once thought impossible. Both of these miracles of modern science have their foundation in mathematics, so in this article we will dive into the amazing mathematics behind them.

Mathematics behind Machine Learning

Machine learning is founded on mathematics: concepts from statistics, calculus, probability and linear algebra make up its building blocks, and training an ML model means putting these concepts to work.

Statistics

Statistics is the branch of mathematics concerned with visualising data and drawing useful information from it, which makes it crucial in machine learning.

Machine learning starts with sample data, which is the input to the model. This data is then projected into a higher-dimensional feature space, and statistical methods are used to draw a relationship between the dependent and independent variables, which can then be used for predictive analysis of the required output. As we saw in the previous article, supervised learning involves training a model on sampled inputs together with sampled outputs, where statistical methods such as linear regression and polynomial regression can be used to model continuous dependent variables.

For example, here is a linear regression of data in which salary (the dependent variable, Y) is predicted from work experience (the independent variable, X).
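A fit like the one in the graph above can be sketched in a few lines of NumPy. The salary figures below are hypothetical and purely illustrative:

```python
import numpy as np

# Hypothetical sample: years of work experience (X) and salary (Y, in thousands)
X = np.array([1, 2, 3, 4, 5], dtype=float)
Y = np.array([35, 42, 50, 58, 65], dtype=float)

# Least-squares fit of Y = m*X + c
A = np.vstack([X, np.ones_like(X)]).T
m, c = np.linalg.lstsq(A, Y, rcond=None)[0]

print(f"slope m = {m:.2f}, intercept c = {c:.2f}")
print(f"predicted salary for 6 years of experience: {m * 6 + c:.1f}")
```

Once the slope and intercept are learned from the sample, the same line predicts salaries for experience values that were never in the data.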

Calculus

Calculus is extremely important for understanding how data is changing at a given point. This power of calculus is very helpful in machine learning for determining the optimal values of parameters that minimise the error measured by a cost function. The concept of gradient descent is used here: the cost function is partially differentiated with respect to the required parameters, which helps in finding its minimum. Plugging these values back into the cost function then gives the least error.
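The idea can be seen on a deliberately simple cost function. Here is a minimal sketch, using J(w) = (w − 3)², whose derivative dJ/dw = 2(w − 3) vanishes at the minimum w = 3:

```python
# Gradient descent on the toy cost function J(w) = (w - 3)**2.
# Its derivative dJ/dw = 2*(w - 3) is zero exactly at the minimum w = 3.
def grad(w):
    return 2 * (w - 3)

w = 0.0    # initial guess for the parameter
lr = 0.1   # learning rate
for _ in range(200):
    w -= lr * grad(w)  # step against the gradient

print(round(w, 4))  # converges toward the minimum at w = 3
```

Real cost functions depend on many parameters at once, but the update rule is the same: differentiate, then step against the gradient.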

W is the weight parameter and b is the bias.

For logistic regression, these parameters are then minimised using gradient descent, where α is the learning rate and J is the cost function:

W := W − α·∂J/∂W (the weight parameter's gradient descent update)

b := b − α·∂J/∂b (the bias parameter's gradient descent update)

After plugging in the appropriate values, we can make the required predictions.

Probability

Probability is extremely useful in the field of machine learning, since many models recognise patterns by reasoning about probabilities. In unsupervised learning only input data is fed to the model, so the model has to identify patterns in the data on its own, and probabilistic reasoning underpins how candidate patterns are evaluated. Probabilistic methods such as logistic regression, a classification technique for categorical dependent variables, are likewise built directly on probability.

For example, given data on students and the hours they studied, we can predict their chances of passing an exam. The output (pass or fail) is discrete and categorical rather than continuous, which makes this a classification problem and a natural fit for logistic regression.

Logistic regression curve of a student's chances of passing the exam
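The curve above can be reproduced by training logistic regression with the gradient descent updates from the calculus section. The study-hours data below is hypothetical:

```python
import numpy as np

# Hypothetical data: hours studied, and pass (1) / fail (0) outcomes
hours = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
passed = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit w and b by gradient descent on the cross-entropy cost
w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    p = sigmoid(w * hours + b)          # predicted pass probability
    dw = np.mean((p - passed) * hours)  # dJ/dw
    db = np.mean(p - passed)            # dJ/db
    w -= lr * dw
    b -= lr * db

print(f"P(pass | 4 hours studied) = {sigmoid(w * 4 + b):.2f}")
```

The model outputs a probability between 0 and 1 for each number of hours, and thresholding that probability at 0.5 gives the pass/fail prediction.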

Thus probability can be very helpful in building such machine learning models.

Linear Algebra

As we said previously, machine learning involves projecting sampled data into a higher-dimensional feature space. To model such a feature space, where multiple variables are operated on simultaneously, linear algebra really helps: it lets us handle multivariate feature spaces in an efficient manner. Calculations in higher dimensions use 'tensors', a very useful tool of linear algebra. For example, a Support Vector Machine (SVM) is a supervised learning model that finds a hyperplane which clearly separates the data, and these higher-dimensional calculations would not be possible without linear algebra.

Tensors
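In NumPy, tensors of any rank are just multi-dimensional arrays, and a linear map can be applied to a whole batch of feature vectors at once. The dimensions below are arbitrary, chosen only to illustrate the shapes:

```python
import numpy as np

# Tensors as arrays of increasing rank
scalar = np.array(5.0)              # rank 0
vector = np.array([1.0, 2.0, 3.0])  # rank 1
matrix = np.eye(3)                  # rank 2
tensor = np.ones((2, 3, 4))         # rank 3

print(scalar.ndim, vector.ndim, matrix.ndim, tensor.ndim)  # 0 1 2 3

# A linear map applied to many samples simultaneously:
# project 4-dimensional feature vectors into a 2-dimensional space
samples = np.random.rand(10, 4)   # 10 samples, 4 features each
projection = np.random.rand(4, 2) # an illustrative linear map
projected = samples @ projection
print(projected.shape)  # (10, 2)
```

This is exactly the kind of bulk higher-dimensional computation that models like SVMs rely on.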

Mathematics behind Quantum Computing

Quantum computing involves qubits, which are transformed and manipulated according to the required operations. Each qubit has a quantum state that lives in a Hilbert space (a type of vector space), and to perform operations on qubits we apply unitary transformations, which keep the state in the same Hilbert space by preserving its norm.

Qubits are represented through bra-ket notation, where bras correspond to row vectors and kets to column vectors.

Bra-Ket Notation

Each basis state can also be represented as a vector, which adds new insight into these representations.

Each qubit can exist in a superposition of multiple states, and the squared magnitude of each state's coefficient gives the probability of measuring that state:

|Ψ⟩ = α|0⟩ + β|1⟩

Here Ψ is in a superposition of the |0⟩ and |1⟩ quantum states, with the probability of measuring |0⟩ being |α|² and of measuring |1⟩ being |β|².
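These ideas can be checked numerically. Below is a small sketch: a qubit state as a column vector, the Hadamard gate as an example of a unitary transformation, and measurement probabilities as squared magnitudes of the amplitudes:

```python
import numpy as np

# A qubit state |psi> = alpha|0> + beta|1> as a column vector
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = np.array([[alpha], [beta]], dtype=complex)

# The state is normalised: |alpha|^2 + |beta|^2 = 1
assert np.isclose(np.abs(alpha)**2 + np.abs(beta)**2, 1.0)

# The Hadamard gate is unitary: H @ H^dagger = I
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
assert np.allclose(H @ H.conj().T, np.eye(2))

# Applying it keeps the state in the same Hilbert space (norm preserved)
psi_new = H @ psi
print(np.abs(psi_new.flatten())**2)  # measurement probabilities
```

Because the transformation is unitary, the output amplitudes still square-sum to 1, so they remain valid measurement probabilities.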

In this way linear algebra and probability also make the foundation for Quantum computing.

Conclusion:

Mathematics is crucial in formulating both machine learning and quantum computing, and mathematical tools make the foundations of these ground-breaking technologies even more powerful.

This was day 3 of my 30-day quantum challenge by QuantumComputingIndia.

#quantum30 #quantumcomputing

Sources:

Book: Chris Bernhardt. 2019. Quantum Computing for Everyone. The MIT Press.

YouTube:

  1. https://www.youtube.com/watch?v=8onB7rPG4Pk
  2. https://www.youtube.com/watch?v=IrbJYsep45E&t=348s

Wikipedia:

https://en.wikipedia.org/wiki/Logistic_regression?wprov=sfla1
