I decided to start building a kind of “master index” containing short descriptions of all my Medium articles.
In the article Deep Learning Explainability: Hints from Physics, I show that deep learning and renormalization group theory are deeply interconnected. More specifically, I describe in some detail a recent article showing that deep neural networks seem to “mimic” the zooming-out process that characterizes the renormalization group.
Deep Learning Explainability: Hints from Physics
Deep Neural Networks from a Physics Viewpoint
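As a toy illustration of that zooming-out step (my own sketch, not code from the article), here is the classic real-space block-spin decimation applied to an Ising spin configuration — each block of spins is replaced by a single coarse spin via majority rule:

```python
import numpy as np

def block_spin(config, b=2):
    """Coarse-grain an Ising spin configuration by majority rule:
    each b x b block of spins is replaced by the sign of its sum
    (ties broken toward +1), halving the lattice resolution --
    the 'zooming-out' step of a real-space renormalization group."""
    L = config.shape[0]
    block_sums = config.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))   # random 8 x 8 spin lattice
coarse = block_spin(spins)                 # 4 x 4 coarse-grained lattice
print(spins.shape, "->", coarse.shape)
```

Iterating this map and watching how the effective couplings flow is the RG procedure that, according to the paper discussed in the article, deep networks appear to emulate layer by layer.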
In the article Neural Quantum States, I discuss some recent research at the interface between machine learning and theoretical physics. I describe how Restricted Boltzmann Machines (RBMs), the building blocks of deep belief networks, can be used to compute, with extremely high accuracy, the lowest-energy (ground) state of many-particle quantum systems (among other things).
Neural Quantum States
How neural networks can solve highly complex problems in quantum mechanics
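To make the RBM ansatz concrete, here is a minimal sketch (my own, with arbitrary random parameters) of the neural-quantum-state wavefunction: after tracing out the hidden units analytically, the RBM assigns an unnormalized amplitude to each spin configuration. In actual calculations these parameters are optimized variationally to minimize the energy.

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized RBM wavefunction amplitude for a spin configuration
    s in {-1, +1}^n, with the hidden units summed out exactly:
        psi(s) = exp(a . s) * prod_j 2*cosh(b_j + (W^T s)_j)
    """
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W.T @ s))

rng = np.random.default_rng(1)
n_visible, n_hidden = 4, 8           # small system, for illustration only
a = rng.normal(scale=0.1, size=n_visible)        # visible biases
b = rng.normal(scale=0.1, size=n_hidden)         # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # couplings

s = np.array([1, -1, 1, 1])
print(rbm_amplitude(s, a, b, W))     # one (unnormalized) amplitude
```

Note that with real parameters every factor is positive, so this toy version can only represent sign-free ground states; complex parameters lift that restriction.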
In Machine Learning and Particle Motion in Liquids: An Elegant Link, I argue, based on recent findings, that viewing the stochastic gradient descent algorithm (or mini-batch gradient descent) as a Langevin stochastic process with an extra level of randomization, implemented via the learning rate, helps explain why stochastic gradient descent works so remarkably well as a global optimizer.
Machine Learning and Particle Motion in Liquids: An Elegant Link
The Langevin Equation as a Global Minimization Algorithm
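The Langevin picture can be sketched in a few lines (a toy example of my own, not code from the article): gradient descent plus injected Gaussian noise — the analogue of mini-batch noise in SGD — lets the iterate hop out of a shallow local minimum and settle into the global one as the noise is annealed away.

```python
import numpy as np

def langevin_minimize(grad, x0, eta=0.01, T0=0.5, steps=20000, rng=None):
    """Overdamped Langevin dynamics with slowly decaying temperature T.
    The Gaussian noise term lets the iterate jump over energy barriers
    before the cooling freezes it into a (hopefully deep) basin."""
    rng = np.random.default_rng() if rng is None else rng
    x, T = x0, T0
    for _ in range(steps):
        x = x - eta * grad(x) + np.sqrt(2 * eta * T) * rng.normal()
        T *= 0.9997  # gentle exponential cooling
    return x

# Tilted double-well potential f(x) = (x**2 - 1)**2 + 0.5*x:
# the minimum near x = -1 is global, the one near x = +1 only local.
grad_f = lambda x: 4 * x * (x**2 - 1) + 0.5

rng = np.random.default_rng(0)
ends = [langevin_minimize(grad_f, x0=1.0, rng=rng) for _ in range(20)]
print(sum(e < 0 for e in ends),
      "of 20 runs started at the local minimum end in the global basin")
```

Plain gradient descent from x0 = 1.0 would stay trapped in the local well forever; here most runs cross the barrier, which is the essence of the argument for SGD's effectiveness as a global optimizer.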
In Connections between Neural Networks and Pure Mathematics, I argue that a few powerful theorems about nested/composite functions, proved by Kolmogorov (1957), Arnold (1958), and Sprecher (1965), help explain why neural networks can be used to represent almost any process in nature.
Connections between Neural Networks and Pure Mathematics
How an esoteric theorem gives important clues about the power of Artificial Neural Networks
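For reference, the Kolmogorov–Arnold superposition theorem states that every continuous function of n variables on the unit cube can be built from one-variable continuous functions and addition alone:

```latex
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

Loosely speaking, the nested sums of one-variable functions resemble the layered structure of a feed-forward network, which is the connection the article explores.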
In The Approximation Power of Neural Networks, I discuss the mathematical apparatus underlying the famous representation theorems for artificial neural networks.
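A minimal numerical illustration of what those theorems promise (my own sketch, not code from the article): a single hidden layer of tanh units, trained by plain full-batch gradient descent with hand-written backpropagation, approximating sin(x) on an interval.

```python
import numpy as np

rng = np.random.default_rng(0)
# Target: a smooth 1-D function to approximate on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x)

# One hidden layer of tanh units -- the setting of the classic
# universal approximation theorems.
H = 20
W1 = rng.normal(scale=1.0, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # network output
    err = pred - y                      # gradient of MSE w.r.t. pred (up to 2)
    # Backpropagation written out by hand:
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)**2
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

The theorems guarantee that, with enough hidden units, such a network can drive this error as low as desired for any continuous target; they say nothing, however, about how hard the training problem is.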