Is our brain a Bayesian or a Deterministic NN?

Purvanshi Mehta
Published in The Startup · 2 min read · Feb 4, 2020


Before reading ahead: I am in no way claiming that our brain is a neural network. I only ASSUME that a neural network is the simplest approximation of the neuronal connections in our brain.

This blog post tries to answer the question of whether our brain learns point estimates or whether it learns a distribution over weights for computation.

Synapses

A synapse is a functional connection between two neurons. The synapse gathers information from multiple stimuli, combines it, and transmits it to other neurons. This is basically what a single node in an MLP does.
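To make that analogy concrete, here is a minimal sketch (my own illustration, not from the paper) of a single MLP node: each incoming connection plays the role of a synapse holding a single point-estimate weight.

```python
import numpy as np

def node(inputs, weights, bias):
    """One MLP node: combine weighted inputs from multiple 'synapses'
    and pass the result through a nonlinearity (here, ReLU)."""
    pre_activation = np.dot(weights, inputs) + bias
    return np.maximum(0.0, pre_activation)

# Toy usage: three incoming stimuli, each with its own point-estimate weight.
x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
print(node(x, w, bias=0.1))
```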

It was long believed in the neuroscience community that synapses learn point estimates for computation. But the paper Probabilistic Synapses by Aitchison et al. proposed, for the first time, that in a high-noise regime keeping track of probability distributions rather than point estimates is the optimal way of learning.

Probabilistic Synapses

The synapses keep track of two variables, mean and variance, thus learning a log-normal distribution for computation. The paper presents two main ideas:

  1. Bayesian Plasticity
  2. Synaptic Sampling

Bayesian Plasticity

They show that synapses keep track of full distributions (mean and variance), not just the mean value of the weights. The second concept is that the learning rate of a synapse is directly proportional to its uncertainty: if a synapse is more uncertain about its prediction, the update to that synapse should be larger than the update to a synapse with lower uncertainty. In the paper's update rule, alpha is the learning rate of the i-th synapse, S is its standard deviation, and the denominator is just a normalizing term.

Figure: how the delta rule (constant learning rate) and the optimal learning rule (learning rate scaled by uncertainty) produce weight updates. Since w2 is more uncertain about its prediction, its weight should change more, and vice versa for w1.
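Here is a rough numerical sketch of that contrast (my own illustration; the exact rule in the paper differs): the plain delta rule gives every synapse the same learning rate, while the uncertainty-guided rule scales each synapse's update by its own variance, normalized over all synapses.

```python
import numpy as np

# Current weight means and uncertainties for three synapses (made-up numbers).
mu = np.array([0.5, -0.2, 0.1])     # mean weight per synapse
s  = np.array([0.05, 0.40, 0.10])   # uncertainty (standard deviation) per synapse
x  = np.array([1.0, 1.0, 1.0])      # presynaptic input
error = 0.3                          # prediction error (target - output)

# Delta rule: one constant learning rate shared by every synapse.
lr = 0.1
delta_update = lr * error * x

# Uncertainty-guided rule (sketch): per-synapse learning rate proportional to
# that synapse's variance, with the summed variance as a normalizing term,
# so uncertain synapses (like w2 above) move more than confident ones.
alpha = s**2 / np.sum(s**2)
uncertainty_update = alpha * error * x

print("delta rule update:        ", delta_update)
print("uncertainty-guided update:", uncertainty_update)
```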

Synaptic Sampling

This is the theory of how weights are sampled from the learned distribution, and of how a synapse's uncertainty should fall as the presynaptic firing rate increases: the more often the presynaptic neuron fires, the more evidence the synapse receives about its weight, and the more certain it becomes.
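A minimal sketch of both points with made-up numbers (an illustration of the idea, not the paper's actual model): the effective weight on each trial is a sample from a log-normal distribution, and a simple conjugate Gaussian update shows the posterior uncertainty shrinking as the presynaptic neuron provides more spikes of evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synaptic sampling: the weight used on each trial is drawn from a
# log-normal distribution with parameters (mu, sigma).
mu, sigma = 0.0, 0.5
sampled_weights = rng.lognormal(mean=mu, sigma=sigma, size=5)
print("sampled weights:", sampled_weights)

# Uncertainty vs. presynaptic firing (sketch): with a Gaussian prior on the
# log-weight and one noisy observation per presynaptic spike, the posterior
# variance shrinks as the number of spikes grows.
prior_var = sigma**2
obs_noise_var = 0.25
for n_spikes in [1, 10, 100]:
    posterior_var = 1.0 / (1.0 / prior_var + n_spikes / obs_noise_var)
    print(f"{n_spikes:4d} presynaptic spikes -> posterior std "
          f"{np.sqrt(posterior_var):.3f}")
```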

Relation to Bayesian Neural Networks

The whole field of Bayesian deep learning argues for the importance of learning uncertainties along with the weights. This paper provides support for that view from the neuroscience side.

A paper relating the learning rate to uncertainty was recently accepted at ICLR: Uncertainty-guided Continual Learning with Bayesian Neural Networks. It also treats uncertainty as a measure of a weight's importance and performs continual learning by regularizing (restricting changes to) weights with low uncertainty.
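Here is a rough sketch of that regularization idea (my own simplification, not the exact objective from the ICLR paper): when training on a new task, penalize changes to weights in proportion to their precision (1 / variance), so low-uncertainty, important weights stay close to their old values while uncertain ones remain free to adapt.

```python
import numpy as np

def uncertainty_regularizer(new_w, old_w, old_std, strength=1.0):
    """Quadratic penalty on weight changes scaled by precision (1 / variance):
    confident (low-uncertainty) weights from the previous task are expensive
    to move, uncertain ones stay free to adapt to the new task."""
    precision = 1.0 / old_std**2
    return strength * np.sum(precision * (new_w - old_w) ** 2)

# Toy usage with made-up numbers.
old_w   = np.array([0.5, -0.2, 0.1])
old_std = np.array([0.05, 0.40, 0.10])   # uncertainty learned on the old task
new_w   = np.array([0.7, 0.3, 0.1])      # candidate weights on the new task
print(uncertainty_regularizer(new_w, old_w, old_std))
```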

If you want to know more about the experiments in detail, please refer to my slides or the paper (link above).
