Neural spiking for causal inference and learning

Abish Pius
Computational Biology Papers
4 min read · May 6, 2023


Lansdell, Benjamin James, and Konrad Paul Kording. “Neural spiking for causal inference and learning.” PLOS Computational Biology 19.4 (2023): e1011005.

Quick Summary

The fact that neurons spike when driven beyond their threshold is often seen as a computational limitation. However, this study shows that spiking allows neurons to produce an unbiased estimate of their causal influence and approximate gradient descent-based learning. This spiking mechanism enables neurons to solve causal estimation problems and overcome confounders and downstream non-linearities. The research suggests that spiking neural networks can effectively solve the credit assignment problem and provides insights into the unique function of spiking in learning tasks.

Background

Spiking neural networks, which mimic the behavior of biological neurons, have been challenging to train and often perform less effectively than artificial neural networks with continuous activities. Spiking is typically seen as a disadvantage due to difficulties in propagating gradients through discontinuities. However, this article explores the computational benefits of spiking.

One computational problem that both biological and artificial systems face is the credit assignment problem: determining which activities or weights are responsible for sub-optimal performance. Solving it requires causal estimation: identifying which neurons are truly responsible for performance rather than merely correlated with it. Confounding, where other variables affect both the variable of interest and performance, makes this estimation difficult.

Randomized perturbation, in which neurons occasionally introduce extra spikes or suppress spikes, is a gold-standard approach to causal inference. However, it comes at the cost of degraded performance, and it is unclear how neurons would know their own noise level accurately enough for the estimate to work.

The article suggests an alternative approach based on the spiking discontinuity. By comparing the average reward on trials where it barely spiked (drive just above threshold) against trials where it almost spiked (drive just below), a neuron can estimate its causal effect. Near the threshold, the only systematic difference between the two cases is whether the neuron spiked or not, so any observed difference in reward can be attributed to the neuron's activity. This allows neurons to estimate their causal effect efficiently.
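The marginal-trial comparison can be illustrated with a toy simulation. Everything here (the variable names, the confounder structure, the window width) is an illustrative assumption, not the paper's actual setup: a neuron's drive is confounded with a signal that also affects reward directly, so a naive spike/no-spike comparison is biased, while restricting to trials near threshold recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: the neuron's drive z is correlated with a
# confounder c that also affects the reward directly.
n = 100_000
c = rng.normal(size=n)                 # confounder
z = c + rng.normal(scale=0.5, size=n)  # membrane drive (correlated with c)
threshold = 0.0
spike = (z > threshold).astype(float)  # spiking nonlinearity

true_effect = 2.0
reward = true_effect * spike + 3.0 * c + rng.normal(scale=0.1, size=n)

# Naive estimate: difference in mean reward between spike and no-spike
# trials. Confounding biases this upward, because spiking trials also
# tend to have high c.
naive = reward[spike == 1].mean() - reward[spike == 0].mean()

# Spiking-discontinuity estimate: restrict to marginal trials (drive just
# above vs just below threshold), where c is nearly identical on both
# sides and the only systematic difference is the spike itself.
window = 0.05
just_above = (z > threshold) & (z < threshold + window)
just_below = (z < threshold) & (z > threshold - window)
sd_estimate = reward[just_above].mean() - reward[just_below].mean()

print(f"naive: {naive:.2f}, discontinuity: {sd_estimate:.2f}")
```

The discontinuity estimate lands close to the true effect, while the naive difference absorbs the confounder's contribution as well.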

The proposed method enables neurons to calculate gradients and adjust synaptic strengths, facilitating learning to maximize reward even in the presence of confounded inputs. The discontinuity-based learning rule provides a plausible account of how neurons learn their causal effect, offering insights into the computational benefits of spiking.
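Once a neuron has an estimate of its causal effect, that estimate can stand in for the reward gradient in a weight update. The sketch below is a hypothetical simplification (the function name, learning rate, and local-linearity assumption are mine, not the paper's): treating spike probability as locally linear in the drive w·x, the reward gradient with respect to a weight factors into the causal effect times the presynaptic input.

```python
import numpy as np

def update_weights(w, x, beta, lr=0.01):
    """One gradient-ascent step on reward: dR/dw ~ beta * dS/dw ~ beta * x,
    treating spike probability S as locally linear in the drive w.x."""
    return w + lr * beta * x

w = np.zeros(3)
x = np.array([1.0, 0.5, -0.2])  # presynaptic activity on this trial
beta = 2.0                       # estimated causal effect of a spike on reward
w = update_weights(w, x, beta)
```

Inputs that were active on trials where spiking raised the reward are strengthened in proportion to the estimated effect, which is the sense in which the rule approximates gradient descent.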

Results

1. From dynamic neural networks to probabilistic graphical models. The graphical models describe how a neuron’s causal effect on the reward is formalized in a spiking neural network. The models consider the network’s dynamics, aggregate variables, and interventions to capture the causal relationships and dependencies between the variables.

2. Causal effects and finite-difference approximation of gradients. The spiking discontinuity method estimates a neuron’s causal effect by measuring the jump in the reward signal precisely at its spiking threshold. Because confounders influence the reward smoothly across the threshold, this jump isolates the neuron’s causal effect and serves as a finite-difference approximation of the reward gradient.
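A slightly more refined estimator than simple marginal-window averaging fits a line to the reward on each side of the threshold and takes the difference of the two fits at the threshold itself. This is a sketch under my own assumptions (variable names, window width, and the simulated reward model are illustrative); it shows why the fitted version removes the smooth local trend that plain window averaging leaves in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: reward jumps by 2.0 at the spiking threshold, on top of a
# smooth linear trend in the drive z.
n = 50_000
z = rng.normal(size=n)                  # membrane drive
threshold = 0.0
spike = (z > threshold).astype(float)
reward = 2.0 * spike + 1.5 * z + rng.normal(scale=0.3, size=n)

window = 0.5
below = (z > threshold - window) & (z < threshold)
above = (z > threshold) & (z < threshold + window)

# Least-squares line on each side; evaluating each fit at the threshold
# gives the two one-sided limits of the reward, and their difference is
# the estimated jump.
b_lo = np.polyfit(z[below], reward[below], 1)
b_hi = np.polyfit(z[above], reward[above], 1)
effect = np.polyval(b_hi, threshold) - np.polyval(b_lo, threshold)
print(f"estimated causal effect: {effect:.2f}")  # true jump here is 2.0
```

The linear fits soak up the smooth trend that confounders and the drive itself contribute, so only the discontinuous part, the spike's causal effect, remains in the difference.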

Discussion

The discussion centers on the relationship between gradient-based learning and causal inference in neural networks. The authors propose the “spiking discontinuity” method, which allows neurons to estimate their causal effect using their spiking mechanism; the approach is inspired by the regression discontinuity design used in econometrics.

The authors emphasize that other neural learning rules also perform causal inference. Reinforcement learning algorithms, for example, estimate the effect of an agent’s or neuron’s activity on a reward signal. Causal inference is implicit in reinforcement learning, and various neuromodulators in the brain are known to represent reward or expected reward. These algorithms rely on independent noise, assuming that noise is private to each neuron and uncorrelated with other neurons. However, noise correlation across neurons is observed in the brain, and correlated noise can act as a confounder, making it difficult to determine which neuron’s activity is responsible for changes in reward. The spiking discontinuity method addresses this issue by allowing neurons to estimate their causal effect on the reward signal in an unbiased manner.

The authors also discuss the limitations of perturbation-based methods for large-scale learning problems and the efficiency of backpropagation in solving credit assignment. The spiking discontinuity learning rule is not significantly more efficient than reinforcement learning algorithms, but it is more robust to noise correlations, and it can be used alongside backpropagation-like learning mechanisms to address the credit assignment problem in neural architectures.

Furthermore, the authors note that the spiking discontinuity learning rule provides insights into other biologically plausible spiking learning models, such as those involving spike-based learning paradigms like spike timing-dependent plasticity (STDP). They also discuss the compatibility of the learning rule with known physiological properties of neurons, including irregular spiking regimes, sub-threshold dependent plasticity, and the dependence on neuromodulation.

In summary, the spiking discontinuity method allows neurons to estimate their causal effect on a reward signal in an unbiased way. It can be viewed as a form of causal inference in neural learning and offers insights into the role of noise correlations, compatibility with physiological properties, and its relation to other learning models in the brain.

