New Hinton Nature Paper Revisits Backpropagation, Offers Insights for Understanding Learning in the Cortex

Synced | Published in SyncedReview | Apr 24, 2020

Although Turing awardee and backpropagation pioneer Geoffrey Hinton’s interests have largely shifted to unsupervised learning, he recently co-authored a paper that takes a look back at backpropagation and explores its potential to contribute to understanding how the human cortex learns.

Hinton and a team of researchers from DeepMind, University College London, and University of Oxford published the paper last Friday in Nature Reviews Neuroscience. Their main idea is that biological brains could compute effective synaptic updates by using feedback connections to induce neuron activities whose locally computed differences encode backpropagation-like error signals.

Backpropagation of errors, or backprop, is a widely used algorithm in training artificial neural networks using gradient descent for supervised learning. The basics of continuous backpropagation were proposed in the 1960s, and in 1986 a Nature paper co-authored by Hinton showed experimentally that backprop can generate useful internal representations for neural networks.

A spectrum of learning algorithms

The introduction of backpropagation also generated excitement in the neuroscience community, where it was viewed as a possible source of insight into how the cortex learns. How the cortex modifies synapses to improve the performance of multistage networks remains one of the biggest mysteries in neuroscience.

Although we know that human brains learn by modifying the synaptic connections between neurons, synapses in the cortex are embedded within multi-layered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. In artificial neural networks, backprop solves this problem by computing how a slight change in each synapse's strength affects the network's overall error, using the chain rule of calculus.
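To make the chain-rule computation concrete, here is a minimal sketch in NumPy of backprop through a tiny two-layer network. The network size, squared-error loss, and variable names are illustrative choices for this sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: x -> h = tanh(W1 x) -> y_hat = W2 h
W1 = rng.normal(scale=0.1, size=(4, 3))   # input-to-hidden weights
W2 = rng.normal(scale=0.1, size=(2, 4))   # hidden-to-output weights

def forward(x):
    h = np.tanh(W1 @ x)        # hidden activity
    y_hat = W2 @ h             # linear output
    return h, y_hat

def backprop(x, y):
    """Chain rule: how a small change in each weight changes the squared error."""
    h, y_hat = forward(x)
    e = y_hat - y                          # output error (gradient of 0.5*||y_hat - y||^2)
    dW2 = np.outer(e, h)                   # error times the activity it acted on
    dh = (W2.T @ e) * (1.0 - h ** 2)       # error sent backwards through W2 and tanh
    dW1 = np.outer(dh, x)
    return dW1, dW2

# One gradient-descent step on a single example
x = rng.normal(size=3)
y = rng.normal(size=2)
dW1, dW2 = backprop(x, y)
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
```

The key point is that the update for W1 reuses the error already computed at the output, which is exactly the kind of explicit error transport that is hard to map onto cortical circuits.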

The relevance of backpropagation to the cortex, however, had been in doubt for some time. The method was viewed as biologically problematic, as it was classically described in the supervised learning setting, while the brain is thought to learn mainly in an unsupervised fashion and appears to use its feedback connections for different purposes. Moreover, for decades after it was first proposed, backpropagation failed to produce truly impressive performance in artificial systems.

Backprop made its comeback in the 2010s, contributing to the rapid progress in unsupervised learning problems such as image and speech generation, language modelling, and other prediction tasks. Combining backprop with reinforcement learning also enabled significant advances in solving control problems such as mastering Atari games and beating top human professionals in games like Go and poker.

The successes of artificial neural networks over the past decade along with developments in neuroscience have reinvigorated interest in whether backpropagation can offer insights for understanding learning in the cortex. The new paper proposes that the brain has the capacity to implement the core principles underlying backprop, despite the apparent differences between brains and artificial neural nets.

The researchers introduce neural gradient representation by activity differences (NGRADs), which they define as learning mechanisms that use differences in activity states to drive synaptic changes.

To function in neural circuits, NGRADs need to be able to coordinate interactions between feedforward and feedback pathways, compute differences between patterns of neural activities, and use these differences to make appropriate synaptic updates. Although it is not yet clear how biological circuits could support these operations, the researchers say that recent empirical studies present an expanding set of potential solutions to these implementation requirements.
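As a rough illustration of the activity-difference idea, and not the paper's specific algorithm, the sketch below runs a feedforward "free" phase, lets feedback nudge the hidden layer toward a target activity, and updates each synapse from the locally available difference between the two activity states. The feedback pathway (transpose of the forward weights), the nudging factor, and the update rule are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-hidden-layer circuit (sizes and names are illustrative)
W1 = rng.normal(scale=0.1, size=(4, 3))
W2 = rng.normal(scale=0.1, size=(2, 4))

def ngrad_like_update(W1, W2, x, y_target, lr=0.1, nudge=0.5):
    """Sketch of an NGRAD-style update driven by activity differences.

    Phase 1: a feedforward pass sets the 'free' activities.
    Phase 2: feedback about the desired output nudges the hidden layer
    toward a 'target' activity. Each synapse then changes in proportion
    to the difference between the two activity states it can see locally.
    """
    # Phase 1: feedforward (free) activities
    h_free = np.tanh(W1 @ x)
    y_free = W2 @ h_free

    # Phase 2: feedback-induced target activity for the hidden layer.
    # Here the feedback path is crudely approximated by W2's transpose.
    h_target = h_free + nudge * (W2.T @ (y_target - y_free))

    # Local updates from activity differences, with no explicit gradient signal
    W2_new = W2 + lr * np.outer(y_target - y_free, h_free)
    W1_new = W1 + lr * np.outer(h_target - h_free, x)
    return W1_new, W2_new

x = rng.normal(size=3)
y = rng.normal(size=2)
W1, W2 = ngrad_like_update(W1, W2, x, y)
```

The resulting W1 update only approximates the backprop gradient (it ignores the tanh derivative, for instance), which reflects the framework's point: activity differences induced by feedback can stand in for explicitly propagated error signals.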

Empirical findings suggest new ideas for how backprop-like learning might be approximated by the brain.

The NGRAD framework demonstrates that it is possible to embrace the core principles of backpropagation while sidestepping many of its problematic implementation requirements. And although the researchers focused on the cortex because many of its architectural features resemble those of deep networks, they believe NGRADs may be relevant to any brain circuit that incorporates both feedforward and feedback connectivity.

Many pieces are still missing that would firmly connect backprop with learning in the brain. Nonetheless, the situation now is very much reversed from decades ago, when neuroscience was thought to have little to learn from backprop. Now, the researchers believe, learning by following the gradient of a performance measure can work very well in deep neural networks: “It therefore seems likely that a slow evolution of the thousands of genes that control the brain would favour getting as close as possible to computing the gradients that are needed for efficient learning of the trillions of synapses it contains.”

The paper Backpropagation and the Brain is available in Nature Reviews Neuroscience. The first author is Timothy P. Lillicrap, and the research team also includes Adam Santoro, Luke Marris, and Colin J. Akerman.

Journalist: Yuan Yuan | Editor: Michael Sarazen

