Neuroscience and AI: A beautiful entanglement

Diego Aguado
5 min read · Apr 21, 2020

--

Inspiration, contributions, and interactions between the two fields.

Copyright © egosys.be

A first example of the neuroscience-AI relationship

Neuroscience discoveries have laid the foundation of what can probably be described as the algorithm of the decade: neural networks. As you might be aware, these computational models have defined state-of-the-art results for countless tasks across different industries and research spaces.

As their name indicates, they are inspired by neurons, the brain’s building blocks. Surprisingly, the mathematical model used to simulate these cells is quite simple. An artificial neuron, or point neuron, has several inputs, each weighted by a specific value. Their aggregation then determines whether the neuron should fire or not; the inputs in the model represent the synapses an actual brain cell receives. If their accumulation depolarizes the cell enough, raising its voltage past a threshold, it fires. In a biological neuron this event is called an action potential; in the artificial one it is the output of the associated nonlinearity. In other words, if the cell receives “enough” input, surpassing a certain threshold, it shows external activity.
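The point neuron above fits in a few lines of code. This is a minimal sketch with a hard threshold as the nonlinearity; the weights and threshold are illustrative, not taken from any trained model.

```python
# A minimal point neuron: weighted inputs and a step nonlinearity.
def point_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Three "synaptic" inputs: two excitatory, one inhibitory (negative weight).
print(point_neuron([1.0, 1.0, 1.0], [0.6, 0.5, -0.3], threshold=0.5))  # fires: 0.8 >= 0.5
print(point_neuron([1.0, 0.0, 1.0], [0.6, 0.5, -0.3], threshold=0.5))  # silent: 0.3 < 0.5
```

In practice the step function is usually replaced by a smooth nonlinearity such as a sigmoid or ReLU, which makes the model trainable with gradient descent.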

A simple approach

This model is very simplistic. The dynamics of synapses in the brain are, as far as we understand them, much more complicated. For example, synaptic inputs arriving far from the post-synaptic cell’s body are unlikely to depolarise it enough to make it fire, whereas inputs close to the cell body (the soma) are more likely to do so. This is because the electrical signal from a distal synapse must travel along the dendrite to reach the soma, losing potential on its way and making it harder for the cell to fire. Modelling this behaviour more accurately requires considering changes over both space and time: electricity travelling through a material over time. Moreover, neurons usually have several bifurcations along which the current can travel. This behaviour can be modelled through Cable Theory, developed by Wilfrid Rall. The artificial neuron is much more simplistic. To integrate the hypotheses mentioned above into more complex models, some approaches rely on differential equations to describe neural activity; yet the simplistic approach has delivered insanely powerful computational models.
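The attenuation of distal inputs can be illustrated with a toy discretized cable: a chain of compartments that leak charge and share it with their neighbours. This is a rough sketch in the spirit of Cable Theory, not a physiologically calibrated model; all parameters below are illustrative.

```python
# A toy passive cable: compartment 0 is the soma, higher indices are
# progressively more distal dendrite. Each step, voltage leaks toward rest
# and diffuses between neighbouring compartments.
def soma_voltage(n_comp, inject_at, steps=2000, dt=0.01,
                 leak=0.1, couple=1.0, current=1.0):
    """Voltage at the soma (compartment 0) after injecting a constant
    current at compartment `inject_at`."""
    v = [0.0] * n_comp
    for _ in range(steps):
        new_v = v[:]
        for i in range(n_comp):
            axial = 0.0  # current flowing in from neighbouring compartments
            if i > 0:
                axial += couple * (v[i - 1] - v[i])
            if i < n_comp - 1:
                axial += couple * (v[i + 1] - v[i])
            inj = current if i == inject_at else 0.0
            new_v[i] = v[i] + dt * (-leak * v[i] + axial + inj)
        v = new_v
    return v[0]

proximal = soma_voltage(10, inject_at=1)  # synapse near the soma
distal = soma_voltage(10, inject_at=8)    # synapse far down the dendrite
print(proximal > distal)  # True: the distal input is attenuated en route
```

The same current injected further from the soma produces a smaller depolarisation there, which is exactly the spatial effect the point neuron throws away.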

Depiction of Cable Model. Adapted from: Bower JM, Beeman D (2003)

If you are familiar with artificial neural networks (ANNs), the model of the artificial neuron above was just a recap of the basics. It shows how knowledge from neuroscience was a cornerstone for current state-of-the-art algorithms. To be fair, ANNs had been around for decades without being very useful, until back-propagation came along together with hardware powerful enough to train them. Nonetheless, there are other equally, or even more, interesting examples of how neuroscience has provided powerful resources to Artificial Intelligence.

Vision, sleeping and playing agents.

Neuroscience and AI have a stronger relationship than just a way of modelling neurons, and this relationship has been growing stronger over the last few years.

Another popular technique that gave AI the ability to take big steps in solving vision tasks was the formulation of convolutional neural networks (CNNs). The key factors that let CNNs set new benchmark results on vision problems were local, spatial processing of the image and the subsequent processing of the resulting features in a hierarchical fashion. The inspiration for this modelling came from the understanding of how layers of the visual cortex process visual information hierarchically.
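The operation at the heart of a CNN can be sketched directly: one small filter’s weights are shared across every spatial position of the image. This is a bare-bones, unoptimized version (real libraries use fast vectorized implementations); the edge-detector kernel is just an illustrative choice.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide `kernel` over `image`,
    reusing the same weights at every position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
edge_filter = np.array([[1.0, -1.0]])  # a tiny horizontal edge detector
feature_map = conv2d(image, edge_filter)
print(feature_map.shape)  # (8, 7): one response per spatial position
```

Stacking such layers, each operating on the feature maps of the previous one, is what produces the hierarchical processing described above.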

At DeepMind, neuroscientists and machine learning researchers have taken insights from biological computation and applied them to artificial systems (Hassabis, Demis, et al., 2017), an example of inspiring work originating at DeepMind that seeks the interaction of both fields. In this paper, the authors show how neuroscience-based ideas brought ground-breaking results.

Drawing inspiration from the role of sleep during learning, they incorporated the concept of ‘replay’ into reinforcement learning algorithms that had previously failed to learn to play video games without it. In particular, the inspiration comes from the understanding of the hippocampus’s role during learning.

This brain structure is involved in encoding the episodic and spatial information acquired during an activity for use in later tasks. The information is replayed during resting states or (non-REM) sleep between sessions of the task being learned. This replaying strategy allows the neocortex to consolidate the knowledge for when the task is performed again. In machine learning, attempts to mimic this system include the experience replay buffer for RL models and, more loosely, one-shot learning.
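An experience replay buffer, as popularized by deep RL agents such as DQN, can be sketched in a few lines. The (state, action, reward, next_state) transition format is the common convention; the capacity and batch size here are illustrative.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are evicted

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of experiences,
        # loosely analogous to replaying memories out of order during rest.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buffer = ReplayBuffer(capacity=1000)
for t in range(100):
    buffer.push(state=t, action=t % 4, reward=1.0, next_state=t + 1)
batch = buffer.sample(32)
print(len(batch))  # 32 transitions drawn at random for one training step
```

Instead of learning only from the most recent transition, the agent trains on randomly sampled past experiences, which both stabilizes learning and reuses data, much as replayed experience is thought to consolidate memory.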

In the neuroscience space, considerable research has been done on learning and memory consolidation by numerous authors; a particularly important work is O’Neill, Joseph, et al., 2010. There, the authors show how these hypotheses are tested with experiments in which rats traverse a known or unknown maze before and after sleep or rest.

In other words, the paper proposes that short-term memory is consolidated into long-term memory via ‘re-firing’ of the original network during rest or sleep stages. The experiences that took place while awake are replayed during these stages.

Why does it work?

One big component that helps explain why incorporating neuroscience knowledge into artificial intelligence works is inductive bias. Once some hypotheses have been proven right through experiments, the knowledge gained can be used as a starting point, basis, or framework for further learning. An example of this was discussed earlier with the convolutional layer for vision tasks.
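The convolutional inductive bias can be made concrete with a back-of-the-envelope parameter count: weight sharing means a convolutional layer’s size depends only on its kernels, not on the image. The layer sizes below are illustrative choices, not from any particular architecture.

```python
def dense_params(in_pixels, out_units):
    """Fully connected layer: one weight per (input, output) pair, plus biases."""
    return in_pixels * out_units + out_units

def conv_params(kernel_h, kernel_w, in_channels, out_channels):
    """Convolutional layer: weights depend only on the kernel, not image size."""
    return kernel_h * kernel_w * in_channels * out_channels + out_channels

# A 32x32 grayscale image mapped to 1024 hidden units vs. 32 filters of 3x3.
print(dense_params(32 * 32, 1024))  # 1049600 parameters
print(conv_params(3, 3, 1, 32))     # 320 parameters
```

By baking in the assumption that useful image features are local and translation-invariant, the convolutional layer needs orders of magnitude fewer parameters, which is precisely what a good inductive bias buys: less to learn from data.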

These were some examples of how neuroscience has provided Artificial Intelligence with powerful tools. The converse relationship will be explored in further readings but, as one might (or might not) expect, it has been proposed to be almost equally fruitful.


Diego Aguado

On Machine Learning, Artificial Intelligence and Neuroscience. Github, Twitter: @DiegoAgher.