How neuroscience enables better Artificial Intelligence design

Justin Lee
Published in The Startup
10 min read · Jul 20, 2018

Artificial Intelligence (AI) is evolving at light-speed.

Artificial systems are capable of outperforming human experts on many fronts: crunching data, analysing legal documents, solving Rubik’s Cubes, and winning games both ancient and modern.

They can produce writing indistinguishable from their human counterparts, conduct research, pen pop songs, translate between multiple languages and even create and critique art.

And AI-driven tasks like object detection, speech recognition and machine translation are becoming more sophisticated every day.

These advances can be credited to many developments, from improved statistical approaches to increased computer processing power. But one element that is often overlooked is a combination of science and engineering: the use of both theoretical and experimental neuroscience.

Neuroscience has made several pivotal contributions to AI development. The two fields have a long and tangled history, owing to their many similarities.

“The fundamental questions cognitive neuroscientists and computer scientists seek to answer are similar,” says Aude Oliva of MIT. “They have a complex system made of components — for one, it’s called neurons, and for the other, it’s called units.”

To build super-intelligent machines, we must gain a deeper understanding of the human brain. Equally, exploring AI can help us gain a better understanding of what’s going on in our own heads.

Identifying a common language between the two fields will create a “virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances,” writes DeepMind co-founder Demis Hassabis.

Here’s why:

The early days

A brief history of neuroscience

What is neuroscience?

Neuroscience is a strand of biology based on the study of the anatomy and physiology of the human brain, including structures, neurons and molecules.

It studies how the brain works in terms of mechanics, functions and systems in order to create recognizable behaviors.

The success of today’s deep learning (a subset of AI) is mostly down to its architecture as opposed to its resemblance to the human brain; however, building a system that mirrored the workings of the human brain was the starting point of artificial neural networks (ANNs).

In fact, the major developments in ANNs leaned heavily on breakthroughs and achievements in psychology and neurophysiology.

What is an artificial neural network?

The human brain is one of life’s greatest mysteries. To this day, scientists haven’t reached a clear consensus on how it works, despite studying it for centuries.

The two main theories are as follows:

The first is the grandmother cell theory, which proposes that individual neurons are capable of retaining dense information and representing complex concepts.

By contrast, the second theory holds that individual neurons are fairly simple, and that the information required for processing complex concepts is distributed across multiple neurons.

ANNs loosely follow the second theory.

An ANN is a simplified, computational model of a biological brain, rather as a Tinkertoy construction might be a model of a real suspension bridge.

Basically, an ANN is a way of detecting patterns. For a very simple example, imagine a machine that can only do one thing, namely tell whether a single numeral is a 3 or not.

So, it only has two outputs: True and False.

The input to the machine is any numeral from 0 to 9, and if it is working correctly, it gives the output True when the input is a 3, and False when any other numeral is fed to it. A slightly more sophisticated machine would allow ten different outputs, one for each numeral.

But, you may ask, how are we going to get numerals into and out of the ANN? Actually, it’s pretty straightforward. Remember those clunky old displays on digital watches and calculators?

They used just seven segments, each of which could be either on or off. So as long as we agree on the order in which the segments are represented, each numeral can be captured as a series of seven 0s and 1s.
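To make that concrete, here is a minimal sketch in Python. The segment ordering (top, top-right, bottom-right, bottom, bottom-left, top-left, middle) is just an assumed convention, and the hard-coded lookup stands in for the one-job “3 detector” described above:

```python
# Each numeral becomes a vector of seven 0s and 1s, one per display segment.
# Segment order (an assumed convention): top, top-right, bottom-right,
# bottom, bottom-left, top-left, middle.
SEGMENTS = {
    0: [1, 1, 1, 1, 1, 1, 0],
    1: [0, 1, 1, 0, 0, 0, 0],
    2: [1, 1, 0, 1, 1, 0, 1],
    3: [1, 1, 1, 1, 0, 0, 1],
    4: [0, 1, 1, 0, 0, 1, 1],
    5: [1, 0, 1, 1, 0, 1, 1],
    6: [1, 0, 1, 1, 1, 1, 1],
    7: [1, 1, 1, 0, 0, 0, 0],
    8: [1, 1, 1, 1, 1, 1, 1],
    9: [1, 1, 1, 1, 0, 1, 1],
}

def is_three(pattern):
    """The single-purpose machine: True only for the pattern of a 3."""
    return pattern == SEGMENTS[3]

print(is_three(SEGMENTS[3]))  # True
print(is_three(SEGMENTS[8]))  # False
```

An ANN, of course, would learn this mapping from examples rather than having the rule hard-coded, which is what the rest of this section describes.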

ANNs can also be used to simulate brain behaviors, so that cognitive neuroscientists can test whether their theoretical models produce outputs that agree with the responses given by biological neural networks.

An ANN is composed of a number of interconnected units, or artificial neurons. Each artificial neuron is linked to several others, and can transmit signals along these connections.

A weight is associated with each connection, and affects the strength of the signal that is transmitted between neurons. These weights will increase or decrease during the course of learning, by analogy with the modifications in synaptic strength that underlie ‘plasticity’ in biological brains.

Units in a network are usually segregated into three classes: input units (these receive information to be processed), output units (where the results of the processing are found) and hidden units, which lie in between the first two classes.

These are arranged into layers, and in the simplest case there is only one layer of hidden units between the input and output layers. Signals from one layer will propagate as input to the next layer, or else exit the system via the output units.

An ANN mimics the biological brain in the sense that it acquires knowledge through learning, and stores this knowledge by adjusting the weights within the network.
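As a rough sketch of that structure, here is a tiny feed-forward network in Python with NumPy. The layer sizes, the random initial weights and the sigmoid activation are illustrative assumptions rather than a recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 7 input units (one per display segment),
# 5 hidden units, 10 output units (one per numeral).
W_hidden = rng.normal(size=(7, 5))   # weights on the input -> hidden connections
W_output = rng.normal(size=(5, 10))  # weights on the hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate a signal from the input layer through the hidden layer
    to the output layer; each weight scales the signal on its connection."""
    hidden = sigmoid(x @ W_hidden)
    output = sigmoid(hidden @ W_output)
    return output

x = np.array([1, 1, 1, 1, 0, 0, 1])  # the seven-segment pattern for '3'
print(forward(x))  # ten untrained scores; learning would adjust the weights
```

Training would consist of nudging W_hidden and W_output until the output unit for ‘3’ fires strongly on this input, which is the knowledge-stored-in-the-weights idea described above.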

However, some experts argue that the similarities amount more to loose inspiration, as biological neurons are far more complex than artificial ones.

The history of ANNs involves contributions from scientists in a variety of disciplines, including cognitive psychology, biological neuroscience and mathematics.

Within psychology, associationism was an important antecedent, and boasted a heritage stretching back as far as Aristotle.

A key starting point was a paper written in 1943 by neurophysiologist Warren McCulloch and mathematician Walter Pitts, describing how neurons in the brain might work by modeling a simple neural network with electrical circuits.

In 1949, the psychologist Donald Hebb drew on ideas from associationism in developing a theory of learning which showed how biological operations in the brain could explain higher level cognitive behaviors.

According to Hebb, if one neuron repeatedly stimulates a second one, then the connection between them will strengthen — this is the notion of synaptic strength that is represented by weights in an ANN.
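In modern notation, Hebb’s idea is often written as a weight update proportional to the product of the two neurons’ activity. Here is a toy sketch in Python; the learning rate and activity values are arbitrary choices for illustration:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule in its simplest form: the weight between two neurons
    grows in proportion to their joint activity (delta_w = lr * pre * post)."""
    return w + lr * np.outer(pre, post)

pre = np.array([1.0, 0.0, 1.0])  # activity of three presynaptic neurons
post = np.array([1.0, 1.0])      # activity of two postsynaptic neurons
w = np.zeros((3, 2))             # connection weights, initially zero

for _ in range(5):               # repeated co-activation strengthens the link
    w = hebbian_update(w, pre, post)

print(w)  # weights between co-active pairs have grown; the others stay at 0
```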

Interlude

ANNs subsequently suffered from unrealistic hype, and also from computational resources that were insufficient to make them practically useful in AI.

Meanwhile, machine learning work in AI provided alternative models of learning which appeared to be valuable, such as Hidden Markov Models.

In parallel, however, a series of advances made it possible to train ANNs with more than just one hidden layer. These multiple layers made deep learning increasingly feasible (in terms of training time) and accurate.

This was helped along by the training of deep neural networks on Nvidia GPUs, beginning around 2009.

In a somewhat independent strand of research, cognitive psychologists were exploring the idea that the association of ideas can be explained in terms of an associative structure.

Both cognitive science and neuroscience have evolved over the years, and recently they have started to overlap.

Cognitive science

Cognitive science is an offshoot of human psychology and is literally the study of cognition, or thought. It covers language, problem-solving, decision-making, and perception, with a particular focus on conscious, aware understanding.

Cognitive science started from those higher-level behavioral traits that were observable or testable and asked what must be going on inside the mind or brain to make them possible.

Within this lies associationism. Associationism is one of the oldest and most widely held theories of thought.

Associationism

“When, therefore, we accomplish an act of reminiscence, we pass through a certain series of precursive movements, until we arrive at a movement on which the one we are in quest of is habitually consequent. Hence, too, it is that we hunt through the mental train, excogitating from the present or some other, and from similar or contrary or co-adjacent. Through this process reminiscence takes place. For the movements are, in these cases, sometimes at the same time, sometimes parts of the same whole, so that the subsequent movement is already more than half accomplished.”

This passage from the philosopher Aristotle is seen as the starting point of associationism.

Associationism states that our mind is a set of conceptual elements that are organized as associations.

Aristotle examined the processes of memory and recall to develop the four laws of association:

Contiguity: Things or events with spatial or temporal proximity tend to be associated in the mind.

Frequency: The number of occurrences of two events is proportional to the strength of association between these two events.

Similarity: Thought of one event tends to trigger the thought of a similar event.

Contrast: Thought of one event tends to trigger the thought of an opposite event.

Aristotle considered these laws to amount to common sense: the combined feel, smell, and taste of a strawberry, for instance, add up to the idea of a strawberry.

These laws, which were proposed over 2000 years ago, still serve as the fundamentals of today’s machine learning methods.
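As a loose illustration of how the contiguity and frequency laws translate into something computable, here is a toy co-occurrence counter; the “experiences” are invented for the example:

```python
from collections import Counter
from itertools import combinations

# Toy experiences: each is a set of things perceived together (contiguity).
experiences = [
    {"sea", "sand"},
    {"sea", "sand", "sun"},
    {"sea", "sand"},
    {"sun", "grass"},
]

# Frequency law: association strength grows with the number of co-occurrences.
association = Counter()
for event in experiences:
    for a, b in combinations(sorted(event), 2):
        association[(a, b)] += 1

print(association.most_common(1))  # [(('sand', 'sea'), 3)] -- the strongest bond
```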

Associative structures

Associative learning is a constellation of related views, in which a person learns to associate one thing with another through previous experience of the two together. For instance, we associate the sea with sand.

So, an associative structure defines the bond that connects the two concepts.

There is a reliable, psychological relation that binds them together, and referencing one automatically activates the other (and vice versa) without the need to reference anything else.

Connectionism

Connectionism is a movement within cognitive science that explains intellectual abilities through the use of ANNs.

Connectionism is interesting because it provides an alternative to the widely-held theory that the mind is similar to a digital computer processing a symbolic language.

Where we are now

Work on using GPUs and deep learning for image recognition in 2012 ushered in the “deep learning revolution”, which has brought us innovations such as driverless cars, AI-powered assistants like Siri and Alexa, Google Translate and much more.

Today, ANNs are used in several applications, based on the fundamental (but sometimes incorrect) assumption that if it works in nature, it will work in computers.

However, the future of ANNs lies in the development of hardware specialized for its eventual use, as in the case of Deep Blue.

ANN research progresses slowly, and due to processor limitations, today’s neural networks can take weeks to train. This brings us to the recent influence of cognitive neuroscience on AI.

The limitations of Artificial Intelligence

Machine learning algorithms are built around narrow mathematical structures. Through millions of examples, ANNs learn to fine-tune the strength of their connections until they can complete the task with high accuracy.

Because each algorithm is tailored to the task at hand, relearning a new task often erases the established connections. This leads to catastrophic forgetting: when the AI learns the new task, it overwrites the previous one.
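Here is a toy illustration of that overwriting effect, using a single weight fitted by gradient descent. The two tasks, the learning rate and the step count are invented for the example; real catastrophic forgetting involves far larger networks:

```python
import numpy as np

def train(w, xs, ys, lr=0.1, steps=200):
    """Fit y = w * x by gradient descent on the squared error."""
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

xs = np.linspace(-1, 1, 50)

w = 0.0
w = train(w, xs, 2.0 * xs)   # task A: learn y = 2x
print("after task A:", w)    # close to 2.0

w = train(w, xs, -3.0 * xs)  # task B: learn y = -3x
print("after task B:", w)    # close to -3.0

# The same weight now encodes task B only; evaluating it on task A again
# gives a large error -- the new learning has overwritten the old.
print("task A error:", np.mean((w * xs - 2.0 * xs) ** 2))
```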

The dilemma of continuous learning is just one challenge. Others are even less defined but arguably more crucial for building flexible, inventive minds.

Embodied cognition is a big one: the ability to build knowledge from interacting with the world through sensory and motor experiences, and creating abstract thought from there.

It’s the sort of common sense that humans have, an intuition about the world that’s hard to describe but extremely useful for the daily problems we face.

Even harder to program are traits like imagination. That’s where AIs limited to one specific task really fail. Imagination and innovation rely on models we’ve already built of our world, and on extrapolating new scenarios from them.

How neuroscience can help

Firstly, neuroscience can help to validate existing AI techniques: if we discover an algorithm mimics an existing function in the brain, it doesn’t necessarily mean it’s the right approach for a computational system — but it does suggest we have discovered something important.

Neuroscience can also provide a rich and varied source of inspiration for new algorithms and architectures to employ when creating artificial brains.

That inspiration might also come from more mundane observations in cognitive psychology, such as the fact that humans forget things that don’t matter to them.

But while logic-based methods and theoretical mathematical models have dominated traditional approaches to AI, neuroscience can complement these approaches by identifying classes of biological computation that could be critical to cognitive functions.

Another key challenge in AI research is transfer learning. To be able to process unique situations, AI agents need to be able to reference existing knowledge to make informed decisions.

Cutting-edge research is being undertaken to understand how this might be possible in artificial systems. For instance, a new type of network architecture called a ‘progressive network’ can use knowledge gained from one video game to learn another. This suggests there is massive potential for AI research to learn from neuroscience.
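The sketch below loosely follows that idea: features from a column trained on the first game are frozen and fed, via lateral connections, into a new column for the second game. The layer sizes and the random stand-in weights are assumptions for illustration, not DeepMind’s exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

# Column for 'game A' (random stand-ins for weights that would have been
# learned on the first task; in a progressive network they stay frozen).
W_a = rng.normal(size=(16, 8))

# New column for 'game B': its own weights plus lateral connections that
# read the frozen task-A features.
W_b = rng.normal(size=(16, 8))
W_lateral = rng.normal(size=(8, 8))

def forward_b(x):
    """Task-B features combine fresh weights with frozen task-A features,
    so knowledge from the first game is reused rather than overwritten."""
    h_a = relu(x @ W_a)                    # frozen: never updated while learning B
    h_b = relu(x @ W_b + h_a @ W_lateral)  # new column reads the old features
    return h_b

x = rng.normal(size=16)      # a stand-in observation from game B
print(forward_b(x).shape)    # (8,) features feeding the task-B policy head
```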

On the flipside, neuroscience can also benefit from AI research, like in the case of reinforcement learning. Modern neuroscience, for all its powerful imaging tools and optogenetics, has only just begun unraveling how neural networks support higher intelligence.

Distilling intelligence into algorithms and comparing it to the human brain “may yield insights into some of the deepest and most enduring mysteries of the mind,” writes Hassabis.

This mutual investment is crucial for progress in both fields. Researchers can explore neuroscience in the quest to develop AI and push forward scientific discovery.

And examining AI in correlation with neuroscience could help us explore some of life’s greatest mysteries, such as creativity, imagination, dreams and consciousness.

The best is yet to come.

