Are Artificial Neural Networks like the Human Brain? And does it matter?

Lili Tcheang
Digital Catapult
Nov 7, 2018 · 8 min read


I work for a research technology organisation which prides itself on being at the cutting edge of the latest innovations. Clients often come to us with exciting and ambitious ideas about what they would like to achieve with AI. We hear a lot about AI becoming more capable and taking on human-like abilities, to the point where people imagine AI systems are far more generalisable than they really are. But are AI neural networks really anything like our brains?

This widespread misconception that AI is a universal problem solver, often smarter than ourselves and able to produce faster and better answers, has been partly fuelled by a less tech-savvy media industry reporting on some genuinely impressive advances while omitting the caveat that they apply to very narrow applications and will not generalise beyond them. One such example is the success of the deep learning system AlphaGo at the Chinese strategy game Go.

In this article, I am going to explore some of the similarities and differences between neural networks and the brain, and the origins of the misconception that neural networks operate like human brains.

The misconceived beliefs around AI and the brain

Artificial neural networks, one of the better-known machine learning architectures, are often reported to be somewhat analogous to the brain, and it’s an easy step from there to imagine that they must also process information the way the brain does. However, this is an over-simplification. Whilst ideas from neuroscience have inspired the design of artificial neural networks, these models do not capture the complexity and elegance of the human brain.

The similarities and differences between an artificial neural network and its inspiration, the biological neuronal circuitry found in the brain, can be explored by first examining the organisation of inputs and outputs at the single-neuron level, then looking at differences in connectivity, and finally illustrating the existence of different cell types. To keep the terms distinct, I will refer to the machine learning assembly as an artificial neural network (ANN), and the biological assembly as a biological circuit (BC).

An initial point I would like to highlight is that AI is often assumed to mean one thing, but the term ultimately covers a spectrum of different concepts and techniques, which are often mutually exclusive depending on the domain one happens to be working in (see Dan Staff’s post here). These range from hard-coded, rules-based functions to the more dynamic neural-network-based approaches we now see implemented in deep learning.

The Single Neuron

Let’s start by looking at the two types of neuron that form the single-unit component of the BC and the ANN respectively. A biological neuron is a hugely complex component, with internal machinery, chemical and physical processes, and a developmental and evolutionary history. Figure 1 shows a schematic of a biological neuron alongside an artificial neuron. ANNs are built from artificial neurons thought to be loosely analogous to the biological neuron: the node stands in for the cell body, its inputs for the dendrites, and its outputs for the axon. This input-output arrangement is remarkably similar to the cortical circuit, where synapses on dendrites receive inputs and axonal synapses make connections onto the dendrites of neurons further along the chain. However, whilst dendrites and axons are anatomically and physiologically distinct in the BC, in ANNs they are all just connections between layers of neurons, defined only by whether they feed into a neuron’s cell body (a “dendrite”) or carry its output away (an “axon”).

Figure 1: Visual comparison of a biological neuron to an artificial neuron. Source: https://www.researchgate.net/figure/7947079_Analogy-between-artificial-neuron-and-biological-neuron
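To make this input-output analogy concrete, here is a minimal sketch of a single artificial neuron in Python. The sigmoid activation and all of the numbers are illustrative choices of mine, not anything specific to a particular framework:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs (the
    'dendrites') is passed through a non-linearity to produce a single
    output (the 'axon')."""
    weighted_sum = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))  # sigmoid activation

# Three inputs arriving from upstream neurons (values are illustrative).
x = np.array([0.5, -1.2, 0.8])
w = np.array([0.4, 0.3, -0.9])
print(artificial_neuron(x, w, bias=0.1))  # a graded output between 0 and 1
```

Note that the output is graded: it can take any value between 0 and 1, a point we will return to below.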

Now let’s take a further look at the differences. In the ANN, inputs to a node are multiplied by weights that are learned through a process known as backpropagation, and the output is often a simple function of the weighted sum of those inputs. This is a gross simplification of the electro-chemical processes driving activity in a biological neuron. (There is some evidence suggesting that a backpropagation-like process facilitates synaptic strengthening in biological neurons, but it remains poorly understood and needs a larger body of evidence to be accepted more widely.) For a biological neuron to produce an output, the voltage within the cell body must reach a threshold that initiates an action potential, and the amplitude of that action potential is constant along the axon. In other words, unlike an artificial neuron, whose output level can vary, biological neurons always fire at a constant voltage. Despite this constant output, a biological neuron can excite a vast number of neurons both nearby and in distant regions of the brain thanks to the reach of its connections (the ~86 billion neurons in the human brain each make approximately 1,000 synaptic connections along their axons). Similarly, at the dendrites (the inputs of biological neurons), non-linear computations are performed that are neither fully understood nor reproduced by ANNs. Many neurons in the brain have a dendritic tree of remarkable morphological complexity and a finely balanced interaction of different ion channels that, depending on the exact timing and location of inputs, can promote some inputs while suppressing others.
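To illustrate the contrast, here is a toy leaky integrate-and-fire neuron, a standard textbook simplification from computational neuroscience. The threshold, leak and input values are arbitrary, and real neurons are far richer than this:

```python
def integrate_and_fire(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Toy leaky integrate-and-fire neuron: membrane voltage accumulates
    input and leaks away; crossing the threshold emits a fixed-size spike
    (the all-or-nothing action potential) and resets the voltage."""
    voltage, spikes = 0.0, []
    for current in input_current:
        voltage += dt * (current - leak * voltage)
        if voltage >= threshold:
            spikes.append(1)   # every spike has the same amplitude
            voltage = 0.0      # reset after firing
        else:
            spikes.append(0)
    return spikes

# A stronger input changes *when* the neuron spikes, never the spike size.
weak = integrate_and_fire([0.2] * 20)
strong = integrate_and_fire([0.6] * 20)
print(sum(weak), sum(strong))  # the firing rate encodes input strength
```

Unlike the graded artificial neuron sketched earlier, every spike here has the same size; a stronger input only changes how often the neuron fires.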

Differences in Connectivity

This is just a starting point; there are numerous other nuances in the architecture of a biological neuron that are not seen in artificial neurons. One such feature is myelination, where a myelin sheath coats the axon and, through its electrical properties, allows action potentials (the output of a cell body) to travel further and faster down a long axon, preventing the signal from dissipating over long distances. Myelin is particularly useful for coordinating global brain activity whilst integrating signals in local circuits. Also not captured in an ANN is the temporal cadence of cells firing in the brain during these activations. Furthermore, there are examples of myelination being associated with specific forms of learning, where blocking this process prevents the full acquisition of memories or skills.

Figure 2: Saltatory conduction occurs only in biological neurons. Source: https://sites.google.com/site/etec512cognitiveneuroscience/home/neurons?tmpl=%2Fsystem%2Fapp%2Ftemplates%2Fprint%2F&showPrintDialog=1
Figure 3: A typical ANN architecture where connections exist only between adjacent layers, unlike in the brain. Source: https://stats.stackexchange.com/questions/182734/what-is-the-difference-between-a-neural-network-and-a-deep-neural-network-and-w
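As a rough back-of-the-envelope illustration of why myelin matters, textbook ballpark figures put conduction at around 1 m/s in unmyelinated axons and up to around 100 m/s in heavily myelinated ones. The one-metre axon length below is purely illustrative:

```python
# Ballpark conduction velocities (textbook orders of magnitude).
UNMYELINATED_M_PER_S = 1.0
MYELINATED_M_PER_S = 100.0
axon_length_m = 1.0  # illustrative, roughly the scale of a spinal motor axon

# Time for an action potential to traverse the axon, in milliseconds.
print(f"unmyelinated: {axon_length_m / UNMYELINATED_M_PER_S * 1000:.0f} ms")
print(f"myelinated:   {axon_length_m / MYELINATED_M_PER_S * 1000:.0f} ms")
```

A two-orders-of-magnitude difference in delay is exactly the kind of timing detail, noted above, that a standard ANN simply does not model.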

A Plethora of Cell Types

Last of all, there is a plethora of different types of neuronal cell with differing anatomies, from Golgi cells and pyramidal cells to Purkinje cells and many others, populating physically distinct brain regions such as the midbrain, the cerebellum and the neocortex, to name but a few. Figure 4 illustrates the richness and diversity of neuron types in BCs compared with the relative uniformity of ANNs. Returning to myelin: it is produced by oligodendrocytes, one of a class of cells known as glia (from the Greek for glue). Oligodendrocytes, along with astrocytes and microglia, have additional distinct functions, including immune support and transmitter-recycling machinery. Interestingly, glial cells also have ion channels allowing depolarisations and different forms of electrochemical transmission.

Figure 4: A subset of the types of neurons that exist in the brain. Source: http://www.mind.ilstu.edu/curriculum/neurons_intro/neurons_intro.php

None of this nuance, from the micro-level differences between cell types to the macro-level differences between brain regions and their connections, is captured accurately within an ANN. Whilst rudimentary variations in architecture have evolved from the need to optimise for certain problems, such as LSTMs for capturing sequential information and CNNs for processing images, there is no compelling evidence that brain structures closely resemble these artificial structures. One small caveat is that the lower layers of a CNN do appear to crudely approximate the lower layers of the visual cortex devoted to visual processing, but this comparison quickly breaks down past the first one or two layers [1]. Such similarities entice many to extrapolate that ANNs are close enough to BCs to be an accurate representation of the human brain. In reality, brain structures are far more complex and interconnected, in ways that are not yet completely understood. And this last point is fundamental to why it is so hard to emulate a human brain: the brain itself, and human intelligence, are simply not as well understood as is often portrayed.
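For concreteness, here is a minimal PyTorch sketch of the two specialised architectures mentioned above. The layer sizes and input shapes are arbitrary illustrations, not a reference implementation:

```python
import torch
import torch.nn as nn

# Two specialised building blocks: convolution for images, LSTM for sequences.
cnn_layer = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
lstm_layer = nn.LSTM(input_size=10, hidden_size=32, batch_first=True)

image = torch.randn(1, 3, 64, 64)   # one 64x64 RGB image
sequence = torch.randn(1, 20, 10)   # one sequence of 20 timesteps

feature_maps = cnn_layer(image)          # local spatial filters, shared weights
outputs, (h, c) = lstm_layer(sequence)   # gated memory carried across time
print(feature_maps.shape, outputs.shape)
```

Each is good at its own narrow job, spatial filtering or sequence memory, but neither is a model of how a cortical area actually computes.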

Summary

There is a certain romanticism in the quest to build a human brain, with the pursuit of a deeper understanding of ourselves and of the meaning of consciousness at the core of this drive. Historically, computational neuroscientists have tried to mimic aspects of the brain in order to achieve just this. However, the common aphorism “All models are wrong, but some are useful” is most apposite here, and is well understood within the research community. Whilst growing computational power enables ANN structures to resemble BCs more closely, there is a general consensus that there is a long way to go before reaching a likeness that could pass as a fully formed brain. Instead, current progress with neural network models has allowed others to build individual models with more practical, specialised applications in image and speech recognition, self-driving cars and autonomous robots, to name but a few. Many in the field are optimistic that AI will follow a trajectory similar to that of home computers, which arrived almost 50 years ago and are now ubiquitous in our daily lives.

The biggest concern within this community is not the threat of AI taking over, but that public expectation will outstrip current capability and that AI will not be regarded as a breakthrough technology, because it requires its surroundings to adapt before its full potential can be realised. A similar story played out in the past when electric dynamos replaced steam engines to drive machines within factories. Initially, dynamos were simply installed in place of the steam engines they replaced, with almost no gain in productivity. It was not until some 30 years later, when factories were redesigned around the new technology, that significant gains were felt across the industry [2]. The hope is that the public hasn’t been too misled by the promise, and will have the patience to invest in the requirements and changes needed to make full use of this exciting technology. The general consensus in the field is that we are at the cusp of a new revolution in computing machines; let us hope that we have the fortitude to push through with it.

[1] https://arxiv.org/pdf/1806.02888.pdf

[2] https://www.bbc.co.uk/programmes/p057xsl0

Thanks to Peter Bloomfield, Daniel Staff, Daniel Justus, Phil Young, Anat Elhalal, Libby Kinsey and Marko Balabanovic.
