Is I.T. All Between Your Ears?

On the Computational Metaphor for Understanding the Brain

NeuroTechX Content Lab
14 min read · Jul 25, 2021


Take a morning walk and look up at the birds. Watch as they leap from a perch, swoop through the wind and land seamlessly on the ground ahead of you, where they scan their surroundings until taking off again, cautious of the approach of a yawning mammal. How on earth does all that dynamic, reactive behaviour emerge from a brain that weighs as much as a graphite pencil? How do we even begin to answer this question? The members of one particular neuroscience tribe would suggest that we think of these phenomena in terms of information processing and computation.

Neuroscience and computation

Computational neuroscience seeks to understand nervous systems at many different scales, including the biophysical, the circuit and the systems levels. Like its sibling discipline, theoretical neuroscience, it aims to develop mathematical models, simulations, and theories of brain function. The term “computational” is a nod to the computer’s central role in the discipline. The computer is used as a modelling tool, helping neuroscientists abstract the brain’s complex systems in order to predict and understand how they function. It is also utilised as a brain-data analysis tool to extract useful insights and patterns from complex neural data. Importantly, though, the term also reflects a shared sense amongst the field’s members that the brain is itself a kind of naturally evolved computer.

Despite the obvious differences between the two systems, the metaphor of the brain as a computer remains. The computer is a digital machine forged from silicon by humans. It has a fixed serial processing architecture, consisting of billions of transistors. It can perform billions of operations per second and, as such, is able to quickly run specialised human-authored programs with precise behaviours and outputs. Think calculators: a computer program for crunching numbers in an instant, so your poor brain doesn’t have to.

The brain, on the other hand, is an analogue system, constructed organically by evolution. A primary building block of the brain is the neuron. Far slower than a transistor, the neuron can perform around one-thousand operations per second. However, the brain’s roughly eighty-billion neurons are organised in hugely complex, parallel, and sequential networks, in which each neuron is connected to ten-thousand others. This allows the brain to perform massively complex processes in parallel, while specialising in certain types of sequential functions as well. This network also self-organises, and can adapt to changes in its environment in order to express new and improved functionality. Think of a child learning to ride a bike: feet engaged with pedalling, hands in charge of steering and braking, eyes on the watch for passers-by, and the mouth ready to make sure Mum and Dad are seeing this quite incredible moment of sheer cycling genius. Unlike the specialist computer, your brain is the ultimate generalist.

Despite the stark differences between brains and computers, the intelligent behaviour of animals does seem to be underpinned by complex computations in the brain. These computations combine inputs from, and pre-existing models of, the animal’s dynamic environment, to produce outputs in service to the animal’s prime directives — namely survival, and the propagation of its genes to the next generation. Computational neuroscience thus pursues an information-processing framework for developing theories of how neural networks can produce complex effects like vision, language and learning.

What is the argument for this computational approach to neuroscience? What is the utility of the computational metaphor? And how might it miss the mark in advancing our understanding of the brain?

A short history of the computational metaphor

In 1665, the Danish anatomist Nicolas Steno suggested that we should consider the brain as a machine and take it apart to understand how it works. Since then, these machines have taken many familiar forms over the centuries. First hydraulic machinery and clockwork, then the telegraph, later the telephone, and now the computer. All these incarnations were at one point compared to the brain based on some characteristics they shared with it, such as the electrical nature of neural communication in the case of the telegraph. The computer is the latest example of the human tendency to try to explain the brain by analogy with the leading technology of the time.

Since the mid-twentieth century, the computer has been ever more central to human life and to our ideas about the brain. As neuroscience progressed through the mid-century, members of other disciplines, such as computer science and physics, became interested in the brain, and brought new, computer-inspired approaches to neuroscience. Some sought to frame and understand it as an information-processing machine. Others set out to simulate animal-like intelligence in machines, developing the field of artificial intelligence.

One of the most notable achievements of this new computational approach to neuroscience came from Hodgkin and Huxley. They developed their mathematical model of the action potential in 1952, for which they received the Nobel Prize. Their landmark work heralded the start of the modern era of biological research. Rather than giving simple qualitative descriptions of how biological systems might work, the focus of many biologists now turned to building quantitative models that could predict the behaviour of these systems robustly.
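
To give a flavour of what such a quantitative model looks like in practice, below is a minimal sketch of the Hodgkin–Huxley equations, integrated with a simple Euler scheme in Python. The parameters are the standard textbook values for the squid giant axon; the code is an illustration, not a research-grade simulator.

```python
import numpy as np

# Standard Hodgkin-Huxley parameters for the squid giant axon
C_m = 1.0                              # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials (mV)

# Voltage-dependent rate functions for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Euler-integrate the HH equations under a constant current (uA/cm^2)."""
    steps = int(T / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting initial conditions
    trace = np.empty(steps)
    for i in range(steps):
        # Ionic currents through the sodium, potassium and leak channels
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        # Update the membrane potential, then the gating variables
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace[i] = V
    return trace

spikes = simulate()  # a sustained input current yields a train of action potentials
```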

David Marr, considered one of the key founders of computational neuroscience, continued to investigate the brain in this spirit. In his 1982 book, Vision, he laid out a new framework for analysing the brain. He suggested three levels of analysis: the computational, the algorithmic, and the implementational. What, he asked, are the computations the brain must perform to solve its problems? Which algorithm does it use to realise these computations? And how does the biological brain, with its neurons and synapses, implement these algorithms?

As we shall see, this framework would bring an elegant structure to the way a generation of neuroscientists reasoned about the brain. Marr arguably forged a new neuroscientific path that would diverge from Steno’s reductionist approach. Today, computational neuroscientists are trying to capture the essential features of the brain at its different levels, focussed very much on the interactions between its different systems. Such is Marr’s legacy.

The argument for the computational approach

The human brain is the most complex system in the known universe. Eighty-billion neurons, each connected to ten-thousand others. One-thousand trillion synapses, more interconnections than there are stars and planets in the Milky Way. And it all fits into one-thousand two-hundred cubic centimetres of fatty acids. Incidentally, this enigma also happens to live inside our heads, as if the unenviable task of understanding how it works wasn’t hard enough.

It is no surprise then that we have barely scraped the surface of this incredible organ. The progress of neuroscience has been astounding, and yet we still haven’t the foggiest how complex phenomena like vision, language and learning emerge from neural networks. This is a humbling challenge, to say the least, and one that forces debate about how we should approach studying the brain.

In 1961, Barlow argued for a neuron-centric approach. He insisted that to understand how a bird could fly, it was necessary to first understand a bird’s musculature and feathers. This reductionist approach is bottom-up: breaking a very complex problem like understanding the brain into many “simpler” problems, such as understanding the neuron. Reductionists would argue that, assuming brains are just assemblies of cells, then to understand brain function, we simply need to understand every facet of cell function. The intuition here is that principles of brain function will emerge from an understanding of its parts.

Due to the breathtaking progress made in neuroscience and neurophysiology, we now understand much about neurons. We have learned an enormous amount about their electrophysiology, their connectivity, and the neurochemicals that mediate communication between them. But if Barlow was right, why then do we still not understand how the brain works? Perhaps Maxwell Cowan’s subtle critique of reductionism was right: even if we knew everything there is to know about the synapses, the transmitters, the channels, and the response patterns of each cell, we would still be left scratching our heads, wondering how an animal sees and smells and walks.

To illustrate this point, Churchland and Sejnowski call on Selverston’s research on the stomatogastric ganglion of the spiny lobster. This research is legendary in neurobiology, and our grasp of the electrophysiological and anatomical features of the neurons in the network is impressively detailed. And yet we still don’t understand how the lobster’s chewing action emerges from the network.

Perhaps the reductionist approach alone can never unravel the brain’s mysteries. Neurophysiology and experimental data are indispensable pieces of the puzzle. But the computational approach is born of a sense that a different kind of analysis is also needed: one that can bridge the gap between the brain’s micro-level constituents — its biochemical and physical architecture — and the macro-level emergent properties that make us who and what we are.

These foundations were laid by David Marr. His rebuttal of Barlow was clear: one cannot understand the brain by studying neurons alone, just as one cannot understand bird flight by studying only feathers. Rather, Marr laid out a new approach to studying the brain. An approach that recognises that complex systems like the brain have multiple levels of organisation. And an approach that seeks to uncover how the different levels interact to give rise to the emergent behaviour we are so puzzled by. Marr insisted that we must ask: What computations are happening inside our heads? Which algorithms execute these computations? How are these algorithms implemented physically in the brain?

Neuroscience often suffers from the criticism that it is “data rich, but theory poor”. It is precisely this poverty of theory that computational neuroscience, with its new kind of approach to studying the brain, seeks to address.

Imagine trying to explain how a Chopin nocturne could possibly emerge from a piano. You might take a bottom-up approach, starting with the piano’s mechanics. When the key is struck, it raises the “wippen” mechanism, which forces the jack against the hammer roller. The hammer roller then lifts the lever carrying the hammer. The key also raises the damper; and immediately after the hammer strikes the wire it falls back, allowing the wire to resonate and thus produce sound.

Now, can you hear Frédéric’s Op. 9, №2? Of course not. Because the music does not really reside in the piano’s keys and wires. Rather, it lies in the interactions between these parts; the forming of pitch, rhythm, harmony, melody; the tempo, the beat, and the intervals; the octaves, the scales, the modes, and the chords. This is how the beautiful music arises. And only with a well-developed theory of music can we describe how a beautiful and complex composition emerges from the piano. So it is with the brain and emergent animal behaviour.

The computational approach to neuroscience is an attempt to develop such general theories of brain function, to bridge the gap between ion channels and vision, synapses and language, neurons and locomotion.

The utility of the computational approach

At the beginning of the computer age, scientists were struck by the parallels between these new machines and the more familiar brain. The legacy of this intuition should not be downplayed. The computational metaphor has helped to inspire entire disciplines that together have furthered our understanding of the brain and contributed to the development of advanced neurotechnology.

Some of those scientists, together forming the field of artificial intelligence, set out to realise animal-like intelligence in computers. Machine learning algorithms, arguably the culmination of this eighty-year endeavour, are today used in many brain-computer interfaces. These systems help to alleviate the symptoms of brain-related disease, and partially restore sight to the blind, hearing to the deaf, and mobility to the immobilised.

Ctrl-labs, a New York-based startup, provides one such example. The company has developed a wristband that translates neuromuscular signals into machine-interpretable commands. As the motor cortex sends these signals to the muscle fibres in the arms and hands, the device captures them non-invasively. This electrical activity is fed into a deep learning network that decodes the user’s intention and reconstructs highly precise, dextrous hand movements, which can be expressed in real time through an external device such as a robotic arm, a computer keyboard, or a virtual hand.
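
Ctrl-labs’ actual models are proprietary, but the general shape of such a decoding pipeline is easy to sketch. The following is a hypothetical illustration in Python (using PyTorch): a small convolutional network that maps a window of multichannel EMG samples to continuous hand-pose outputs. All names, layer sizes, and sampling parameters here are assumptions made for illustration, not details of Ctrl-labs’ system.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an EMG-to-hand-pose decoder.
# Channel counts, window lengths and architecture are illustrative assumptions.
class EMGDecoder(nn.Module):
    def __init__(self, n_channels=16, n_joints=22):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.head = nn.Linear(128, n_joints)  # regress one angle per joint

    def forward(self, x):                     # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

decoder = EMGDecoder()
window = torch.randn(1, 16, 400)  # e.g. 200 ms of EMG at 2 kHz, 16 electrodes
pose = decoder(window)            # predicted joint angles for a virtual hand
```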

Another group of those early scientists were focussed on building computational models of the brain. These models have, in a modest number of cases, shed real light on how the brain produces its higher-level functions. One such example is the work of O’Doherty et al. on temporal difference models.

Many effective AI programs, such as AlphaGo, employ temporal difference learning in order to achieve a goal in a dynamic and complex environment. Temporal difference (TD) learning is an approach to learning how to predict a quantity that depends on future values of a given signal. The name TD derives from its use of changes, or differences, in predictions over successive time steps to drive the learning process.
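
To make the idea concrete, here is a minimal sketch of tabular TD(0) value prediction in Python, on a toy chain environment of our own invention. The environment and parameters are illustrative, not drawn from AlphaGo or from the study discussed below.

```python
# Minimal tabular TD(0) value prediction on a toy chain environment.
# States 0..4; each step moves one state right; reaching the end pays reward 1.
n_states, alpha, gamma = 5, 0.1, 0.9
V = [0.0] * (n_states + 1)          # value estimates, incl. terminal state

for episode in range(1000):
    s = 0
    while s < n_states:
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0
        # TD error: the difference between successive predictions
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta       # nudge the prediction toward the target
        s = s_next

print([round(v, 2) for v in V[:n_states]])  # ~[0.66, 0.73, 0.81, 0.9, 1.0]
```

The learning signal is the TD error delta: whenever a prediction disagrees with the reward plus the next prediction, the estimate is nudged to reduce that difference.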

O’Doherty and his colleagues found that the behaviour of dopamine-producing neurons during learning in humans closely mirrored the predictions of temporal difference models. This provides striking evidence that learning in brains involves some kind of temporal difference algorithm. Marr’s assumption twenty years earlier was that the brain and the computer might employ similar algorithms to produce the same function, regardless of their physical architectures. Some might argue that this study validates his intuitions.

Computational modelling also plays a key role in the development of BCIs and therapies for brain-related disease. Deep brain stimulation, for example, is effective at improving the motor symptoms of Parkinson’s disease, but its mechanism of action remains unclear. What we do know is that a loss of dopaminergic input to the basal ganglia thalamo-cortical network leads to impaired motor function. Implanting electrodes into specific targets within the basal ganglia, and continuously delivering electrical pulses, alleviates the motor symptoms.
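
To give a sense of what “continuously delivering electrical pulses” means in practice, here is a small Python sketch that synthesises an idealised stimulation waveform. The parameter values are merely illustrative of typical clinical ranges (on the order of 130 Hz, with pulse widths of tens of microseconds); they are assumptions for demonstration, not a therapeutic prescription.

```python
import numpy as np

def dbs_pulse_train(freq_hz=130.0, pulse_width_us=60.0, amplitude_ma=3.0,
                    duration_s=0.05, fs=1_000_000):
    """Synthesise an idealised monophasic DBS pulse train.

    Parameter values are illustrative of typical clinical ranges
    (~130 Hz, ~60 us pulse width), not a clinical setting.
    """
    t = np.arange(0, duration_s, 1.0 / fs)
    waveform = np.zeros_like(t)
    period = 1.0 / freq_hz
    width_s = pulse_width_us * 1e-6
    # Place one rectangular pulse at the start of each stimulation period
    phase = t % period
    waveform[phase < width_s] = amplitude_ma
    return t, waveform

t, w = dbs_pulse_train()
# Despite "continuous" stimulation, the electrode is active well under 1% of the time
print(f"duty cycle: {np.mean(w > 0):.2%} over {t[-1]:.3f} s")
```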

This is an incredible achievement, but isn’t it a little disconcerting that we don’t really understand the mechanisms underlying the therapy? Computational neuroscientists are building models to address this anxiety: models that are furthering our understanding of those underlying mechanisms, and helping to drive new therapeutic approaches. As the models advance and become more biologically realistic, there may be scope to further generalise them to optimise stimulation therapies for other applications, such as major depression, for which DBS has thus far yielded little clinical success. Many will hope that in shining a light on these elusive mechanisms of the brain under disease, computational modelling may uncover more general principles of brain function.

How the computational metaphor misses the mark

Some would argue that computational neuroscientists are on a wild goose chase. How can we be sure that the brain even embodies the kind of logical principles we imagine? The danger of a computational metaphor of the brain is that, like any metaphor, it neglects important details. Namely, in this case, the brain’s origin. The computer was designed and engineered relatively recently by the best nerds humanity could muster. The brain, on the other hand, has been constructed incrementally by five-hundred million years of Darwinian evolution. This almost certainly has huge implications for the structure and function of the brain.

Take the recurrent laryngeal nerve, which controls the voice box. In humans, it starts in the brain, then goes down into the chest, where it loops around one of the main arteries and returns upwards to the larynx. No sober engineer would design the system like this — the route the nerve takes is quite ridiculous. And it is even more ridiculous in the case of the giraffe. Like humans, the giraffe has inherited this architecture from fish, in which this route is the most direct one available. But the giraffe has also inconveniently evolved a very long neck. In our spotted friend, the nerve travels all the way down the neck and back up again to reach the larynx, and yet the distance between the nerve’s start and end points is a mere two inches. Needless to say, let us hope for the sake of the giraffe that it never evolves the ability to ponder this design flaw itself.

The point here is that, unlike von Neumann, Turing and the like, Nature cannot simply start from scratch and reconfigure the system with an improved design. At each stage of the brain’s adaptation, modifications are made within the confines of its existing architecture. As such, the computational procedures happening in your brain right now may be quite unlike anything a human would ever design or even conceive of. The brain was not designed, and hence, searching for deep general principles of brain function may just be a fool’s errand.

As we have seen, the prized twins of computational neuroscience — machine learning algorithms and neurocomputational modelling — help us to extract useful insights from heaps of complex neural data, derive control mechanisms from electrical activity, and optimise therapies for brain-related disease. But can they really improve our understanding of how complex effects like vision, language and learning emerge from neural networks?

The future

Perhaps it is fair to say that, thus far, the computational metaphor has made a grander contribution to the engineering, rather than the scientific, challenges of the brain. This is reflected in the extraordinary deep brain stimulation devices mentioned above. These systems build on our inexact science to massively improve the quality of life of those suffering with Parkinson’s. Perhaps in the near future they will offer the same transformative benefits to those suffering from disorders like depression and obsessive-compulsive disorder.

But the science underpinning them is just that: inexact. Incomplete. DBS devices can be likened to banging an old television set to fix a jumpy picture. We know it works, but we don’t understand the underlying mechanisms very well.

The science of unravelling the mysteries of this world is a slog, and there is no reason to think the science of brains should be any different. Indeed, we ought to keep in mind the scale of the challenge before us in the face of the brain’s unrivalled complexity. Progress is likely to be slow and gradual, and if computational neuroscientists are lucky enough to stumble upon the unified theories they seek, this is likely a long way off.

Thus far the computational metaphor has delivered impressive results. It has proven a useful framework with which to guide humanity through the great challenges of neuroscience. But is the computational approach simply an artefact of its time? Will the computational metaphor be replaced by the computer’s successor — the next pinnacle of human technological innovation? Or will it persist and eventually deliver answers about how it is that we are able to walk and see and speak and learn?

Perhaps the destiny of computational neuroscience is the creation of a direct neural interface — the seamless integration of mind and machine. BCIs are loaded with potential to both expand humanity’s intellectual capacity and enrich the human experience of life by orders of magnitude. Just look at how far this technology has already come, with DBS, bionic eyes, cochlear implants and assistive robotic limbs.

Is your brain a computer? Maybe this is the wrong question. Perhaps we should instead be asking, as Anikeeva says: “When can my brain collaborate with a computer?” The final draft of a general BCI may be a system that can both talk in real time to billions of neurons in their natural languages, and communicate back to artificial hardware. Such a system might allow us to piggyback on plasticity and add specialist modules — for example, a pianist module — to our generalist brain. If computational neuroscience takes us in this general direction — even if it wildly misses the mark — it will have made a historically profound contribution to mankind.

As for unravelling the mysteries of the brain, only time, and experimental evidence, will tell if Marr’s intuitions about bird flight were correct. Incidentally, we often marvel enviously at birds as they soar above us. Perhaps we should be content with our own unique repertoire; namely the ability to ponder just what the hell is going on inside our incredible brains.

Written by Callum Messiter, edited by Simon Geukes and Suhela Kapoor, with artwork by Firas Safieddine.

Callum Messiter is a software engineer who finds brains far more fascinating than computers. He is currently working with Gaia Family to make IVF accessible, affordable and individual.

Simon Geukes recently graduated from the University of Amsterdam with an MSc in Cognitive Neurobiology and Clinical Neurophysiology.

Suhela Kapoor is a neurobiologist managing her digital health startup in India.

Firas Safieddine is a Barcelona-based designer, architect, artist, researcher, and neurotech enthusiast.

NeuroTechX is a non-profit whose mission is to build a strong global neurotechnology community by providing key resources and learning opportunities.