How To Find Out If Your Brain Is a Computer

And end dumb arguments about metaphors

Mark Humphries
The Spike
16 min read · Sep 20, 2018


A room full of computers, and their brains.

Stop me if you’ve heard this one before: Your brain is a computer.

This simple metaphor causes raging arguments, intellectual duels, and the throwing of insults. On the one side, those aghast at the idea that the richness, imagination, and ineffability of the human mind could possibly arise from something as prosaic as a machine pumping out a string of 0s and 1s. On the other, those aghast at the aghast, at how anyone could deny that the human brain is the most wonderful information processor in the universe, and that computers can do anything they put their mind to.

Arguments over the “brain as computer” metaphor are idiotic. Because it’s not a metaphor: it’s a theory.

The wrong metaphor

If you want to have an argument about the computer metaphor, you first have to define “computer”. Let’s get the easy bit out of the way: what a computer is not.

A computer is not the box on your desk, the tablet on your lap, or the phone in your hand right now. Assemblages of electronics with microchips, RAM, and caches: these are a particular hardware realisation of a computer. If you think the computer metaphor is about a hunk of plastic and sand, and go “aha! The brain is not like a hunk of plastic and sand”, then, well done, you’re right. But not in a useful way. It is self-evident, for example, that the brain does not have a separate chunk of grey matter that acts as a hard drive for long-term storage of memories when someone turns off your power (because if someone turns off your power, you’re dead). Some rejectors of the computer metaphor have made this error, confusing the box on their desk with a computer.

Other staunch rejectors argue the computer metaphor is just another in a long line of metaphors describing the brain using the brightest new technology of the time. They reel off a litany of metaphors used to describe the brain over the past few centuries: hydraulics, telegraphs, telephone switchboards — and now computers.

This is dumb. The computer-brain comparison is not a metaphor for technology. The comparison is about the formal definition of a computer. Alan Turing formally defined a computer in 1936. John von Neumann laid out the architecture for modern electronic computer hardware in 1945. General purpose hardware computers did not exist until the 1950s.

Historians can (and do) argue all day about what was the first official electronic computer. But they all agree that, whatever it was, it was not built before 1936. Indeed Turing’s work was about humans: about capturing our capacity to solve problems, to compute, using a logical sequence of steps (like, say, adding up two numbers) in some formal maths. Computers were defined before the technology we now call a “computer” even existed.

(Did someone say “Babbage”? The Difference Engine was a remarkable calculator, but a calculator nonetheless; the Analytical Engine was never built; both were mechanical. Still the mere existence of the plans for the Analytical Engine helped define the idea of a machine for computing.)

What’s particularly galling is that the metaphor is the wrong way round: the electronic computer is a brain. Von Neumann based some of his ideas for the architecture of the electronic computer on a model of the brain advanced by McCulloch and Pitts. They pointed out that neurons either send their electrical “spike”, or they don’t. So you can think of a neuron as encoding a 1 (meaning “true”) by sending a spike, or a 0 (meaning “false”) when it doesn’t. From this assumption it follows that groups of neurons sending 0s and 1s to each other can make all the formal statements of logic. Which is rather useful for computing stuff. Von Neumann knew McCulloch well, and read their paper; he then used the ideas of encoding 0s and 1s in elements of a circuit, and of how to combine these elements to do logic, in his architecture for a computer. Computer hardware has some foundations in brain science, not the other way round.
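
To see just how little machinery McCulloch and Pitts needed, here is a minimal sketch of their idea in Python (mine, not theirs: they worked in logical notation, not code). A unit outputs a 1 if the weighted sum of its binary inputs reaches a threshold, and a 0 otherwise; pick the right weights and thresholds and you get the logic gates from which all formal statements of logic can be built:

def mp_neuron(inputs, weights, threshold):
    # Fire (output 1) if the weighted sum of binary inputs reaches threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)  # both inputs needed
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)  # either input will do
def NOT(a):    return mp_neuron([a], [-1], threshold=0)       # inhibition: a spike vetoes

# AND, OR, and NOT together are enough to build any expression of formal logic.
assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0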

(And, just to be clear, this also does not mean that the brain is a computer. Von Neumann was using a simple analogy for how neurons could be interpreted. But this interpretation completely ignores many basic things about neurons. Take their continual output, for example: McCulloch and Pitts’ model assumed a neuron was either a 1 or a 0, and nothing else, during a particular computation; but real neurons send spikes all the time. Indeed von Neumann noted some of the ways his computer was not like a brain in his original EDVAC report.)

The right computer

So, when we say “the brain is a computer”, what do we mean? What is a computer in this metaphor? A machine for executing an algorithm: a (universal) Turing Machine.

(Pedants’ corner: this is not the only definition of a computer. But it is the definition that gives an intuitive understanding of how to define such a thing, and it obviously fits with the pile of plastic and sand we call a “computer”.)

The key ingredients are simple. In this definition, we require only the following. Input, in the form of symbols. Somewhere to write more symbols (Turing imagined an infinite length of paper tape. You could use strawberry liquorice laces at a pinch). A set of instructions — an algorithm — for turning the input symbols into the desired output symbols, by writing as many intermediate symbols as you need.

The algorithm is at the heart of it: a set of steps, each defining a single action. Steps that are discrete: do A, then B, then C. You can have as many steps as you like. You can have all the loops back and forth between the steps that you like, as in the toddler algorithm:

(1) run into wall;

(2) rub head;

(3) repeat from (1);

You can have as many branching options of steps as you like:

(1) if HUNGRY

(1b) buy burrito;

(2) else if THIRSTY

(2b) buy smoothie

(3) else

(3b) “please leave the shop, you’re disturbing the customers”

But the steps are always discrete.
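
And that is really all a computer formally is. The whole contraption fits in a few lines. Here is a minimal sketch of a Turing machine in Python, with a toy example algorithm of my choosing (adding 1 to a binary number written on the tape; Turing never wrote this one down). Everything happens in discrete steps: read a symbol, write a symbol, move, change state:

# A minimal Turing machine: rules map (state, symbol) -> (write, move, next state).
# This example machine increments a binary number; the head starts on its last bit.
rules = {
    ("inc", "1"): ("0", -1, "inc"),   # carry: flip the 1 to 0, move left
    ("inc", "0"): ("1", 0, "halt"),   # no carry: flip the 0 to 1, done
    ("inc", "_"): ("1", 0, "halt"),   # ran off the left edge: write a new 1
}

def run(tape, head, state="inc"):
    tape = dict(enumerate(tape))      # the (finite, used bit of the) infinite tape
    while state != "halt":
        write, move, state = rules[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("1011", head=3))            # 1011 + 1 = 1100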

(For the completists: the Church-Turing thesis holds that anything we would intuitively call an algorithm can be carried out by such a machine, given its infinite capacity to write symbols; and any algorithm with a stopping condition will complete in finite time. Assuming you wrote the algorithm correctly in the first place. Which you didn’t.)

The “brain as computer” hypothesis

Here’s the crux. The brain does not have discrete steps.

The brain is a system of continuous dynamics. Inside a neuron is a constant diffusion of calcium, potassium, sodium, and chloride ions (and innumerable proteins causing changes in its receptors and structure). The resultant small flickers of the voltage across a neuron’s membrane are continual, ongoing. When this flickering voltage reaches a tipping point a neuron will fire a spike. The spike will cause flickers of voltage in the target neurons it reaches. Circuits of neurons endlessly, continuously send spikes to each other. All these things are continuous, not discrete.

Most importantly, the sending of spikes is not discrete. We may argue that a single spike is a discrete thing, a “1” as envisaged by McCulloch and Pitts. But they are not sent in discrete steps; they are generated in continuous time. So neurons communicate in continuous time, not in discrete steps. And how neurons communicate with each other is precisely what we’re arguing about when we talk about brains “computing” or not. For neurons sending spikes to each other is how you walk, see, and smell; think, plan, and do.
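
To make that concrete, here is the textbook leaky integrate-and-fire model, the simplest cartoon of a spiking neuron (the parameter values below are illustrative, not fitted to any real cell). The voltage drifts continuously under its input; a spike happens at whatever moment the voltage happens to cross threshold, not on any clock tick. The only discrete steps are the small dt we need to simulate continuous dynamics on, yes, a computer:

import numpy as np

# Leaky integrate-and-fire neuron: dV/dt = (V_rest - V + R*I) / tau.
dt, tau, R = 0.1, 10.0, 8.0                      # ms, ms, resistance (arbitrary units)
v_rest, v_thresh, v_reset = -70.0, -54.0, -80.0  # mV
rng = np.random.default_rng(1)
v, spike_times = v_rest, []
for step in range(5000):                         # 500 ms of simulated time
    current = 2.0 + rng.normal(0.0, 1.5)         # noisy, continually fluctuating input
    v += dt * (v_rest - v + R * current) / tau   # the voltage flickers continuously
    if v >= v_thresh:                            # tipping point reached...
        spike_times.append(step * dt)            # ...a spike, at whatever time that is
        v = v_reset
print(len(spike_times), "spikes in 500 ms")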

But we’ve just defined a computer as something that executes an algorithm with discrete steps. Ergo, the brain is not a computer, right?

Not so fast. Yes, there are trivial ways in which the brain is not a universal Turing machine. The brain does not have unbounded memory. Nor does it have unbounded time in which to carry out any computation. But then, neither does a hardware electronic computer. It may feel like it takes an infinite amount of time to run the algorithm for Windows Update, but it is not, technically speaking, infinite. It will end.

But saying “the brain is a computer” is an hypothesis. It is asking the question: can the sending of spikes between neurons (and all the things affecting the sending of spikes) approximate an algorithm? Or are the dynamics themselves the goal of the brain, and so can’t be described as an algorithm?

Which answer we give has ramifications for how we could understand the brain. If the brain is approximating algorithms, operating as if it were a computer, then we can use all the machinery of our knowledge of algorithms to study it. But if it is not approximating algorithms, then we need a completely different formal approach to understanding the brain, one not based on computing algorithms.

So which is it?

Yes, the brain runs algorithms

There are two ways to test if a set of neurons in the brain are approximating an algorithm. Either we can propose an algorithm that fits with the way an animal behaves, and then see if the activity of neurons approximates that algorithm. Or, we can measure the activity of neurons during a behaviour, and then see what algorithm this activity approximates. We have examples of both. Let’s take an example where we’ve derived an algorithm from behaviour first.

We know a lot about how animals — including us — behave when deciding between two equally dull options. Indeed there is a panoply of experimental tasks where we ask the subject to make a choice between two options based on the evidence available to them. For example, we often show primates (including us) a set of randomly moving dots, within which are embedded a few dots all moving in the same direction — either left, or right. And we ask the primate to decide which direction (left or right) those few coherent dots are moving in. So the primate stares at the screen for a while, watches the dots moving, and eventually makes a decision.

Taking many of these decisions creates specific patterns of reaction times and errors. For example, the number of errors made has a lawful dependence on the proportion of dots that were moving in the same direction — the fewer the dots, the more errors are made in judging their direction. These patterns of times and errors can be reproduced by a simple model in which the evidence that the dots are moving in each direction (left or right) is added up by two counters that compete: evidence for one direction is also used as evidence against the other direction. And these two competing counters turn out to be formally the same as a decision theory algorithm (the sequential probability ratio test).
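
Here is a sketch of those two competing counters, with a random-dot trial reduced to coin flips (the evidence strength and the decision bound below are made-up numbers, not fitted to any experiment). Each noisy sample counts for one direction and against the other; the first counter to hit the bound wins:

import random

def decide(p_left=0.55, bound=10):
    # Two competing counters: evidence for one direction is also
    # evidence against the other. First counter to the bound wins.
    left = right = step = 0
    while left < bound and right < bound:
        step += 1
        if random.random() < p_left:   # this sample looks like leftward motion
            left += 1; right -= 1
        else:
            right += 1; left -= 1
    return ("left" if left >= bound else "right"), step

# Weaker evidence (p_left nearer 0.5) gives slower, more error-prone
# decisions: the same lawful pattern seen in the behavioural data.
random.seed(0)
trials = [decide(p_left=0.55) for _ in range(1000)]
errors = sum(choice != "left" for choice, _ in trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(errors, "errors in 1000 trials; mean decision time:", round(mean_rt), "samples")

(The walk these counters take is tracking the balance of evidence for the two options, which is why it comes out formally the same as the sequential probability ratio test.)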

So we’ve arrived at an algorithm derived from behaviour: what about the brain’s activity during this behaviour? When we look in the brains of monkeys making these decisions, we see neural activity that increases and decreases over time, some activity representing the correct option, which tends to increase; and some activity representing the incorrect option, which tends to decrease. Just like two competing counters of evidence for two options. And Mike Shadlen’s lab have even shown that each jump in activity reflects the amount of evidence available, exactly like the sequential probability ratio test. Here then we see neural activity that closely approximates an algorithm we arrived at from observing behaviour.

(We can even extend the decision algorithm to more than two options, in which case we have the catchily named “multiple sequential probability ratio test”. And this more complex algorithm seems to fit with neural activity in the basal ganglia [read here for way more than you wanted to know about this]).

We also have examples of where we started with the neural activity, and worked out what algorithm they seem to represent. These include a (literally) prize-winning triumph of computational neuroscience: the reward prediction error theory of dopamine. The data came first. In a series of papers, Wolfram Schultz had shown how dopamine neurons fired in response to rewards. A few features were particularly intriguing. Dopamine neurons burst excitedly when unexpected rewards were received. They then “learnt” to instead burst in response to something (like a light flash) that predicted a reward was imminent, and no longer burst in response to the reward itself. And once this link between light flash and reward was learnt, the dopamine neurons stopped firing when the reward was predicted but not received.

Based on these data, two teams (one with Read Montague and Peter Dayan, the other Jim Houk and Andy Barto) independently proposed that dopamine neurons are encoding the reward prediction error used in the algorithms of reinforcement learning theory. These algorithms are equipped with a range of options about what to do in the future, and choose an option based on the predicted value of taking each of them. Once an option is chosen, an error is computed between what was predicted and what turned out to be the actual outcome of choosing that option. This error is then used to update the predicted value of the chosen option: if the outcome was as expected, then no error occurred, and nothing needs changing; if the outcome was better than expected — a positive error — the value of the option increases; if the outcome was worse than expected — a negative error — the value of the option decreases. So this “prediction error” creates a way of turning feedback from the world into changes in behaviour.
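
Here is a toy version of that update rule: a minimal temporal-difference sketch of one cue-then-reward trial (the trial structure, learning rate, and trial counts are my illustrative choices, not anything from the original papers). The prediction error delta at each moment is: reward received, plus the new prediction, minus the old prediction:

import numpy as np

# Toy temporal-difference learner: t=0 is the pre-cue state, the cue
# appears at t=1, the reward arrives on entering t=4. delta is the
# reward prediction error dopamine neurons are proposed to broadcast.
T, alpha = 4, 0.2
V = np.zeros(T + 1)          # predicted value of each moment in the trial

def run_trial(rewarded=True):
    deltas = np.zeros(T)
    for t in range(T):
        r = 1.0 if (rewarded and t + 1 == T) else 0.0   # reward at the end
        deltas[t] = r + V[t + 1] - V[t]                 # the prediction error
        if t > 0:                        # the cue itself arrives unpredictably,
            V[t] += alpha * deltas[t]    # so the pre-cue prediction V[0] stays 0
    return deltas

first = run_trial()
for _ in range(300):                     # let the cue -> reward link be learnt
    trained = run_trial()
omitted = run_trial(rewarded=False)      # the expected reward fails to materialise

print("naive, at reward:  ", round(first[-1], 2))    # +1: unexpected reward, a burst
print("trained, at cue:   ", round(trained[0], 2))   # +1: the burst moves to the cue
print("trained, at reward:", round(trained[-1], 2))  # ~0: reward fully predicted
print("omitted, at reward:", round(omitted[-1], 2))  # ~-1: a dip below baseline

Run it and Schultz’s three signatures drop out: a burst at the unexpected reward, the burst migrating to the cue with learning, and a dip when the promised reward never arrives.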

The match between the firing of dopamine neurons and this prediction error was irresistible. According to Schultz’s data, dopamine neurons signal all three types of error: the absence of error when a reward is predicted, a positive error when a reward is unexpected, and a negative error when an expected reward fails to materialise. A seemingly clear match between the discrete step of an algorithm and the activity of some neurons in the brain.

(Well, not quite. Reinforcement learning theory itself was inspired by decades of research on how animals’ behaviour changes as they learn from reward, and then elaborated into how to best train a computer to learn. So in truth we had behaviour -> computational algorithms -> developed far beyond behavioural observations -> then neural activity found that matches steps in these algorithms).

The AI smelters among you may be wondering: what about the success of deep neural networks for doing brain-like computation? Like training neural networks to classify images, and then finding units that have properties like neurons in visual cortex? Well, AI-style neural networks are discrete time algorithms at heart. And deep neural networks throw another issue into the mix, as they have discrete layers, each feeding its output into the next one along. The brain does not have discrete layers. So the success of AI-style neural networks still leaves open the question of whether these operations can be mapped to the continuous dynamics of the brain.
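
The “discrete layers, discrete time” point in miniature, as a toy forward pass (the sizes and the choice of nonlinearity are arbitrary). Each layer updates all its units at once, in lockstep, then hands its output to the next; compare that to voltages drifting and spiking in continuous time:

import numpy as np

# A toy deep network: each layer is one discrete, synchronous step.
rng = np.random.default_rng(0)
sizes = [100, 64, 32, 10]
layers = [rng.normal(size=(m, n)) / np.sqrt(m) for m, n in zip(sizes, sizes[1:])]
x = rng.normal(size=sizes[0])       # the input, e.g. a flattened "image"
for W in layers:                    # layer 1, then layer 2, then layer 3:
    x = np.maximum(0.0, x @ W)      # every unit updates at once, in lockstep
print(x.shape)                      # (10,) -- say, ten class scores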

Another, outré, option here is that while the underlying biology of the brain works in continuous time, the effective operations of the brain are divided into discrete steps. One way this might work is through oscillations in neural activity. Brains are beset by oscillations in their activity, alternating between a short period of activity, and a short period of inactivity. If we were feeling generous, and perhaps had a little bit too much of the old vino, we could interpret each short period of activity as dividing continuous time into discrete windows. So each period of activity is one “step”. There is evidence, for example, of attention oscillating in this way, our ability to mentally engage periodically switching on and off. But oscillations of brain activity are rarely sustained over long periods, and are never nice clean runs of on then off, and on then off. And these oscillations are slow: many things the brain “computes” happen on much faster time-scales. Still, this idea that actually the brain does have discrete steps deserves holding in mind. And then not in mind. Then in mind again. Then not in mind. You can see where I’m going with this.

No, the brain does not run algorithms

So that seems like a lot of evidence that the brain does indeed run algorithms. And this conclusion is baked into a lot of neuroscience. People routinely write that the brain “computes”. David Marr’s highly influential ideas for how to go about understanding the brain divide the problem into finding out the algorithm and then finding the hardware, the bit of brain, that runs the algorithm. There are some who wonder how the brain could do anything if it is not running algorithms.

There’s a simple answer: we already know lots of things brains do that aren’t algorithms.

Take running, walking, or the crawling of a baby. Or of a snail. These are all rhythmic movements, the repeated contractions and relaxations of groups of muscles. While Usain Bolt’s muscles may contract and relax faster than yours, the same patterns of neural activity are driving these contractions in you both. These patterns are repeated bursts then silence from a group of connected neurons, each burst signalling the contraction of a particular set of muscles. The repeated patterns of activity are generated within the circuit of neurons, thanks to the way the neurons are wired together. These circuits self-generate dynamics to control repeated movements: they are not implementing any algorithm.

Such central pattern generators are found in the brain whenever we find something rhythmic going on in a body (almost: the heart has its own pattern generator). Chewing, for example. Swimming. Flying. Breathing is quite important I hear. All created by circuits of neurons that continuously generate their own activity. No hint of an algorithm, unless “breathe in, then breathe out” counts.
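
Here is a sketch of the textbook circuit for this, the half-center oscillator: two pools of neurons that inhibit each other, with slow fatigue (adaptation) in whichever pool is active, so activity ping-pongs between them like flexor and extensor. This is a generic rate-model cartoon with made-up constants, not a fit of any real circuit; the point is that the rhythm comes from the wiring, with no algorithm anywhere:

import numpy as np

# Half-center oscillator: mutual inhibition + slow adaptation = alternation.
dt, tau, tau_adapt = 1.0, 10.0, 200.0       # ms; made-up time constants
w_inhib, g_adapt, drive = 4.0, 10.0, 3.0    # made-up coupling strengths
a = np.array([1.0, 0.0])                    # activity of the two pools
s = np.zeros(2)                             # slow "fatigue" of each pool
dominant = []
for _ in range(3000):                       # 3 seconds of simulated time
    inhibition = w_inhib * a[::-1]          # each pool is inhibited by the other
    a += dt / tau * (-a + np.maximum(0.0, drive - inhibition - g_adapt * s))
    s += dt / tau_adapt * (a - s)           # the active pool slowly fatigues,
    dominant.append(a[0] > a[1])            # letting the other take over
switches = int(np.sum(np.diff(np.array(dominant, dtype=int)) != 0))
print("the two pools swapped dominance", switches, "times in 3 s")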

Singular movements tell us a similar story. An arm reaching for a pint glass or a stick of celery isn’t doing anything rhythmic. But we don’t need recourse to algorithms to understand where the neural activity is coming from. A set of brief changes in the activity of neurons in the arm part of motor cortex, which talk to neurons in the spine, which contract the muscles. What algorithm here?

The usual retort at this point would be: aha! But they’re all movements — surely the richness of memory, of planning, of thought, surely these need something computational, not just “dynamics”?

Here are some solutions involving dynamics.

A solution for memory. We’ve known how memory could be created by pure “dynamics” for decades: a simple memory can be stored and recalled simply by a circuit of neurons that fall into a particular pattern of activity given a particular input. Such networks can turn a partial input into a full memory, like how the smell of burnt toast can evoke a rich memory of a slightly disappointing childhood tea-time.
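
The classic example of such a network is a Hopfield network: store some patterns in the weights with a Hebbian rule (neurons that fire together, wire together), then hand the dynamics a corrupted cue and the network falls into the nearest stored pattern. A minimal sketch, with random made-up “memories”:

import numpy as np

# Hopfield network: memories live in the weights; recall is the dynamics
# falling into the stored pattern nearest the cue.
rng = np.random.default_rng(42)
N = 100
memories = rng.choice([-1, 1], size=(3, N))       # three made-up "memories"
W = (memories.T @ memories) / N                   # Hebbian storage
np.fill_diagonal(W, 0)                            # no self-connections

cue = memories[0].copy()
cue[:40] = rng.choice([-1, 1], size=40)           # corrupt 40% of the memory:
state = cue                                       # the whiff of burnt toast
for _ in range(10):                               # let the dynamics settle
    state = np.where(W @ state >= 0, 1, -1)
print("match with the stored memory:", (state == memories[0]).mean())  # ~1.0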

A solution for doing things with probabilities. Our brains often make use of the probability of something happening, rather than the certainty of it happening. Rewards tend to be uncertain: you may get a promotion if you prove your dedication to the company by signing all reports in your own blood; or you may not. Recent work has shown how a network of neurons that are wired a bit like cortex, and continuously send spikes to each other, can represent probabilities. And, in so doing, can use those probabilities to solve problems. There’s one network that solves Sudoku puzzles (yep, its output needs translating into symbols to do this: but those symbols are ours, not the network’s — it has no idea it’s solving a Sudoku puzzle.)

A solution for doing practically anything with an input. With a name that should really be a manga, Liquid State Machines are a general solution to the problem of “doing stuff with continuous dynamics”. They are a group of model neurons, randomly wired together, that continuously send spikes to each other. The important bit is that the network is a mixture of excitatory and inhibitory neurons, with the latter making their target neurons less likely to fire a spike. This matters because the resulting network is almost guaranteed to have chaotic dynamics. And chaotic dynamics give us a very rich playground in which to do stuff.

These chaotic dynamics mean that the transient change in the network’s activity given some input is very different from the change given a slightly different input. This big difference is a non-linear response to inputs, yet can easily be read out by a downstream set of neurons. So, in principle, you can achieve any operation on the input you like. The question is: how would a bit of brain end up (through the unholy trinity of evolution, development, and learning) with exactly the right kind of wiring, and the right kind of read-out, to get the desired operation on those inputs? Good question, that needs answering. And people are working on answering it.
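
Liquid state machines proper use spiking neurons, but their rate-based cousin, the echo state network, shows the trick in a few lines: a fixed, random recurrent “reservoir” churns the input into rich transient dynamics, and the only thing trained is a simple linear read-out, here by least squares. The task (recall the input from five steps ago) and all the sizes and scalings are my arbitrary illustrative choices:

import numpy as np

# Echo state network: a fixed random reservoir turns the input stream into
# rich transient dynamics; only the linear read-out is trained.
rng = np.random.default_rng(0)
N, T, delay = 200, 2000, 5
W = rng.normal(size=(N, N)) / np.sqrt(N) * 0.9   # random recurrent weights,
W_in = rng.normal(size=N)                        # scaled for stable dynamics
u = rng.uniform(-1, 1, size=T)                   # the input stream
x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])             # the "liquid" churns
    states.append(x.copy())
X, y = np.array(states)[delay:], u[:-delay]      # target: the input 5 steps back
w_out = np.linalg.lstsq(X, y, rcond=None)[0]     # train only the read-out
pred = X @ w_out
print("read-out correlation:", np.corrcoef(pred, y)[0, 1].round(3))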

There is no answer, yet

There are many who think the brain is not a computer. Esteemed physicist Roger Penrose dedicated two long books to the idea. Yet he somehow leapt from the premise that “the brain is not a computer” to the existence of quantum consciousness without considering there may be something in between. Like what the brain actually does: produces continuous dynamics that may or may not approximate an algorithm.

“The brain is a computer” is not a metaphor. It is an hypothesis, one that can be defined rigorously enough that we can actually test it. And people can. And are. For hypothesising that a bit of brain runs a certain algorithm makes predictions: predictions about how the activity in that bit of brain will change when the world changes (in ways that are relevant to that algorithm); or predictions about how previously unsuspected bits of brain should be involved in a task if the brain is to compute the algorithm.

Such predictions have been made for the hypothesis that dopamine neuron firing is a “prediction error”. For example, we can make predictions about what the inputs to the dopamine neurons should be. Recall that the dopamine neurons stop firing when there is a negative prediction error, the error when a reward was expected but not delivered, and they stop firing exactly when the expected reward was supposed to turn up. To do this, the dopamine neurons would need to receive an inhibitory signal that stops their activity at exactly the time the reward was expected. A decade of work has shown that the lateral habenula seems to provide exactly this timed inhibitory signal to the dopamine neurons. And this is just one of the many possible predictions that could be tested.

No one study will clinch the argument that a bit of brain runs algorithm X. Science doesn’t work like that. Support for an hypothesis comes from multiple strands of work, each perhaps weakly supporting the idea, but which when woven together form a giant basket of righteous science (I never claimed to be good at metaphors either). So the answer is not here yet, and any answer will come slowly but inexorably, like a psychopathic slug.

Do I think the brain is a computer? No. I am fully prepared to be totally wrong about that. I have quoted, taught, and even published papers arguing that brains implement algorithms. As I have just shown in all of the above, I can agree with two opposing ideas at the same time. Scientists must often live with such ambiguity: as arguments get polarised into yes/no questions, so it becomes clear that neither position is true. The human mind is fantastic at dealing with ambiguity. Come to think of it, does that mean it’s not a computer, after all?

Want more? Follow us at The Spike

Twitter: @markdhumphries

Further reading:

Paul Cisek’s 1999 paper with an excellent take on another alternative to the brain-as-computer hypothesis: the “interactionist” model

Romain Brette’s 2018 paper deconstructing the other pervasive metaphor about the brain: that it “encodes information”

Don’t confuse a “computer” with information processing: information theory is independent of its substrate

Eric Jonas and Konrad Kording doing the reverse metaphor, asking: can we understand a computer processor using the tools of neuroscience? Their paper is here; read my account of their work here.


Mark Humphries
The Spike

Theorist & neuroscientist. Writing at the intersection of neurons, data science, and AI. Author of “The Spike: An Epic Journey Through the Brain in 2.1 Seconds”