Why the brain is not like a computer.

Narcis Marincat
Published in Is Consciousness
10 min read · Nov 9, 2020

And artificial intelligence is not likely to surpass human intelligence any time soon.

There are two things that I am passionate about: computers — I’ve been fiddling with them since before I had my first proper drink, and have built, fixed and tampered with more than I can count throughout my decade-long career as an IT engineer; and neuroscience — my fascination with the mind began when I discovered that I had one, and I have been poking and prodding its undercurrents ever since.

And so, I went ahead and studied each subject: I studied Psychology and Neuroscience as my undergrad, and am currently working through my Masters in Computer Science at UCL.

The area where the two fields — IT and neuroscience — naturally merge is artificial intelligence (AI), where you find such tech as “artificial neurons” that together produce “artificial brains”. You hear a lot of talk in the AI field about general AI, an artificial intelligence that will be able to learn and perform everything that humans can, and which some predict will be developed within the next couple of decades. Others suggest that as general AIs are developed and continue to improve, this will eventually lead to the AI singularity, an AI that will far surpass human intelligence, even the collective intelligence of all humanity. According to the believers, it’s just a matter of time before technology becomes advanced enough for the singularity to come into existence; surveys (e.g. 1) reveal that a sizeable portion of IT experts predict it will happen within a century or so.

General AI and singularity proponents also tend to be the people who believe in the computational theory of mind — the idea that the brain fundamentally performs computations, sort of like a computer, and that the mind is like the software running on that computer. Indeed, the whole prediction that general AI and singularity will come into existence rests on the premise that there is no fundamental difference in ability between brain cells and the chips of a computer.

But here’s the thing: the computational theory of mind has its origins in the 40s and 50s, when we knew much less about how neurons function than we do now. Fundamentally, artificial neurons are equations that try to emulate some of the major things we knew about neurons back then, including:

  • The fact that neurons take a number of input signals from neighbouring neurons and then produce an output that corresponds to either “activation” or “no activation”. If the combined strength of the input signals crosses a particular threshold, the neuron fires; if it does not, it remains silent. Artificial neurons use mathematical functions that aim to simulate the same kind of behaviour. Within this model, information processing is thought to be done at the level of networks of neurons.
  • The fact that real neurons are organized in layers; the deep neural networks behind the massive advancements in AI also arrange their artificial neurons in multiple layers.
  • The fact that the connections between neurons may carry different “weights”. A stronger connection between two neurons means that one neuron’s activation brings the second neuron closer to the threshold of firing. Artificial neurons have such weights in their artificial connections.
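The three points above can be sketched in a few lines of Python — a minimal toy illustration of the classic threshold model (all names and numbers here are made up for demonstration; real deep-learning systems use smooth activation functions and learned weights):

```python
def artificial_neuron(inputs, weights, bias, threshold=0.0):
    """Classic threshold artificial neuron: take a weighted sum of the
    inputs, then either fire (1) or stay silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > threshold else 0

def layer(inputs, neurons):
    """One 'layer': every neuron in it sees the same inputs.
    Each neuron is a (weights, bias) pair."""
    return [artificial_neuron(inputs, w, b) for w, b in neurons]

# A tiny two-layer "network": a hidden layer of two neurons feeding
# a single output neuron.
hidden = layer([0.5, 0.9], [([1.0, 1.0], -1.2), ([-1.0, 1.0], 0.0)])
output = layer(hidden, [([1.0, 1.0], -0.5)])
```

Everything interesting in such a network lives in the weights and the layered wiring; the “neuron” itself is just a weighted sum followed by a yes/no decision.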

But since the turn of the century, new techniques have allowed us to peer into the lives of biological neurons in unprecedented detail, and although we have much more to discover, some of the things we’ve learned about the cells that make up our brain make comparing them with artificial neurons look like comparing a Maserati with a drawing of a car made by a 4-year-old.

The difference between an artificial neuron (left) and a real neuron (right)

Amazing new(-ish) facts about neurons

Here are just some of the things that we’ve learned in the past few decades:

  • Neurons don’t just “sum up” their inputs and produce an output when the total reaches a certain threshold. As early as the 80s, Christof Koch and other neuroscientists revealed that dendrites, the branched extensions of the cell, can independently process the signals that they receive from other neurons. And much more recently, the discoveries went one step further when it was found that individual compartments within each dendrite can process information independently (2). To summarize this point: artificial neuronal networks process information at the level of the network, but we now know that in the actual brain, individual neurons, their dendrites, and even parts of a single dendrite can process information independently. That paints biological neurons as far more complex than those of any artificial neuronal network.
A diagram of a biological neuron on the left, and the microscopic image of such a neuron on the right, for good measure. Credits for picture on the right: Dr. Robert Brendan
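The contrast with the point-neuron model can be sketched as follows — a toy illustration only, in the spirit of “two-layer neuron” models, where each dendrite applies its own threshold before the soma combines the results (all thresholds and values are invented for demonstration, not measured biology):

```python
def point_neuron(inputs, threshold=1.0):
    # Classic model: every input is summed at once at the soma.
    return int(sum(inputs) > threshold)

def dendritic_neuron(branches, dendrite_threshold=0.5, soma_threshold=1.0):
    # Each dendritic branch first processes its own inputs independently...
    branch_outputs = [int(sum(b) > dendrite_threshold) for b in branches]
    # ...and only then does the soma combine the branch results.
    return int(sum(branch_outputs) > soma_threshold)

# The same total input (4 x 0.3 = 1.2) gives different answers
# depending on how it is distributed across branches:
flat = point_neuron([0.3, 0.3, 0.3, 0.3])                # fires
clustered = dendritic_neuron([[0.3, 0.3], [0.3, 0.3]])   # fires
scattered = dendritic_neuron([[0.3], [0.3], [0.3], [0.3]])  # silent
```

The point is that a single neuron with independently computing sub-units can tell apart input patterns that a simple summing neuron cannot — which is one reason the dendrite findings matter.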

Also, neurons don’t just communicate using electrochemical impulses, which is what artificial neuronal networks try to imitate. Here are three examples of alternative forms of neuronal communication:

  • It has been found that neurons can build tiny tubes between them, known as tunneling nanotubes, through which they pass various molecules (3). These molecules can in theory lead to a change in the behaviour of individual neurons.
  • A neuron can send and receive sacs filled with lipids, proteins and even RNA molecules to and from other cells. These sacs are a type of extracellular vesicle known as exosomes, and are increasingly seen as playing a vital role not just in neuron-to-neuron communication(4), but also in the communication between neurons and other cells in the central nervous system, such as astrocytes (5).
  • Light has been found to play a potential role in neuron-to-neuron communication, after scientists discovered biophotons in the brain (6).

These forms of communication are not even considered within standard artificial neuronal networks, though they may play a significant role in biological systems.

But biological neurons do much more than communicate with one another. Here are a couple of noteworthy examples of what neurons can do outside of signalling:

  • Biological neurons can self-repair, and also re-establish neuronal networks after damage to the nervous system. That is why you can sometimes see complete recovery following brain injury (7). The precise mechanism behind this self-repair is unclear.
  • Neurons that are implanted into a brain following brain injury know to travel into the part of the brain that is injured in order to help with recovery (8). Here too, the precise mechanism behind this neuronal migration is not well understood.

Are we sure about the computational theory of mind?

Within the computational framework, neurons are seen as nodes that perform mathematical operations, like addition, subtraction, or perhaps even some sort of simple repetition or selection algorithm (e.g. if this then that). In other words, they are seen as ‘dumb robots’, as Daniel Dennett has been known to call them. Consequently, the self-proclaimed task of computational neuroscience is to:

  • Find out what these mathematical operations are. Are individual neurons just able to perform addition and subtraction? Or are they able to perform more complex operations?
  • Find out where these mathematical operations take place. Do they take place at the level of neuronal networks, individual neurons, synapses and dendrites, or even dendritic spines?
  • Model what we’ve learned about the two points above in a computational model and/or computer simulation, to see whether our computational theory fits the (very limited) empirical information — especially since it’s much easier to get data from a simulation than from actual brain cells. More on this below.

But it’s important to note here that there’s no real proof that the brain fundamentally ‘performs computations’. We have not discovered a single cell in nature that does calculus. Rather, the analogy between the brain and the computer was just a paradigm that fitted what we knew about neurons over half a century ago, when this theory caught on. Back then, the information we had on what individual neurons do was extremely limited, mostly because it’s incredibly difficult to analyse what takes place at the level of individual neurons. This is ESPECIALLY true when it comes to living human brain cells, which are in extremely short supply for lab testing, even in our day. Case in point: Larkum’s team, which led the study that discovered how even parts of a dendrite can process information independently, would patiently wait for a brain tissue sample collected from epileptic patients during surgery, and once they had it, would sometimes work for 24 hours straight to collect information from that sample before the tissue died. Neurons don’t last very long outside of the body.

It’s a surprisingly understated fact that gathering scientific information on the workings of living individual nerve cells is an extremely complicated affair, one that we’re still struggling with. Nevertheless, thanks to the relentless pursuit of dedicated scientists like Larkum and his colleagues, we are learning more and more about neurons and their idiosyncrasies, and have learned quite a bit since the 40s and 50s, when the computational theory of mind first emerged.

So what does this all say about the computational theory of mind, general AI and the singularity?

The computational theory of mind might have been very plausible back in the 50s, when neurons could be seen as ‘dumb’, but it doesn’t seem to fit the current findings in neuroscience very well, which paint neurons as highly complex entities. In fact, the only constant that seems to have remained standing across the history of neuroscience is that the more our neuroscientific tools evolve and allow us to analyse the activity of individual neurons, the more complex neurons reveal themselves to be. And as much as we’d like to fit the new data coming out about neuronal activity into the computational framework, it’s looking to be more and more like trying to fit a square peg into a round hole. Perhaps the computational theory of mind is not the way to go — indeed, if it were, we would have probably solved the mystery of consciousness a long time ago.

Rather than describing them as ‘dumb’ nodes that perform mathematical operations, perhaps a better way to describe neurons would be to say that they are ALIVE — in other words, that they are living, complex beings. After all, they eat, work, rest, and communicate in complex and dynamic ways. Such a view would not surprise someone who regularly works with living cells, since even the simplest cells in nature have shown themselves capable of performing quite complicated tasks that require memorization, problem solving, and dynamic responses to their environment. The testate amoeba, for example — a single-celled organism widely regarded as one of the simplest lifeforms in nature — can build a shell-like structure around itself out of different materials for shelter and protection. And since human neurons are widely considered to be some of the most complex cells found in nature, can they be any less extraordinary? It’s just that amoebas and other single-celled organisms are much easier to observe in a lab than neurons are, so it’s taking us much longer to recognize the complexity of the latter.

And if the computational theory of mind with the idea of neurons as ‘dumb nodes’ is false, then all that talk about general AI and the singularity will turn out to be a red herring, since it rests on the idea that artificial neurons are at least moderately similar to biological neurons.

All evidence points to the fact that current artificial neurons are not even close to the complexity of actual neurons, and perhaps never will be. In fact, it looks like to create the kind of intelligence that humans have, you need about 86 billion interdependent, complex entities (i.e. the number of neurons in the human brain) that are specialized in creating thought-related information, rather than billions or trillions of transistors. That is a very different architecture indeed. And even that may not be the whole story, since the brain contains other kinds of cells that have proven themselves to be involved in intelligence. For example, glial cells are also part of the nervous system, and are thought to provide different types of support for neurons. But when glial cells were taken from human brains and transplanted into mouse brains, the mice became better at problem solving and memorization. Does this mean that glial cells are involved in cognition? There are about as many glial cells as there are neurons in the human brain, so the answer to this question matters a great deal, especially for the computational theory of mind.

Conclusion

As we’ve mentioned prior in our series on the history of consciousness, one of the rules of thumb for when the concept of consciousness experienced an upgrade historically was when our understanding of the body changed, in particular of the brain. For example, at the end of the 19th century, scientists reached the conclusion that the brain was fundamentally made of cells, and that changed our entire view of what consciousness is — whereas before it arose from the brain, now it arose from brain cells.

We may be due for another upgrade in the concept of consciousness, considering all of this new information on the complexity of individual neurons. So far, the full range of scientific explanations for consciousness has tended to rest on the premise that neurons are ‘dumb’ cells that perform computations. But as that idea starts to fly in the face of the scientific evidence, how will our theories of consciousness change? We’ll be exploring that next in Is Consciousness.


Narcis Marincat
Is Consciousness

Psychology, Neuroscience & CompSci graduate (UCL & Royal Holloway). Interested in consciousness, AI, philosophy, sociology & cyberpsychology, or mind+tech.