A Frame of Mind
To call the brain a machine is to ignore the complex and social nature of human intelligence
by Mark Lee

In 2015, Gary Marcus, professor of psychology and neural science at New York University, wrote an article in the New York Times entitled ‘Face It, Your Brain Is a Computer’. The piece was an attempt to reinforce the theory that the brain is a computer and the mind is the effect of a computation running in the brain. After attacking a range of weak arguments as to why brains are not computers, Marcus concludes that the brain is not a simple algorithm-crunching machine but, he insists, is some kind of computer.
This is just one example of an increasing number of voices arguing that the brain is a machine. This credo is repeated as though it is both established and important. While the statement is fairly meaningless in itself, it is being used, possibly unintentionally, to promote an approach towards artificial intelligence that is unhelpful and dehumanising.
The claim is a vacuous truth that adds nothing either way. Modern science is based on reductionist, mechanistic models, which prove extremely effective for understanding most physics and chemistry, and much biology. So it is not difficult to be persuaded that biological systems might be machines of some kind. If we accept that viruses, bacteria and living cells are miniature machines — as research in biology and medicine has suggested is the case — then whole systems built from these cells are machines too. From this perspective, brains, animals and plants are all machines.
However, this is like saying the sea is made of oxygen and hydrogen, with a bit of sodium chloride. It tells us nothing about the behaviour of waves, the different states and conditions of the sea, the forces involved, or the way that large volumes of those molecules interact with the other entities they influence: the land, the atmosphere and planetary motion. It also ignores human values, such as aesthetics.
But unfortunately the brain-as-machine model is not entirely neutral in effect; it carries misleading implications that have negative consequences. It suggests that we, experienced machine builders, can build a machine very similar to the human brain, and that the brain is much easier for science to understand than it actually is. Critically, it ignores the role of context, without which the human ‘machine’ cannot exist.
On one level, the machine analogy between brains and computers is compelling. If you place an electrical probe into any one of the transistors inside a computer chip you will detect a series of pulses. The whole thing is built up from nothing more than a few billion identical transistors switching on and off very rapidly. If you now insert a microprobe into the living human brain and make contact with a single neuron, you will also see a series of electrical pulses that appear to be switching on and off, in time with some internal function. The similarity between these two large electrical systems has long fascinated humans and, in particular, scientists. If the brain’s wiring diagram could be copied, and each neuron represented by a transistor or other artificial neuron, then, the story goes, we would have an electronic digital brain, functionally identical to the original.
All this assumes that scientific reductionism is not only capable of cracking this problem but sufficient for the task. Reductionism allows complicated machines to be built up from components with known individual behaviour, so that a system’s behaviour can be deduced from the interactions of its components. However, with very large numbers of components, and particularly where the number of interactions between components is also high, the system may behave in ways, and exhibit features, that do not exist in any of the components. This is known as emergence: the appearance of properties that are new and unexpected, and that could not have been predicted from knowledge of the components alone. This is where the limits of reductionism become apparent.
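To make the idea of emergence concrete, here is a minimal illustrative sketch in Python (my own, not drawn from the article or from any of the authors discussed): Conway’s Game of Life, in which every cell obeys one trivial local rule, yet a ‘glider’, a shape that crawls diagonally across the grid, emerges. Nothing in the rule mentions movement; that property exists only at the level of the whole system.

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (row, col) cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is alive next generation on exactly 3 neighbours,
    # or on 2 neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic five-cell glider.
cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):       # four generations: one full glider cycle
    cells = step(cells)
print(sorted(cells))     # the same shape, shifted one cell diagonally
```

Nothing in `step` says ‘move’; run it and the printed coordinates are the original glider translated one cell along the diagonal, behaviour you would be hard pressed to predict by inspecting the rule alone.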
Complex system effects tend to occur when large systems contain nonlinearities, many feedback loops, fractal structures or self-organising internal structure. These are found in artificial neural networks such as deep learning systems, in all kinds of living cells, in ecosystems, and in societal systems such as economic, financial and other human networks and organisations. And, of course, the brain, being a very large complex system, with its 20 billion non-linear neurons in the cerebral cortex and the feedback loops from their roughly 20 trillion interconnections, fits well into this category too. Note that the average number of connections per neuron in the brain is about 1,000 (20 billion neurons at 1,000 connections each gives the 20 trillion figure); in a computer it is around five.
So it would actually be very surprising if the brain did not produce emergent behaviour, or if it were easy to understand. As Jack Cohen and Ian Stewart quipped in their entertaining book about chaos theory: “If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”
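The same point can be made about nonlinear feedback, the other ingredient mentioned above. Another illustrative sketch (mine, not Cohen and Stewart’s): the logistic map, where each value is r·x(1 − x) of the previous one, is a single deterministic feedback loop whose rule is fully known, yet two starting points differing by one part in a billion diverge completely within a few dozen iterations.

```python
r = 4.0                      # parameter value in the fully chaotic regime
a, b = 0.2, 0.2 + 1e-9       # two almost-identical initial conditions
for n in range(1, 51):
    a = r * a * (1 - a)      # logistic map: x -> r * x * (1 - x)
    b = r * b * (1 - b)
    if n % 10 == 0:
        print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.1e}")
# The gap grows from 1e-9 to order 1: knowing the rule exactly does not
# make the long-run behaviour predictable in practice.
```

If five lines of arithmetic can defeat prediction like this, a system of billions of nonlinear elements wired into feedback loops is hardly going to be transparent.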
The second and more important objection is that the reductionist view encourages technologists to concentrate on building a single, isolated artificial version of the brain. This entirely ignores the interactions between brains that form an absolutely crucial context for their behaviour. In our very early years, our comprehension, knowledge, skills and other cognitive abilities depend on deep interactions with at least one attendant carer; without parental care we do not develop and thrive. Then, in adult life, we form the groups, societies and organisations by which we survive and flourish. The societal aspects of human life demonstrate that intelligence is not bounded and contained within individuals but exists across populations and is shaped by the culture of a society. This means artificial intelligence has to face the fact that intelligence is not just a single entity, bounded by the skull, but is also diffuse, requiring social interaction and close cooperation.
Human learning takes place through interactions, not by the offline processing of vast quantities of data. This is a key difference between biological brains and computer ‘brains’. A brain-centric approach to artificial intelligence ignores both the fact that human learning requires a body to support the life of the brain, and the role that this physical interaction with the world plays. Modern robotics is showing how important this is, and will be the real test-bed for artificial brains.
All this matters. The machine analogy gives false confidence; it oversimplifies the brain, closes off other relevant lines of enquiry and trivialises human beings. So, next time someone says your brain is a machine, you could reply: “So what? My brain only makes sense embedded in the rest of the machine, my body, and you’ll need to authentically duplicate all that, plus a few other people, if you want to model the whole brain in its working environment!”
Mark Lee is a professor in the Department of Computer Science at Aberystwyth University
This article first appeared in the RSA Journal — Issue 2 2018

