Embodiment and the Inner Life (Oxford University Press, 2010)

Are Brains Computers?

Murray Shanahan
Feb 28, 2020


The following is Section 4.3 of my 2010 book “Embodiment and the Inner Life: Cognition and Computation in the Space of Possible Minds” (Oxford University Press). The book is ten years old and covers many topics from many perspectives (cognition, consciousness, neuroscience, AI). But for the record, here is my take on this particular question. I still think it’s about right.

Neural computation

The brain is not a computer. [1] That is to say, there is almost nothing about its operation, whether at the level of abstract principle or of underlying substrate, that resembles that of the everyday device we use to send email, to browse the Internet, to store and display photos, to play music, and so on. To begin with, the brain is embodied, while an ordinary computer — that is to say a conventional computer running familiar applications — is disembodied. The brain’s “job” is to control a body and direct its interactions with the physical and social environment. An ordinary computer, by contrast, does not have to navigate the physical environment or manipulate the rich variety of objects it contains, and its severely limited interface with physical reality is through its connection to various static peripherals.

Besides this radical difference of function, there are several fundamental differences in organisation. First, the architecture of a conventional computer comprises an active central processor and a passive memory, while in the brain there is no such division. Second, the behaviour of a computer running a familiar application is governed by a set of explicitly coded instructions written by a team of software engineers. The brain’s dynamics, by contrast, is not programmed but is partly the product of evolution and partly the outcome of adaptation to the environment it finds itself in. Third, the ordinary computer of today is (largely) a serial machine that carries out one operation at a time, while the brain is inherently a massively parallel system. [2]

As well as the differences in function and organisation already cited, there are mathematical considerations that separate brains from conventional computers. In particular, an everyday computer is a digital device, while the brain is an analogue system. A complete description of the instantaneous state of a computer is possible using a finite set of binary (or natural) numbers, abstracting away from the details of its physical instantiation. The membrane potential of a neuron (to pick just one physical property), by contrast, is a continuous quantity, and its exact value is pertinent to predicting the neuron’s behaviour. Theoretically speaking, a complete description of the instantaneous state of a brain is only possible using a set of real numbers — numbers drawn from the continuum, that is. From a mathematical point of view, this locates the brain outside the realm of conventional computation, where that realm is defined by the set of functions that can be realised by a Turing machine. [3]
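
The digital side of this contrast can be made concrete with a toy sketch (not from the book; the voltage range, bit-depths, and values are invented for illustration). A digital machine stores any quantity with finitely many bits, so a continuous value such as a membrane potential can only ever be quantized; finer bit-depths shrink the error but never eliminate it:

```python
def quantize(x, bits, lo=-80.0, hi=40.0):
    """Round x (in mV) to the nearest of 2**bits representable levels
    spanning [lo, hi] -- the best a digital store of that width can do."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    return lo + round((x - lo) / step) * step

v = -64.3217  # an arbitrary "true" membrane potential in mV
for bits in (4, 8, 16):
    # More bits give a closer approximation, but the stored value
    # remains one of finitely many levels, never the continuum itself.
    print(bits, quantize(v, bits))
```

The quantization error shrinks exponentially with the bit-depth, which foreshadows the arbitrary-precision point made later in the section.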

A related point is that ordinary computers are (predominantly) synchronous devices, while the brain’s operation is asynchronous. That is to say, the entire state of a computer advances to its successor when and only when the computer’s centralised clock ticks, so that all its internal events line up in time like a row of soldiers. But events in the brain, such as neuron firings, do not keep time in this orderly way. Although synchronised neural activity is commonplace, there is no centralised clock in the brain to maintain overall temporal discipline. In general, an electrical spike can be emitted by a neuron at any time, where time, of course, is another continuous variable. From a mathematical point of view, this property alone could be enough to push the dynamics of the brain beyond the class of Turing computable functions.

So the brain is very unlike a computer. On the other hand, a computer can be programmed to be very like a brain. For a start, a computer can have an embodied function just as a brain does. It can be programmed to direct the actions of a robot through feedback based control, its input drawn from a variety of sensors, such as cameras or haptic devices, and its output driving effectors such as arms, legs, or drive-wheels. With respect to its organisation, a computer can be programmed to implement a virtual machine whose principles of operation are entirely different from its own. For example, there are many types of parallel computer architecture, all of which can be emulated on a strictly serial machine using time-slicing. The serial computer, so to speak, pretends to be each parallel processor for a short time, doing a little of the work of each in turn. If the serial processor is fast enough, it’s impossible to distinguish an emulated parallel computation from the real thing.
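
The time-slicing idea can be sketched in a few lines of Python (an illustrative toy, not code from the book): each “parallel processor” is a generator that yields after a small unit of work, and a single serial loop does a little of each in turn:

```python
def counter(name, n):
    """A stand-in for one parallel processor's workload."""
    for i in range(n):
        yield f"{name}:{i}"

def run_serially(tasks):
    """Round-robin time-slicing: a serial loop emulates parallelism by
    advancing each task one step, then moving to the next."""
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))
            tasks.append(task)       # its slice is over; re-queue it
        except StopIteration:
            pass                     # task finished; drop it
    return trace

print(run_serially([counter("A", 2), counter("B", 2)]))
# The trace interleaves A and B, as if both had run at once.
```

Run fast enough, the interleaving is invisible from outside, which is the sense in which the emulation is indistinguishable from the real thing.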

One sort of virtual machine that a conventional computer can implement is a network of artificial neurons. The more biologically faithful the artificial neurons are, the narrower the gap becomes between the virtual machine and the brain. Using the differential equations proposed by Hodgkin and Huxley in the 1950s, for example, the spiking behaviour of real neurons can be modelled very accurately. [4] Moreover, a simulated network of Hodgkin-Huxley neurons can be supplemented with a Hebbian learning rule, such as spike-timing dependent plasticity (STDP), leading to a dynamical system that, like the brain, is not programmed but is open to adaptation to its environment. [5] Although the real computer this dynamical system is implemented on is conventionally organised in terms of an active central processor and a passive memory, this is invisible at the level of the virtual neural substrate, whose organisation, given the computational power to simulate a sufficient number of neurons, can be made to mimic that of a biological brain.
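
As a minimal sketch of such a virtual machine (using a leaky integrate-and-fire neuron in place of the full Hodgkin-Huxley equations, with invented parameter values; none of this code is from the book), a simulated spiking neuron and a pair-based STDP rule might look like this:

```python
from math import exp

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Return spike times (ms) of a leaky integrate-and-fire neuron
    driven by a sequence of input currents, one per time step."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Euler step of dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

def stdp(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_s=20.0):
    """Pair-based STDP: strengthen the synapse if the presynaptic spike
    precedes the postsynaptic one, weaken it otherwise."""
    dt = t_post - t_pre
    if dt > 0:
        return w + a_plus * exp(-dt / tau_s)
    return w - a_minus * exp(dt / tau_s)

spikes = simulate_lif([20.0] * 1000)            # 100 ms of constant drive
print(len(spikes))                              # regular spiking
print(stdp(0.5, t_pre=5.0, t_post=10.0))        # pre before post: potentiation
```

The point of the sketch is only that both the neuron model and the learning rule are ordinary programs, so the adaptive dynamics described in the text can live as a virtual machine on conventional hardware.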

Of course, no virtual machine can transcend the computational limits of the real machine that is its host. A conventional computer can only ever imperfectly model asynchronous events in a system of continuous variables. But this mathematical limitation may be less significant than it at first seems. Consider any dynamical system of continuous variables. Although it is not possible to represent any state (including the initial state) of the system exactly in a conventional digital computer, it is possible to represent it to an arbitrary degree of precision. Likewise, although it’s not possible to simulate an exact trajectory through the system’s state space on a conventional computer, it is possible, over any given interval, to simulate it to an arbitrary degree of precision. So, unless the system in question is chaotic, this means that a simulation in a digital computer can, in theory, be made to match, state for state, its continuous counterpart up to any degree of precision required.
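
A toy demonstration of this point (an illustration, not an example from the book): Euler integration of the continuous system dx/dt = -x approaches the exact solution exp(-t) as the step size shrinks, so the precision of the digital simulation is bounded only by the effort spent:

```python
from math import exp

def euler_error(dt, t_end=1.0):
    """Error at t_end between Euler integration of dx/dt = -x
    (starting from x = 1) and the exact solution exp(-t_end)."""
    n = round(t_end / dt)
    x = 1.0
    for _ in range(n):
        x += dt * (-x)   # one Euler step
    return abs(x - exp(-t_end))

for dt in (0.1, 0.01, 0.001):
    print(dt, euler_error(dt))
# The error shrinks roughly in proportion to the step size:
# arbitrary precision, though never exactness.
```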

If the system is chaotic — that is to say if a difference in its initial conditions, however small, is amplified over time and becomes arbitrarily large in the limit — then things are not quite so simple. In the chaotic case, because the initial state of the continuous system cannot be represented exactly in a digital computer, imprecisions in the simulation become ever larger over time. However, it is still often possible to simulate typical trajectories through the system, and to extract their statistical properties. So the extent to which the limitations of digital computation are a handicap when it comes to making conventional computers brain-like depends on the role of chaos in neurodynamics. That networks of neurons do indeed exhibit chaotic dynamics is highly likely, as Freeman argued in the 1980s for the example of the olfactory bulb. [6] But it may be the case that functionally equivalent effects — functionally equivalent for the purposes of behaviour and cognition — can be produced in a discrete system that merely simulates such chaotic dynamics.
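
The logistic map gives a standard toy illustration of both halves of this point (my example, not one from the book): trajectories from nearly identical initial states diverge completely, yet their long-run statistics remain close and can still be extracted from a simulation:

```python
def logistic_trajectory(x0, r=4.0, n=1000):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)        # a tiny perturbation

# The states decorrelate: the largest gap between the two
# trajectories grows to order 1.
print(max(abs(x - y) for x, y in zip(a, b)))

# But the statistics survive: the two long-run means stay close.
print(abs(sum(a) / len(a) - sum(b) / len(b)))
```

This is the sense in which a digital simulation of a chaotic system loses the exact trajectory while keeping its statistical character.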

In sum, although the brain is not like a conventional computer running familiar applications, a conventional computer running the right program can be made very brain-like. Moreover, neural networks can be made to carry out computation. It has been proved that networks of neurons conforming to a variety of mathematical descriptions, including biologically realistic spiking models, can realise any Turing computable function. [7] Indeed, it has been shown that, with continuously valued synaptic weights, networks of certain types of neuron can also compute functions that are impossible to realise on a Turing machine. [8] Nevertheless, we still have not answered an important question. Is it appropriate to describe the operation of the brain in computational terms?

The brain’s dissimilarity from a conventional computer running familiar applications is irrelevant to this question, of course, because we have a more general, more theoretical sense of the concept of computation in mind. We know that neurons can compute. But this too is an irrelevant observation, because it does not entail that mass neuronal activity in the brain is usefully thought of in terms of computation. Moreover, our interest in this question pertains most closely to the architectural blueprint we have sketched out. How should we think of the parallel, specialist processes of the global workspace architecture? Are they computational processes? Or are they better characterised in some other way?

The issue is not metaphysical. We are not in pursuit of a claim of the form “cognition is X” where X might be “computation”. Such philosophically insidious uses of the existential copula are to be banished. We are simply looking for a theoretical vocabulary that has descriptive and explanatory value. From the standpoint of the next section, the behaviour of a set of brain processes is best characterised in terms of their mutual coupling, the trajectories they follow through their combined state space, and the attractors they fall into within that state space. As we shall see, the coupling between processes can be characterised in terms of their influence on one another, which will allow a particular concept of information to be incorporated into the explanatory framework. In the light of the above discussion, what results might be called a computational description. But the allusion would not be to the traditional idea of transforming input representations into output representations, and a less conventional paradigm of computation would be in play.

[1] For a relevant discussion on this theme, see Edelman & Tononi (2000), pp. 47–50.

[2] In fact, there is an increasing trend towards parallelism in contemporary computer engineering, with multi-core processors and highly parallel dedicated graphics processing being the norm. Nevertheless, the parallelism of the biological brain is of an altogether different order and dynamical sophistication.

[3] Siegelmann (2003).

[4] See Izhikevich (2007) for an overview of the Hodgkin-Huxley model and its descendants.

[5] Song et al. (2000); Caporale & Dan (2008).

[6] Skarda & Freeman (1987).

[7] Siegelmann & Sontag (1995); Maass (1996); Carnell & Richardson (2007).

[8] Siegelmann (2003).

References

Caporale, N. & Dan, Y. (2008). Spike Timing-Dependent Plasticity: A Hebbian Learning Rule. Annual Review of Neuroscience 31, 25–46.

Edelman, G.M. & Tononi, G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. Basic Books.

Izhikevich, E.M. (2007). Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press.

Maass, W. (1996). Lower Bounds for the Computational Power of Networks of Spiking Neurons. Neural Computation 8 (1), 1–40.

Siegelmann, H.T. (2003). Neural and Super-Turing Computing. Minds and Machines 13, 103–114.

Siegelmann, H.T. & Sontag, E.D. (1995). On the Computational Power of Neural Nets. Journal of Computer and System Sciences 50, 132–150.

Skarda, C.A. & Freeman, W.J. (1987). How Brains Make Chaos in Order to Make Sense of the World. Behavioral and Brain Sciences 10, 161–195.

Song, S., Miller, K.D. & Abbott, L.F. (2000). Competitive Hebbian Learning Through Spike-Timing-Dependent Synaptic Plasticity. Nature Neuroscience 3 (9), 919–926.


Murray Shanahan

Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind