Your “Inner” Turing Machine?

How the Brain is (or isn’t) “Just” a Computer

Edoardo Contente
NeuroCollege
11 min read · Jul 29, 2021

--

It is not uncommon to hear that “the human brain is the most powerful machine in the Universe!”. The idea of identifying this “powerful” biological machine with the informal concept of computation (applying a set of operations to something and transforming it into something else) dates back to the advent of science fiction but, unsurprisingly, only began to be formally explored in the 1940s, after Alan Turing, John von Neumann and Alonzo Church laid the foundations of the field of Computer Science. We are still far from understanding the inner workings of this intriguing wetware¹ well enough to integrate it with all our other “ware” through BCIs, but an analysis of the existing theoretical foundations of computation might help clear up this brainy conundrum.

What can the brain compute?

Even if one disagrees with the first statement presented, it is useful to try to compare the brain to the theoretical notion of a computer. The obvious attempt is to liken the cerebrum, the largest part of the nervous system and responsible for all “higher-level” functions (including some motor skills and memory²), to the best fundamental model we have of a computing machine: the Turing Machine.

A Turing Machine is composed of an abstract, infinite tape with symbols from a finite alphabet written on it (for practical purposes it might as well be “a really, really long tape”), endowed with the simplest conceivable processor: a “head” that can move left and right along the tape, reading and possibly rewriting one symbol at a time. The only missing piece is the set of instructions, commonly called the “code” in most programming languages. For example, a set of instructions could make the machine write a repeating pattern: ‘neuron’, ‘brain’, ‘neuron’, ‘brain’, ’polygon’. Together with some already-written input (the initial conditions, say, a tape covered with ‘brain’) and an instruction telling the machine when to stop (e.g. upon reading ‘neuron’), we have our complete computational model.

An example of a simple Turing Machine that reads a tape filled with ‘brain’ and writes a repeated pattern until it halts by reading a ‘neuron’. CC BY-NC-SA
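
For the programmatically inclined, here is a minimal sketch of that toy machine in Python. It is only an illustration of the example above (the tape contents, the pattern and the halting rule are the hypothetical ones just described, not anything canonical):

```python
# Toy Turing Machine sketch: scan right along the tape, overwrite each cell with
# the next symbol of a repeating pattern, and halt as soon as a 'neuron' is read.

def run_toy_machine(tape, pattern=("neuron", "brain", "neuron", "brain", "polygon")):
    head = 0
    step = 0
    while head < len(tape):           # a real Turing Machine tape is unbounded
        if tape[head] == "neuron":    # halting instruction: stop on reading 'neuron'
            return tape, step
        tape[head] = pattern[step % len(pattern)]  # rewrite the current cell
        head += 1                     # move the head one cell to the right
        step += 1
    return tape, step

tape = ["brain"] * 6 + ["neuron"]     # initial conditions: a tape covered with 'brain'
print(run_toy_machine(tape))
```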

Such a connection is not new. McCulloch and Pitts presented a paper in 1943 introducing a mathematical model that aimed to map “propositional” (human) logical inferences onto the functioning of a Turing Machine³. However outdated these early attempts may be, the “Turing Model” is still celebrated, not because there is an irrefutable mathematical proof that it should be the basis of computation, but because it is irresistibly simple and yet, empirically, able to simulate any algorithm implemented on any computational device ever built (and possibly any that will ever be built). This statement is encoded in the Church-Turing Thesis:

Every algorithm is implementable on a Turing Machine.
A Turing Machine can compute what any other computer does.

To clarify, an algorithm is the unit of computation: a finite set of mechanical rules that yield some output from some input, possibly a simple “yes” or “no” answer to some question. A “decidable” (or sometimes “computable”) problem is one for which some machine yields an answer in finite time for every input (to be clear, a million years is also a valid time-span; an undecidable problem is one for which no machine can guarantee an answer in finite time for every input). Interestingly, there are problems which are provably undecidable, such as answering the question “Will an arbitrary Turing Machine run forever, or will it halt at some point?”³.
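
The classic argument for why that halting question is undecidable can be sketched in a few lines of deliberately hypothetical code: suppose we had a function halts(program, argument) that always answered correctly, then the following program contradicts it.

```python
# Sketch of the standard diagonalization argument. The oracle `halts` below is
# assumed, for contradiction, to always answer correctly; no such function exists.

def halts(program, argument):
    """Pretend oracle: returns True iff program(argument) eventually stops."""
    raise NotImplementedError  # no total, always-correct version can be written

def paradox(program):
    # Do the opposite of whatever the oracle predicts for `program` run on itself.
    if halts(program, program):
        while True:            # loop forever if the oracle says "it halts"
            pass
    return "halted"            # halt if the oracle says "it loops forever"

# Feeding paradox to itself: paradox(paradox) halts exactly when halts() says it
# doesn't, and loops exactly when halts() says it halts, which is a contradiction,
# so no correct halts() can exist.
```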

As von Neumann writes in “The Computer and the Brain”, a compilation of his (last) 1955 Silliman lectures at Yale University⁴,

“the first machine [Turing’s] can be caused to imitate the behavior of any other.”

That means that no matter how much we complicate our computational model, by adding thousands of tapes, building a circuit, or even concatenating neurons, there is no problem that is computable/decidable on another machine but not decidable on a Turing Machine. Although there is no existing proof that the Church-Turing Thesis holds for any possible constructible machine, no computer ever built has been observed to violate it, which is a strong indicator that our current understanding of physics does not allow for “better” computation. There is also a stronger version of the Church-Turing Thesis, which purports that:

Every algorithm is efficiently implementable on a probabilistic Turing Machine.

A “probabilistic Turing Machine” is simply a Turing Machine that, instead of following deterministic rules (instructions carried out with 100% certainty), consults a “perfect die” that decides, with some associated probability, which instruction to perform next.
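
A hedged sketch of the difference: instead of a single next action per (state, symbol) pair, the transition table holds a distribution over actions and the “perfect die” picks one. The states, symbols and probabilities below are made up purely for illustration:

```python
import random

# Deterministic rule: exactly one next action per (state, symbol) pair (for contrast).
deterministic = {("scan", "brain"): "write_neuron"}

# Probabilistic rule: a distribution over next actions; a "perfect die" decides.
probabilistic = {
    ("scan", "brain"): [("write_neuron", 0.7), ("move_right", 0.3)],
}

def next_action(state, symbol):
    actions, weights = zip(*probabilistic[(state, symbol)])
    return random.choices(actions, weights=weights)[0]  # roll the die

print(next_action("scan", "brain"))
```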

The question of whether the brain can compute more than a Turing Machine is usually misdirected. For example, it is sometimes suggested that the brain uses “true randomness” arising from its biological processes, which is supposed to confer some additional intrinsic computational capability⁵. This is a misconception: even if the brain can be considered a “probabilistic machine”, by the (empirically tested) Church-Turing Thesis this does not confer an inherent computational advantage over any other machine. If it did, then all we would need to do is find a counter-example to the Church-Turing Thesis, that is, a special algorithm implemented in the brain that can solve problems no other machine can.

A final alternative might be that the brain implements some “purely” quantum algorithms. According to Matthew Fisher, the brain might exploit the counter-intuitive features of quantum entanglement (fundamental correlations among particles) through Posner Molecules (calcium-phosphate complex ions), robust enough to preserve the delicate entanglement until the molecules bind to the neurons’ membranes⁶. Much debate surrounds the hypothetical implementation of quantum protocols in the cerebrum (which would be relevant to obtaining a full picture of neuro-computation). Although Quantum Computers (computers able to implement quantum algorithms) are widely believed to violate the Strong Church-Turing Thesis, that is, to have an inherent advantage in efficiency, they are not able to implement algorithms that compute more than a Turing Machine, so they still preserve the original Church-Turing Thesis⁷.

You might not be satisfied… After all, the brain is different, isn’t it? A Turing Machine cannot possibly be conscious, can it? Of course it cannot. You are right in the sense that a Turing Machine, as a bare model, fails to embody the computational concepts that allow for higher-level, abstract (more commonly deemed emergent) processes.

However, this lack of description does not mean that the brain “can” compute any more than what a Turing Machine can; the efficiency of our brains’ algorithms might simply be deceiving. It is still insightful to know what a brain can fundamentally compute, and the answer (employing Turing’s brilliance) is: not more than whatever your smartphone can.

But would the brain really be implementing “algorithms” then? If one takes the definition of an algorithm, the brain does accept certain inputs from its environment and manipulates them according to a set of biological rules. However, it seems that, depending on the inputs it receives, it can modify the very “set of instructions” that specifies how those inputs are treated. But does it really matter what we call it? We can just accept the semantic diversity, make Wittgenstein happy⁸ and move on with our computational lives.

How well can the brain compute what it does compute?

The informal notion of “how well” a certain algorithm performs, compared to other algorithms implemented on some machine, is related to the aforementioned idea of computational efficiency: a measure of how quickly the resources used by an algorithm grow as the size of the input grows. With that in mind, one problem can be considered more complex than another if the best known algorithm for it requires more space and time (on average, over randomized inputs) to solve it, i.e. if it is less efficient.
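
As a toy illustration of what that growth means, here is a rough sketch comparing two textbook ways of finding an item in a sorted list as the input doubles. The step counts are idealized worst cases, not measurements:

```python
import math

# Worst-case step counts for finding an item in a sorted list of size n:
# scanning every element (linear) versus repeatedly halving the range (binary search).
for n in (1_000, 2_000, 4_000, 8_000):
    linear_steps = n                        # grows proportionally to the input
    binary_steps = math.ceil(math.log2(n))  # grows only logarithmically
    print(f"n={n:>5}  linear~{linear_steps:>5} steps  binary~{binary_steps:>2} steps")
```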

To appreciate the (still) unbeatable efficiency of evolution’s computational handiwork, it is good to know some figures about how the brain stacks up, in terms of its “computational power” (the quantity of resources available to the computer), against modern-day supercomputers. A less powerful computer with more efficient algorithms for certain tasks can still outperform, or at least match, a far more powerful computer running poorer algorithms.

Today’s fastest supercomputer, according to TOP500⁹, is Fugaku, from the RIKEN Center for Computational Science in Japan. It can process an average of 4.42×10¹⁷ FLOPS (“floating-point operations per second”, that is, the total number of arithmetic operations on real numbers per second), measured while the computer solves a large linear algebra problem (the standard way of benchmarking FLOPS). Its top performance sits at a blistering 1.42×10¹⁸ FLOPS (~1 exaFLOP) and, on average, it operates at an absolute power consumption of approximately 2.9×10⁷ W, or about 29 MW (the equivalent of having 12 thousand ovens on at “medium” heat).

An even more relevant measure is the relative power used (the power efficiency). While Fugaku works at 1.48×10¹⁰ FLOPS per Watt, the most power-efficient machine is the MN-Core Server (Xeon Platinum 8260M) at 2.97×10¹⁰ FLOPS per Watt (while ranking only 335th in terms of speed). With the brain running on an average of 20 W, its estimated 8.6×10¹³ synapses have a power efficiency of 4.3×10¹² synapses per Watt, about three hundred times better than today’s fastest supercomputer¹⁰.
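
The back-of-the-envelope arithmetic behind that comparison can be checked directly, bearing in mind the loose assumption that one synapse is treated as roughly one “unit of hardware” comparable to whatever produces a floating-point operation (all figures are the ones quoted above):

```python
# Rough arithmetic behind the efficiency comparison (figures from the text).
fugaku_flops = 4.42e17        # sustained FLOPS
fugaku_power_w = 2.9e7        # ~29 MW average power draw
brain_synapses = 8.6e13       # estimated number of synapses
brain_power_w = 20            # estimated average power of the brain

fugaku_eff = fugaku_flops / fugaku_power_w   # ~1.5e10 FLOPS per watt
brain_eff = brain_synapses / brain_power_w   # ~4.3e12 synapses per watt

print(f"Fugaku: {fugaku_eff:.2e} FLOPS/W")
print(f"Brain:  {brain_eff:.2e} synapses/W  (~{brain_eff / fugaku_eff:.0f}x better)")
```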

There is no consensus in the literature on an equivalent of “FLOPS” for the human brain. The problem lies partially in finding an acceptable unit of computation in the brain, when the final answer probably involves different levels of abstraction¹¹. Even if one naïvely considers synapses the building blocks of neuro-computation, since each neuron can have a variable number of synaptic connections (from 10³ to 10⁴), it is hard to precisely estimate the number of synaptic operations per second in the brain without knowing the total number of neurons, the number of synapses of each individual neuron and the average firing rate¹². Not only is this power measure somewhat primitive, but neurons might not really be “dumb” at all. Each one of them might hold features that make it an independent processor, capable of implementing different types of gates, the basic units of computational logic (such as turning a signal that means “no” into a signal that means “yes”)¹³.

Since such logic gates are implemented by combining signals coming from different dendrites (similarly to how logic gates are implemented on a digital computer by connecting wires to sets of transistors), a better measure of computational “potential” might be the number of dendritic connections in the brain. With each neuron making on the order of 5,000 such connections, the number of elementary logic operations becomes comparable to the number of transistors in Fugaku. However, it is the precise connections made that end up determining the complexity of the network.
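
A heavily simplified sketch of the idea, nothing like a real neuron model, just a threshold unit showing how combining “dendritic” inputs with suitable weights can yield the basic gates mentioned above:

```python
# Toy threshold "neuron": fires (returns 1) when the weighted sum of its dendritic
# inputs crosses a threshold. With suitable weights it behaves like a logic gate.

def fires(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def AND(a, b): return fires((a, b), (1, 1), 2)   # needs both inputs active
def OR(a, b):  return fires((a, b), (1, 1), 1)   # needs at least one active input
def NOT(a):    return fires((a,), (-1,), 0)      # a "no" signal becomes a "yes"

print(AND(1, 1), OR(0, 1), NOT(0))   # -> 1 1 1
```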

In 1955, von Neumann had already noted that the research of the time pointed to “a much more complicated mechanism than the dogmatic description in terms of stimulus-response, following the simple pattern of elementary logical operations” being at work in neurons. One theory that tries to account for this is the Theory of Connectivity¹⁴, which holds that the brain is organized into cliques: subsets of highly interconnected neurons, clustered with other cliques, which take (at least part of) the same set of input signals but in different combinations, allowing for an ever-increasing hierarchy through such “functional” clustering (or, as the authors call it, “specific-to-general assembly”). They even purport that the number of cliques in each cluster is given by the simple combinatorial formula

N = 2ⁱ − 1,

where i is the total number of inputs to the cluster and N is the number of cliques in it. Some empirical evidence has been gathered in favor of this theory using tetrode recordings from mice.
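
Reading the formula as counting all non-empty combinations of a cluster’s inputs, a tiny sketch makes the combinatorics concrete. This enumeration is my own illustration of that counting rule, not the authors’ code:

```python
from itertools import combinations

# Count the possible "cliques" as all non-empty combinations of i inputs: 2**i - 1.
def count_cliques(i):
    subsets = [c for r in range(1, i + 1) for c in combinations(range(i), r)]
    assert len(subsets) == 2**i - 1
    return len(subsets)

print([count_cliques(i) for i in (1, 2, 3, 4)])   # -> [1, 3, 7, 15]
```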

Evidence that the brain has little to do with common computers often arises when one is asked to perform a simple arithmetic operation, such as squaring 37,691. You could surely do it, but it would take a lot of time, unless you are, in fact, Arthur Benjamin, who takes only a couple of seconds using some “Mathemagics”. Is he more of a digital computer than you? In his own words, “While I do this calculation you might hear certain words, as opposed to numbers (…) This is a phonetic code, a mnemonic device that I use that allows me to convert numbers into words.”¹⁵ There are no explicit “arithmetic” components in the brain, as there are in nearly every computer. Our brains tend to be very good at symbolic manipulation, abstraction and performing operations that are meaningful only at that level. Our brains can identify a face with far fewer resources than the world’s fastest supercomputer, and in quite comparable time. The secret lies in the abstractions, in the efficiency of the algorithms that are implemented, and not necessarily in the physical substrate. Emergence abounds, and from it every complex algorithm flourishes, even what we deem “consciousness”.

What is “missing” from the picture we have of the brain?

The endeavour of precisely understanding the brain’s computational capabilities is pivotal to the development of adequate BCI technology. The blind goal of accurately measuring individual neurons can be likened to trying to simulate the behavior of the sea by tracking each individual molecule, instead of understanding fluid dynamics and talking about pressure, temperature, or even not-yet-fully-understood concepts like turbulence, which curiously might play a big role in the dynamics of the brain¹⁶. Grasping the level of complexity of the algorithms implemented in the brain is a promising approach to figuring out which efficient algorithms might be at work and what confers on the brain its current status as the most “efficient” computer.

[1]: “Definition: Wetware.” 2021. Merriam Webster Dictionary. 2021.

[2]: The cerebrum comprises the cortex and its underlying layers. For more information, refer to Rio McLellan’s article here.

[3]: Piccinini, Gualtiero. 2004. “The First Computational Theory of Mind and Brain: A Close Look at McCulloch and Pitts’s ‘Logical Calculus of Ideas Immanent in Nervous Activity.’” Synthese 141 (2): 175–215.

[4]: Neumann, John von. 1958. The Computer and the Brain. Yale University Press.

[5]: Friedman, Andy. 2002. “The Fundamental Distinction between Brains and Turing Machines.” Berkeley Scientific.

[6]: Fisher, Matthew P.A. 2015. “Quantum Cognition: The Possibility of Processing with Nuclear Spins in the Brain.” Annals of Physics 362: 593–602.

[7]: Nielsen, Michael, and Isaac Chuang. 2000. Quantum Computation and Quantum Information. Cambridge University Press.

[8]: Wittgenstein, Ludwig. 1922. Tractatus Logico-Philosophicus.

[9]: “Top 500.” 2021.

[10]: “Top 500 Power Efficiency.” 2021.

[11]: Shanahan, Murray. 2019. “Whole Brain Emulation.” The Technological Singularity.

[12]: “Scale of the Human Brain.” n.d.

[13]: Gidon, Albert, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, and Matthew Evan Larkum. 2020. “Dendritic Action Potentials and Computation in Human Layer 2/3 Cortical Neurons.” Science 367 (6473): 83–87.

[14]: Li, Meng, Jun Liu, and Joe Z. Tsien. 2016. “Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation.” Frontiers in Systems Neuroscience 10 (April): 1–8.

[15]: Benjamin, Arthur. 2013. “Faster than a Calculator | Arthur Benjamin | TEDxOxford.” 2013.

[16]: Deco, Gustavo, and Morten L. Kringelbach. 2020. “Turbulent-like Dynamics in the Human Brain.” Cell Reports 33 (10): 21.

Minor edits were made on 08/04/21.

Written by a Neuro enthusiast, but fundamentally a Physics geek.
