No. The brain is not a computer.

Natesh Ganesh
13 min read · Oct 9, 2018

Overused image of the brain as a computer, leading to never-ending debates and countless Medium articles.

The debate on whether the brain is a computer seems to have died down, given the recent success of computer-science ideas in both neuroscience and machine learning. I have seen a few recent articles on this subject from scientists who make the strong claim that the brain is in fact literally a computer, and not just a useful metaphor, backed up with their reasons for believing so. One such article is this one by Dr. Blake Richards (and here is another by Dr. Mark Humphries). I will mainly deal with the first, a really good and extensive article. I would encourage readers to go through it slowly and in detail, for it provides a good look at how to think about what a computer is, and deals well with a lot of the weaker arguments brought against the 'brain is a computer' claim (like the ones here). Dr. Richards addresses a good variety of objections that people might raise to the claim that "the brain is a computer" towards the end of his article. I will raise an argument here that I feel lies at the heart of this discussion, one that is not addressed in his post and is often overlooked or dismissed as non-existent.

The reason I think it is important to discuss this question (and/or objection) in detail is that I strongly believe it affects how we study the brain. Describing the brain as a computer allows for a useful computational picture that has been very successful in neuroscience and artificial intelligence (specifically, in the sub-area of machine learning over the recent past). However, as an engineer interested in building intelligent systems, I think this view of the brain as a computer is beginning to hurt our ability to engineer systems that can efficiently emulate the brain's capabilities over a wide range of tasks.

There are a few problems, minor or major depending on how you look at them, in the definitions used to reach the conclusion that the brain is in fact a computer. Using the definitions put forward in the blog post:

(1) an algorithm is anything a Turing machine can do,
(2) computable functions are defined as those functions that we have algorithms for,
(3) a computer is anything which physically implements algorithms in order to solve computable functions.

Number (3) is the one we will focus on, for it is vitally important. To complete those definitions, I will go ahead and introduce, from the same blog post, an intuitive definition of algorithm: "an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output (e.g. an answer to yes/no integer roots) for a specific input (e.g. a specific polynomial like 6x³yz⁴ + 4y²z + z - 9)." The more technical definition of algorithm in (1) is: "An algorithm is anything that a Turing machine can do." This equivalence arises, of course, because any attempt to satisfy the intuitive definition of following instructions mechanically can always be reduced to a Turing machine.

The author of the post recognizes that under this definition, any physical system can be said to be 'computing' its time-evolution function, and the word loses its significance. To avoid that, he follows Wittgenstein and suggests that since, when we think about modern-day computers, we are thinking about machines like our laptops, desktops and phones, which achieve extremely powerful and useful computation, we should restrict the word 'computer' to these types of systems (hint: the problem is right here!!). Since our brains achieve the same, we find that our brains are (uber) computers as well. (I might be simplifying/shortening the argument, but I believe I have captured its essence, and will once again recommend reading the complete article here.) Furthermore, he points out that our modern-day computers and brains have the capability of being Turing complete, but of course are not, due to physical constraints on memory, time and energy expenditure. And if we do not have a problem with calling our non-Turing-complete von Neumann machines computers, then we should not let the physical constraints that prevent the brain from being Turing complete stop us from calling it a computer as well. I agree that we should not restrict ourselves to referring only to Turing-complete systems as computers, for that is far too restrictive. The term 'computer' has a popular usage and meaning in everyday life that is independent of whether or not the system is Turing complete. It makes more sense to refer to those computers that are in fact Turing complete as 'Turing complete computers'.
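To make the "followed mechanically, with no insight" definition concrete, here is a minimal sketch of a Turing-machine-style instruction table being executed blindly (the code is mine, in Python, and not from either article; the machine chosen, a binary incrementer, is just an arbitrary example). The instruction table is the algorithm; the executor needs no understanding of binary numbers.

```python
# A minimal Turing machine simulator: a finite instruction table followed
# mechanically, with no insight required.

def run_turing_machine(rules, tape, state="scan_right", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (write, move, next_state); move is -1 or +1."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "done":                # halting state
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Instruction table for binary increment: scan right to the end of the
# number, then propagate a carry back towards the left.
increment = {
    ("scan_right", "0"): ("0", +1, "scan_right"),
    ("scan_right", "1"): ("1", +1, "scan_right"),
    ("scan_right", "_"): ("_", -1, "carry"),
    ("carry", "1"):      ("0", -1, "carry"),
    ("carry", "0"):      ("1", -1, "done"),
    ("carry", "_"):      ("1", -1, "done"),   # overflow: grow the tape leftwards
}

print(run_turing_machine(increment, "1011"))   # -> 1100
print(run_turing_machine(increment, "111"))    # -> 1000
```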

Heading back to the problem that arises in Dr. Richards's argument: he starts with definitions of computing and algorithms that are too broad to be useful, and then ends up narrowing what should be called a 'computer' based on how the word is popularly used and on the usefulness of those systems for computing, without realizing that the definitions themselves have been narrowed as an unavoidable consequence. Let me explain in detail.

Go back to the intuitive definition of an algorithm (remember, this is equivalent to the more technical definition): "an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input." If we assume that the input and output states are arbitrary and not specified, then the time evolution of any system becomes the computation of its time-evolution function, with the state at every time t becoming the input for the output state at time (t+1): too broad a definition to be useful. If we want to narrow the usage of the word 'computer' to systems like our laptops, desktops, etc., then we are talking about systems in which the input and output states are arbitrary (you can make Boolean logic work with either physical voltage high or low as Boolean logic zero, as long as you find suitable physical implementations) but clearly specified (voltage low = Boolean logic zero, generally, in modern-day electronics), as in the intuitive definition of an algorithm. The most important part is that those physical states (and their relationship to the computational variables) are specified by us!!! All the systems we refer to as modern-day computers, and to which we want to restrict our usage of the word, are in fact created by us (or by our intelligence, to be more specific), and we decide what the input and output states are.

Take your calculator, for example. If you wanted to calculate the sum of 3 and 5 on it, it is your interpretation of the presses of the 3, 5, + and = buttons as inputs, and of the number that pops up on the LED screen as the output, that allows you to interpret the time evolution of the system as a computation, and that imbues the calculator with its computational property. Physically, nothing about the electron flow through the calculator circuit makes the system's evolution computational. This extends to any modern artificial system we think of as a computer, irrespective of how sophisticated its I/O behavior is. The input and output states of an algorithm in computing are specified by us (and we often have agreed-upon standards for what these states are, e.g. voltage lows/highs for Boolean logic lows/highs). If we miss this aspect of computing and then think of our brains as executing algorithms (that produce our intelligence) like computers do, we run into the following:

(1) a computer is anything which physically implements algorithms in order to solve computable functions.

(2) an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output for a specific input.

(3) the specific input and output states in the definition of an algorithm, and the arbitrary relationship between the physical observables of the system and the computational states, are specified by us because of our intelligence, which is the result of…wait for it…the execution of an algorithm (in the brain).

Notice the circularity? The inputs and outputs needed in the definition of an algorithm are themselves specified by a process that is supposed to be the product of an algorithm!! This specifying is of course a product of our intelligence/ability to learn: you cannot specify the evolution of a physical CMOS gate as a logical NAND if you have not already learned what NAND is, or are not capable of learning it in the first place. And any attempt to describe that process as an algorithm will always suffer from the same circularity.
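To see how much of the "computation" lives in the observer's choice of encoding, here is a toy sketch (my own, with made-up voltage values): the very same physical input-output behavior reads as a NAND gate under an active-high convention and as a NOR gate under an active-low convention. Nothing in the physics picks one reading over the other.

```python
# One physical device, two observer-chosen encodings.

LOW, HIGH = 0.0, 5.0  # illustrative voltage levels

def physical_gate(v_a, v_b):
    """Output is LOW only when both inputs are HIGH -- pure physics, no logic."""
    return LOW if (v_a == HIGH and v_b == HIGH) else HIGH

def truth_table(encode, decode):
    """Read the device through an observer's mapping between bits and voltages."""
    return {(a, b): decode(physical_gate(encode(a), encode(b)))
            for a in (0, 1) for b in (0, 1)}

# Convention 1 (active-high): HIGH means 1. The device "is" a NAND gate.
print(truth_table(lambda bit: HIGH if bit else LOW,
                  lambda v: 1 if v == HIGH else 0))
# {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}  -> NAND

# Convention 2 (active-low): HIGH means 0. The same device "is" a NOR gate.
print(truth_table(lambda bit: LOW if bit else HIGH,
                  lambda v: 0 if v == HIGH else 1))
# {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0}  -> NOR
```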

So one might ask: if the brain is not a computer, how is it that we have had tremendous success in AI and neuroscience using algorithms to describe what the brain is doing? Here is the key point: you can view algorithms as computational descriptions of a system. We take specified inputs provided to the brain and its outputs, and describe/generate an algorithm (using a connectionist neural-network architecture, say) that generates the necessary output for every input. Notice that thinking about current learning algorithms as (coarse) descriptions of the brain does not take anything away from our ability to gain knowledge about the brain, or from its implementation in AI. Just because we do not see the brain as a computer does not make these descriptions any less useful; we do not have to throw the baby out with the bathwater. But this stance also recognizes that since these are just descriptions that we come up with, the brain as a computer is simply a metaphor, not to be taken literally. Our intelligence (or parts of it) can be described by an algorithm, but this does not make the brain just a computer executing algorithms.
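As a toy illustration of "algorithm as computational description" (again my own sketch, not from either article): given observed input/output pairs of some system, with XOR standing in for recorded behavior, we can fit a small neural network that reproduces the mapping. The fitted network describes the I/O behavior; nothing in the fit implies the system literally runs this network.

```python
import numpy as np

# Observed I/O behavior of some "system" (XOR as a stand-in for behavioral data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # small 2-4-1 network
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                          # plain gradient descent on MSE
    h = sigmoid(X @ W1 + b1)                     # hidden activations
    out = sigmoid(h @ W2 + b2)                   # predicted outputs
    d_out = (out - y) * out * (1 - out)          # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)           # backpropagated hidden error
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]: a description of the I/O
```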

Does this mean that our intelligence has non-computable aspects to it, or that something non-physical is going on? On the non-physical: definitely not. On the non-computable: I do not know rigorously, but I see no reason why there would not be a sufficiently complex algorithm (I use 'complex' in a colloquial sense, not in a computational-complexity sense) that provides a very detailed computational description of human intelligence, and that, when implemented on a suitable substrate, would be indistinguishable from a human in terms of I/O behavior (aka pass the Turing test). But in terms of engineering, I question whether a computational description is the optimal level at which to describe and realize intelligence in artificial systems (our current machine-learning systems are computational descriptions, and require tremendous amounts of resources to achieve any type of success). For a whole host of reasons that I will leave for another time, I would argue that for unconventional (non-CMOS) computing substrates (studied as possible replacements for CMOS as Moore's law winds down), it might turn out that non-computational descriptions are better. As part of my PhD research, I worked on thermodynamic conditions/descriptions of intelligence, which I think are better suited if you want to build efficient intelligent systems, the type you need for your self-driving cars, smart IoT devices and robots. Under this description, we talk of intelligence in terms of energy dissipation, entropy and macroscopic homeostasis, as opposed to inputs, outputs and error minimization. I am scarcely alone in this endeavor. There is a rich history of work connecting information theory, complex systems, intelligence and thermodynamics, and an even brighter future.
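For a flavor of the thermodynamic vocabulary, here is a back-of-the-envelope calculation using only standard physics (not a result from my research): Landauer's bound gives the minimum dissipation for erasing one bit of information, which can be set against the brain's roughly 20 W budget.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 310.0                 # roughly body temperature, K

# Landauer's bound: minimum energy dissipated to erase one bit.
e_bit = k_B * T * math.log(2)
print(f"{e_bit:.2e} J per bit erased")        # ~2.97e-21 J

# Upper bound on bit-erasures per second within a ~20 W budget
# (real hardware sits orders of magnitude above the bound).
print(f"{20.0 / e_bit:.2e} erasures/s")       # ~6.7e21
```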

Modern-day computers in which the input and output states are specified by us realize what can be called observer-dependent computing, in that their computational property depends on the (intelligent) observer, aka us. Systems that exhibit intelligent I/O behavior through the use of learning algorithms fall under what we can call intelligence through computing. Specifically, this is intelligence through observer-dependent computing, and will thus be observer-dependent intelligence. Of course, an observer-dependent intelligence will have the I/O capability of passing the Turing test, given the nature of the test. The observer dependency would be irrelevant from an engineering perspective if we had, today, an algorithm that could run at about 20W of power and be indistinguishable from a human in its I/O behavior across a wide range of tasks. Unfortunately we do not, prompting one to ask whether the observer dependency is hindering us.

In addition to producing systems that realize reliable observer-dependent computing (pretty much every Si-based system), we are also capable of observer-independent computing (I added 1 and 1 to produce 2 in my head while I typed this, and that computing is independent of any external observer's interpretation of my time evolution). Here is an extensive talk by John Searle on these ideas. Paraphrasing Searle: the entity that produces observer-dependent computing is itself not observer-dependent. Searle attributes this ability to produce observer-independent computing to consciousness. I will refrain from going that far (though that might be the case), but will point out that the only known systems capable of human-level general intelligence at ridiculous energy efficiencies are humans, and these systems also happen to be capable of observer-independent computing and are conscious (these are all probably dependent on each other). So we need to look for descriptions of intelligence that explain the capability of observer-independent computing, and be open to the possibility that descriptions that cannot do so might never realize energy-efficient general-intelligence systems that can rival the human brain. Our choice of description of intelligence is important, since this choice affects the design philosophies (top-down vs bottom-up) behind how it might be realized artificially. Thermodynamic descriptions, for example, are capable of explaining the observer independency and the energy efficiency, and embrace a bottom-up, self-organized approach to realization. These ideas are discussed in a little more detail in the paper here, for those who have the patience.

Lastly, I will address how the meaning of the word 'computer' has evolved from the time of Turing in the 1940s to how we use the word now. Misunderstanding this evolution, and confusing computers with computing machinery, has further exacerbated the brain-computer discussion. At the time of Turing, 'computers' were people: mathematicians who could perform large amounts of mathematical and logical calculation (human computers like those portrayed in the hit movie Hidden Figures). The etymology of the word adds evidence to this. Computer comes from the Latin "computare", meaning "to calculate, reckon or settle", and the "-er" is a suffix, from medieval French, designating an agent noun (e.g. a laborer is one who labors)*. Remember that Turing came up with the mathematical construct of the Turing machine as a formal abstraction of what a human (a computer, in his time) would do mechanically, with no insight, if given an input and a sequence of instructions to generate the output, i.e. implement an algorithm. 'Computing machinery/machines', on the other hand, were artificial systems that could perform the same mathematical operations that humans could. Notice that Turing's seminal paper was called 'Computing Machinery and Intelligence', and not 'Computers and Intelligence', since back then the word 'computer' already referred to intelligent (human) systems. Turing was interested in whether the same could be achieved in artificial systems that were capable of computation but were not intelligent (human) computers.

Of course, over the years, as our engineering capabilities grew exponentially, and with the invention of CMOS devices and the von Neumann architecture, there came a point where these computing machines became so much more powerful, faster and more efficient than most human computers at mathematical operations that they started replacing the human computers. And in replacing human computers at these tasks, computing machinery also took up their moniker of 'computer', leading to its modern usage (without really losing the old moniker of computing machinery). So yes, brains are computers if you use 'computer' in the 1940s meaning of the word: intelligent systems capable of performing computation. In the same vein, you then need to distinguish them from our modern-day laptops, desktops, phones, etc., which are also capable of computation, by calling those computing machinery, as in the 1940s. The confusion arises when you use the term 'computer' in both its modern usage (referring to these artificial systems) and its 1940s usage (humans) at the same time. Since it might be a little too difficult to go back to the 1940s usage, I suggest we stick to the modern use of 'computer' and come up with a new term for 'intelligent systems that are also capable of computing', like our brain. I am open to suggestions.

I will end my discussion of Dr. Richards's article by copying him, and implore scientists to be more formal and transparent in their definitions. There is no doubt that computational descriptions of the brain have been, and will continue to be, useful in both understanding the brain and in AI. But this description does not mean that your brain is itself a computer (as in the formal definition of a computer), and the reason why it is not has nothing to do with the substrate of implementation or the architecture. There are fundamental issues regarding the notions of information, observers and consciousness that warrant further study using different tools. We should use existing computational descriptions as a verification tool as we come up with other useful levels of description of intelligence (the key phrase being different levels of description, depending on what each is used for).

I will take a moment to address a few lines in Dr. Humphries's article as well. Quoting from that article, under the heading 'Yes, the brain runs algorithms':

“There are two ways to test if a set of neurons in the brain are approximating an algorithm. Either we can propose an algorithm that fits with the way an animal behaves, and then see if the activity of neurons approximates that algorithm. Or, we can measure the activity of neurons during a behaviour, and then see what algorithm this activity approximates.”

The author himself makes the case above for why algorithms are computational descriptions. Just because brain dynamics can be described by an algorithm does not make that the only description, nor the brain simply a computer executing algorithms. For example, a leg-focused description of a lion with four legs would be equivalent to that of a table with four legs. And while a lion could be used as a table, that does not make it simply a table. There are richer descriptions of a lion that might prove more useful for understanding lions, and maybe for emulating one in an artificial system, than the one which refers to it as a table. The connection to brains, computers and computational descriptions is straightforward. We just need to find other descriptions that might be more useful, especially for realizing AI.
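Here is a toy sketch of the second test Dr. Humphries describes, with entirely synthetic data and a made-up candidate model (purely illustrative): measure "activity" during a behavior, propose a candidate algorithm, and ask how well the algorithm's internal variable tracks the measurement. A good fit yields a description; it does not by itself show that the neurons are that algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a firing rate recorded during an evidence-
# accumulation task (all numbers here are made up for illustration).
evidence = rng.normal(0.2, 1.0, size=500)            # noisy momentary evidence
firing_rate = np.cumsum(evidence) + rng.normal(0.0, 2.0, size=500)

# Candidate algorithm: a simple accumulator that integrates the evidence.
# Its internal state plays the role of the "predicted" neural activity.
accumulator = np.cumsum(evidence)

# How well does the algorithm's variable track the measured activity?
r = np.corrcoef(accumulator, firing_rate)[0, 1]
print(f"correlation between model variable and activity: {r:.2f}")
```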

In conclusion — No. The brain is not a computer.

*Borrowed from a tweet by Grady Booch.

Note: This, my first ever post on this platform, was a two-hour distraction exercise from dissertation writing. While I might not have been completely rigorous or formal in the arguments and references, feel free to challenge the arguments or demand existing literature and references, and I will provide them.
