Yes, the brain is a computer…

No, it’s not a metaphor

Blake Richards
The Spike
23 min read · Oct 1, 2018


One of three silly brain as computer images to come

Neuroscience is a funny discipline that can demand a level of interdisciplinary knowledge which is hard to achieve. At its heart, neuroscience is concerned with understanding the organ responsible for generating our behaviour, and thus it is a branch of physiology and psychology. At the same time, most neuroscientists will have heard, or even used, the words “calculate”, “algorithm”, and “computation” many times in their professional lives. I think there’s a good reason for this: the brain is a computer, and neuroscience is, in my opinion, also a branch of computer science. However, many neuroscientists do not see it that way.

In online discussions I have often read the phrase “The Brain as a Computer Metaphor”. The implication of this phrase is clear: the brain is not a computer, and at best, we can use computers as a metaphor to understand the brain. (At worst, the “metaphor” does not hold and should be abandoned.) Similarly, I have heard neuroscientists say things like, “neural circuits don’t truly run algorithms”. Again, the conclusion is clear: the brain doesn’t run any algorithms in reality, so our constant use of the words “algorithm” and “computer” when talking about the brain is misguided.

Unfortunately, what these discussions demonstrate is that many researchers do not, as a rule, actually understand the formal definitions of “computer” or “algorithm” as provided by computer science. (Or alternatively, if they do understand them, they don’t accept them for some reason.) If you understand the formal definitions of computer and algorithm as given by computer science, then you know that the brain is very clearly a computer running algorithms, almost trivially so.

My goal in this article is to provide a brief, intuitive explanation for those formal definitions of “computer” and “algorithm”, one pitched at neuroscientists. My hope is that this will help neuroscientists to move away from these confused, misleading discussions about whether or not a computer (the brain) is actually a computer, because as others have noted, it most certainly is.

I want to make clear that my goal here is not to engage with the much larger, more complicated discussions in cognitive science and philosophy of mind as to whether we can understand mentation using computational frameworks. I consider that to be a separate question regarding the nature of the mind, and there, the discussions get much more complicated. I personally think that computer science has a lot to contribute to neuroscience (indeed, as I said above, I consider neuroscience to be a branch of computer science), and I happen to think computational frameworks can explain mental activity, but that’s not my target here. Here, I seek only to articulate why, if we ignore questions regarding the nature of the mind, then according to first principles and formal definitions, the brain necessarily is a computer, and there really is no discussion to be had once you understand and accept the definitions.

A high-school student implementing an algorithm

Defining an algorithm

Okay, what is an “algorithm” and what is a “computer”? The formal definitions for these words originate in the work of mathematicians in the first half of the 20th century. Back then, many mathematicians were concerned with questions that David Hilbert had put forward in a very famous lecture he delivered in 1900, in which he laid out 23 mathematical problems to be solved in the coming century.

One of those problems, the “10th problem”, was concerned with determining whether a polynomial (e.g. 6x³yz⁴ + 4y²z + z − 9) had roots made up solely of integers (e.g. x = 5, y = −3, z = 0, etc.). Hilbert wanted mathematicians to develop an effective method, a “recipe”, for deciding whether this was the case for any arbitrary polynomial. Such a recipe is, colloquially, what we call an “algorithm”. That’s the intuitive definition: an algorithm is a finite set of instructions that can be followed mechanically, with no insight required, in order to give some specific output (e.g. a yes/no answer about integer roots) for a specific input (e.g. a specific polynomial like 6x³yz⁴ + 4y²z + z − 9). To put this into context, what was desired at this point in history was a way to specify how any human being could sit down at a desk with paper and pencil and work out the answer to something without having to generate any novel insight. So, for example, long division provides an algorithm for calculating the result of dividing one number by another — you can just sit there with paper and pencil and follow the finite set of instructions like an automaton, and you will arrive at the correct result. Hilbert assumed there must be an algorithm for the particular task of deciding integer roots, and the job for mathematicians was simply to find it.
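
To make the long division example concrete, here is a minimal sketch in Python (my own illustration; nothing here is from the original text). Every step is purely mechanical, requiring no insight, which is exactly what the intuitive definition of an algorithm demands:

```python
# Long division as a mechanical recipe: a finite set of instructions that
# yields a quotient and remainder with no insight required at any step.
# (Python's // and % already do this, of course; the point is the procedure.)

def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Divide dividend by divisor the way you would on paper."""
    assert dividend >= 0 and divisor > 0
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):                  # work left to right
        remainder = remainder * 10 + int(digit)  # "bring down" the next digit
        q = 0
        while remainder >= divisor:              # fit the divisor by subtraction
            remainder -= divisor
            q += 1
        quotient_digits.append(str(q))           # write one digit of the quotient
    return int("".join(quotient_digits)), remainder

print(long_division(7305, 4))  # (1826, 1), i.e. 7305 = 4 * 1826 + 1
```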

As luck would have it, Hilbert had stumbled onto a very deep problem. Mathematicians took on his challenge, but they had a hard time coming up with a recipe. More and more, various mathematicians began to ask whether some problems in mathematics, including Hilbert’s “10th problem”, were in fact not decidable, meaning there was no algorithm for solving them. Of course, mathematicians being mathematicians, they desired proofs that no algorithm existed for these problems. The issue was that with only an informal, intuitive definition of the word “algorithm”, proving that no algorithm existed for a given problem was basically impossible.

Given this, Alonzo Church and Alan Turing set out to develop a formal definition of the algorithm. The two researchers worked independently, and came to different solutions, but their solutions turned out to be mathematically equivalent. Church invented lambda calculus, and defined an algorithm as anything that could be done with lambda calculus. Turing invented Turing machines, and defined an algorithm as anything that could be done with Turing machines. I will leave it to the reader to learn more about the specific definitions of lambda calculus and Turing machines, but it is important to understand that both are simply mathematical tools. This is fairly obvious with lambda calculus, but people often get confused about Turing machines because of the word “machine” — confusingly, Turing machines are not machines. Indeed, the definition assumes some very unrealistic components (like an infinite tape). It is important to think of Turing machines as just a mathematical tool. I draw attention to this because I will return to it when we discuss the relationship between Turing machines and brains.
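
For readers who want something tangible, here is a minimal sketch of a Turing machine in Python (a toy of my own devising, not anything canonical): a finite table of rules plus a tape, nothing more. The example machine just flips every bit on its tape and halts, but the same skeleton runs any machine you can write a table for:

```python
# A Turing machine is just a finite control table plus an unbounded tape.
# A Python dict stands in for the tape; it grows as needed, which is the
# practical analogue of the (unrealistic) infinite tape in the definition.

def run_turing_machine(table, tape, state="start", blank=" "):
    """table maps (state, symbol) -> (next_state, symbol_to_write, move)."""
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip()

# Example control table: invert 0s and 1s until the first blank, then halt.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt", " ", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # -> "01001"
```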

As mentioned, it turned out that Church and Turing’s definitions were equivalent, meaning that anything you could do with lambda calculus you could do with a Turing machine, and vice versa. As time has gone on, empirical experience has suggested that any attempt to formalize the intuitive definition of algorithm settles on something equivalent to Turing machines. Therefore, most computer scientists today accept something known as the Church-Turing thesis:

The Church-Turing thesis:

Any algorithm can be implemented by a Turing machine.

It should be recognized that this statement can actually be viewed as either a definition, or a hypothesis. If we view it as a hypothesis, then the Church-Turing thesis hypothesizes that any problem we have a means of solving using only a finite set of instructions and no insight, will turn out to be implementable by a Turing machine, i.e. it hypothesizes that our intuitive definition is covered by Turing’s definition (see e.g. page 183 here). That leaves open the possibility that someday the Church-Turing thesis will be violated. However, Church and Turing were seeking to formalize the concept of “algorithm”. Moreover, the Church-Turing thesis is generally taken as a given by researchers, so when people seek a proof that there is no algorithm for some function, they sometimes do so by proving that you can’t do it with a Turing machine. Therefore, it is safe to say that it has become a definition in computer science, meaning that the formal definition of algorithm is given by the Church-Turing thesis:

The Church-Turing thesis (as definition):

An algorithm is anything that a Turing machine can do.

With the definition of algorithm in hand, we can now define “computable functions” and “computer”. A “computable function” is defined as any task that can be solved using an algorithm. A “computer” is then defined as any physical machinery that can solve computable functions via algorithms. If we think back to the use of the word “computer” at that point in history, this makes sense. “Computers” at the time were people whose job was to sit down with pencil and paper and use an algorithm to solve computable functions (e.g. to integrate some equation). Clearly, these people were computers according to this definition, because they were providing solutions to computable functions via algorithms.

Importantly, there are many processes that consist of a finite set of instructions, but not discrete, step-by-step instructions, and which can still be implemented by a Turing machine. This is where things get confusing. Consider, for example, an analog audio system. Many people mistakenly think that Turing machines cannot implement analog functions, because analog functions are not defined by a discrete set of step-by-step values. But remember, Turing machines are not machines. They’re abstract mathematical constructs that can operate with infinite memory and no time restrictions (any finite amount of time counts, no matter how large). As such, an analog system can be implemented by a Turing machine to any desired level of precision. That’s why we consider all of the functions that audio systems perform to be computable functions (and why we can simulate them well). Of course, it must be recognized that if we require infinite precision for an analog function, then it isn’t computable, but that is rarely a situation we face in practice.
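
To illustrate, with a toy example of my own rather than anything from the formal literature: an analog RC low-pass filter obeys the continuous dynamics dV/dt = (V_in − V)/RC, yet a discrete, step-by-step procedure approximates it to whatever precision we like, simply by shrinking the step size:

```python
# An analog process (an RC filter charging toward an input voltage) and a
# discrete procedure (Euler integration) that approximates it. Shrinking the
# step size keeps reducing the error, with no limit in principle, which is
# what "computable to any desired level of precision" means.

import math

def simulate_rc(v_in=1.0, rc=1.0, t_end=1.0, dt=0.1):
    v, t = 0.0, 0.0
    while t < t_end - 1e-12:           # step until we reach t_end
        v += dt * (v_in - v) / rc      # one discrete step of the analog dynamics
        t += dt
    return v

exact = 1.0 - math.exp(-1.0)           # closed-form solution at t = 1
for dt in (0.1, 0.01, 0.001):
    print(dt, abs(simulate_rc(dt=dt) - exact))   # error shrinks with dt
```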

Indeed, any procedure for solving a computable function can be implemented by a Turing machine, by definition, no matter the specific mechanism involved. Part of the reason a lot of confusion sets in, even amongst really smart people, is that when you study Turing machines and computable functions in school as a computer science student, the concern is rarely with analog, stochastic, or distributed processes. Example after example is a deterministic, step-by-step algorithm that is easy to teach with. That’s good for learning the basics, but it leads many people to mistakenly think that any system which isn’t clearly a step-by-step, discrete system can’t be running algorithms, and that’s simply not true.

That’s not to say that Turing machines can do everything. For example, it has been proven mathematically that there are some problems that Turing machines can’t do, like Hilbert’s 10th problem. But the reason a Turing machine cannot do some things is not because those things are analog, or stochastic, or distributed, etc. It’s because there are certain limits to mechanical, axiomatic methods — some questions simply have no concrete answer within mathematics as we know it.
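
The classic illustration of such a limit is the halting problem, and the standard proof sketch fits in a few lines. To be clear, this is the textbook diagonalization argument, not something from this article, and the decider below is assumed, not implementable:

```python
# Suppose halts(program, input) could always decide whether a program halts.
# Then `contrary` below would halt if and only if it does not halt when run
# on its own source: a contradiction. So no such halts() can exist, no matter
# what physical machinery tries to run it.

def halts(program_source: str, input_data: str) -> bool:
    ...  # assumed universal halting decider; the contradiction shows it cannot exist

def contrary(program_source: str):
    if halts(program_source, program_source):
        while True:       # halts() says "halts", so loop forever
            pass
    return                # halts() says "loops forever", so halt immediately
```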

To summarize the formal definitions:
(1) an algorithm is anything a Turing machine can do,
(2) computable functions are defined as those functions that we have algorithms for,
(3) a computer is anything which physically implements algorithms in order to solve computable functions.

Nature implementing some computable functions

Is everything a computer?

Given these definitions, is the brain a computer? Most certainly, because our brains implement various algorithms to solve computable functions, ranging from the obviously computable (you can mentally sort a list, pick which of two numbers is larger, multiply two numbers, etc.) to the less obviously, but still, computable (you can run, talk, play music, etc.). However, a big issue with this approach, and more broadly with the definitions provided above, is that they can be applied to almost anything. For example, the macroscopic laws of physics are computable functions (e.g. a parabolic path is computable). Therefore, technically, every object in the universe, every planet, every rock, every feather, every snowflake, every grain of sand, etc., can be viewed as a computer, implementing algorithms to solve the functions that describe that object’s evolution through time. That would appear to render the definitions above so broad as to be useless.

In my opinion (and as a subscriber to Wittgenstein’s later philosophies), the way to contend with this problem is to examine our usage of the word “computer”. What do we usually use this word for, and how do computer scientists use this word? Can we generate some mapping between the two that renders our usage non-trivial? Obviously, we are all comfortable with calling the machines we now carry in our pockets and keep on our desks “computers”. That’s how we usually use the word, e.g. when we get a new laptop we say “I just bought a new computer”, rather than, “I just bought a new Von Neumann architecture programmable machine” (which is a reasonable description of what they are). And it’s not just the average person: computer scientists also use the word “computer” to refer to these machines. So, what is it about these machines that makes this usage less trivial than, say, calling every grain of sand a “computer”? Is this usage actually grounded in a scientific, formal language game, or is it just a funny quirk of our everyday, non-scientific language games?

There are very good, formal reasons to use the word “computer” when referring to our laptops and phones. One of the things that Turing proved was that some Turing machines could be used to implement any other Turing machine. These are called universal Turing machines. Now, remember, Turing machines are not actually machines, they’re a formalism, a mathematical tool. A universal Turing machine is a mathematical tool that can be used to perform the functions of any other mathematical tool that is also a Turing machine. Moreover, a mathematical tool that can be proven to be capable of implementing any algorithm is called “Turing complete”. For example, the programming languages familiar to us all (C++, Python, etc.) are Turing complete, because it can be proven that any algorithm can be programmed with these languages.
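
A quick sketch of what universality buys you, using a toy instruction set of my own invention: one fixed program (an interpreter) can run any other program handed to it as data. This is the same trick a universal Turing machine performs with Turing machine descriptions, and it is why a single physical machine can implement any algorithm:

```python
# One fixed interpreter runs any program written in a tiny toy language
# (inc, dec, jnz, halt). The interpreter never changes; only the program-as-
# data does. That separation of fixed machine and swappable program is the
# essence of a universal machine.

def run(program, registers):
    """Execute a list of instruction tuples over a dict of named registers."""
    pc = 0
    while program[pc][0] != "halt":
        op, *args = program[pc]
        if op == "inc":
            registers[args[0]] += 1
            pc += 1
        elif op == "dec":
            registers[args[0]] -= 1
            pc += 1
        elif op == "jnz":                        # jump if register is non-zero
            pc = args[1] if registers[args[0]] else pc + 1
    return registers

# Addition, written in the toy language: move b into a, one unit at a time.
add = [("jnz", "b", 2), ("halt",),
       ("dec", "b"), ("inc", "a"), ("jnz", "b", 2), ("halt",)]
print(run(add, {"a": 3, "b": 4}))  # -> {'a': 7, 'b': 0}
```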

Given this, we can see why the machines we apply the word “computer” to deserve their name. These machines provide a physical substrate for running algorithms using Turing complete programming languages. This means that, with the right programming, they can solve any computable function for us. (Note: this does not provide a practical guarantee, e.g. it may take billions or trillions of years for any given function to run in any given language. The question of practical computation is addressed by computational complexity theory.) As such, these machines that are now ubiquitous in our lives are a much more powerful form of computer than a stone or a snowflake, which are limited to computing only the functions of physics that apply to their movement. Your laptop is an “uber-computer”, as it were, and therefore, arguably deserves the title “computer” much more than any other object in your house (except maybe you).

A squishy computer

What about brains then?

What does all this business about Turing completeness have to do with brains? As many people have noted, the internal workings of brains are nothing like the internal workings of the Von Neumann architecture machines that we usually call computers. So, surely brains aren’t computers, right? Well, no: brains are also “uber-computers”; they just work in a different manner.

First, we can consider the anecdotal evidence. Consider all the functions that an adult human with sufficient education is capable of implementing. It’s a pretty big list! In fact, with no more than a pencil and piece of paper, any human is arguably capable of running any program that has been programmed in a language like Python. Even without a pencil and paper, you can implement a huge swathe of functions that well outstrips the functions that a stone, a chair or a thermometer, say, can implement.

Second, we can consider the formal evidence. Computer scientists have demonstrated that multilayer and recurrent artificial neural networks are Turing complete, meaning you can implement any algorithm with them. That’s one of the reasons why we can do all sorts of impressive stuff with neural networks! This doesn’t mean that any given neural network can compute any function, though. Rather, it means that for any given function there exists a neural network that can compute it.
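
To make the weaker claim in that last sentence concrete, here is a hand-built example, with weights chosen by me purely for illustration: a two-layer threshold network that computes XOR, a function famously beyond any single-layer network. For this function, a network exists, and here it is:

```python
# A two-layer threshold network with fixed, hand-chosen weights that computes
# XOR. No learning involved: the point is only that for this function, a
# network exists that computes it.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)       # hidden unit 1: active if a OR b
    h2 = step(a + b - 1.5)       # hidden unit 2: active if a AND b
    return step(h1 - h2 - 0.5)   # output: OR and not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```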

Put another way, we can think of neural networks as a sort of programming language: the synaptic connections in a neural network define the functions that they implement, so the set of all possible neural network architectures is effectively a programming language, and this language is Turing complete. Although there are many differences between artificial neural networks and brains, arguably those differences only render real brains more computationally powerful than artificial neural networks. This means that when we consider the set of all possible brains, then for any given computable function, there is probably a hypothetical brain that can solve it. Thus, the “language of brains”, as it were, is probably Turing complete.

Moreover, we can make some stronger statements about individual animals, including of course, humans. Our brains have roughly 10¹⁴ synapses, and there is a surprising amount of flexibility in how those synapses are set. This means that we can implement many, many different functions. Is any individual human Turing complete, then, i.e. can we implement the entire set of computable functions? Probably, if we were immortal, but practically, no. However, the same caveat applies to our laptops and phones. The languages they can be programmed with are Turing complete, but like us, they would require immortality to guarantee that they could implement any computable function. Thus, the set of functions that we can implement as humans is not radically different in scope than the set of functions that our trusty old Von Neumann architecture machines can implement.

Of course, not all animals are as flexible as humans. I worked for many years with the frog species Xenopus laevis, and I can tell you, they’re much more limited in their behavioural repertoire than humans. Nonetheless, they can still do a hell of a lot more than a stone. Any animal can, thanks to its brain. Rodents can replicate spatial probability distributions in their searches and optimize their strategies in oppositional games. C. elegans implement a Markov model in their search behaviour. Because these functions are computable (i.e. solvable with a Turing machine), we can say these animals are running algorithms.

For example, one of the most appropriate uses of the word “algorithm” in recent neuroscience work comes from Oteiza et al. They demonstrated that zebrafish larvae use the following recipe for rheotaxis (sketched in code below):

  • Determine the change in the gradient of water flow.
  • If the gradient decreases, swim straight.
  • If the gradient increases, turn in the direction of flow field rotation.

A computer for swimming upstream
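
Here is the recipe rendered as straight-line code, as promised above. To be clear, this is my own illustrative sketch: the variable names, the fixed turn size, and the decision structure are assumptions, not the actual mechanism reported by Oteiza et al.:

```python
# One decision step of the rheotaxis recipe. `gradient_change` is the change
# in the water-flow gradient the fish senses, `flow_rotation` is the local
# rotation of the flow field, and `heading` is in radians. The 0.1 radian
# turn size is an arbitrary illustrative choice.

def rheotaxis_step(gradient_change: float, flow_rotation: float,
                   heading: float) -> float:
    if gradient_change <= 0:
        return heading                          # gradient decreasing: swim straight
    turn = 0.1 if flow_rotation > 0 else -0.1   # turn with the rotation direction
    return heading + turn                       # gradient increasing: turn
```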

That’s an algorithm, both intuitively and formally. It’s a finite set of instructions to solve a given function, and it can be implemented with a Turing machine. Of course, the zebrafish probably isn’t engaged in a step-by-step program that looks anything like a computer program on your laptop. It must be using analog sensors, an array of neurons with fluctuating voltages, stochastic transmitter release, etc. Nonetheless, Oteiza et al. are right: the zebrafish is using an algorithm to solve this ecologically relevant problem. That makes the zebrafish larva’s nervous system a computer.

Hence, when we consider both the formal definitions and the common usage of the word “computer”, it is perfectly reasonable to say that the brain is a computer. Any brain is a sophisticated device capable of implementing a very large set of algorithms to solve many different computable functions. It should be noted that where we draw the line is unclear, e.g. is a single cell a computer? What about DNA? Both can implement more functions than a rock. We have to accept that there is a subjective decision to be made in these borderline cases. The definitions do not say how many algorithms a machine needs to implement to be a computer. But, given that the set of neural networks is Turing complete (and therefore also probably the set of possible brains), it is pretty clear that brains deserve the title “computer”. Therefore, the brain is a computer.

It’s not a metaphor, it’s not an analogy, it’s a fact.

A second hilarious brain as computer image available for download

Objections to the “brain is a computer” argument

When I’ve advanced this argument in public, I’ve encountered a number of objections, some very strong and worth considering, others indicating a basic misunderstanding of the logic. I’ll try to briefly address the common objections I’ve encountered here, moving from objections founded on misunderstanding to valid objections that genuinely need to be contended with:

Objection 1. Brains are nothing like Turing machines (or laptops), therefore they’re not computers

This objection is borne of a misunderstanding of the definitions above. Nothing in the definition of “algorithm”, “computable function”, or “computer” states that computers must be like Turing machines or Von Neumann architecture machines. A computer need not operate with binary codes, it doesn’t need to use discrete-time processes, it doesn’t need registers or memory banks, or step-by-step instructions — it doesn’t need to be remotely like our laptops or phones for the definitions to apply.

The definitions simply say that algorithms are things that you can do with a Turing machine, and computers are things that run algorithms. Importantly, those definitions do not state any practical limits on how a Turing machine could do it. So, for example, if you have an algorithm that uses analog, continuous processes (like the zebrafish algorithm for swimming upstream probably does), then the corresponding Turing machine will potentially have to run for a very long time in order to approximate that analog process to sufficient precision. But, Turing machines aren’t real machines! They’re mathematical constructs, so there’s no problem with this. Nothing in the definition of “algorithm” says that it must be economical to implement an algorithm with a Turing machine. As long as you are solving computable functions, you’ve got yourself a computer running algorithms. Thus, it doesn’t matter that the brain is nothing like a Turing machine (nothing is, really) or nothing like your laptop. Yes, the brain is a jumble of cells using voltages, neurotransmitters, distributed representations, etc. Yes, it has no programmer, and yes, it is shaped by evolution and life experience. None of that matters. As long as we don’t require infinite precision in our description of neural processes (and I see no reason to think we would), the brain is a computer running algorithms, according to the definitions provided by the Church-Turing thesis.

Objection 2. Just because you can simulate the brain with a Turing machine doesn’t make the brain a computer

This objection is related to the first objection, in that it stems from a failure to understand the definitions and the role of Turing machines in those definitions. The thrust of this objection is that even though we can run a Turing machine for as long as we want in order to implement analog processes, we’re really just simulating those processes with a Turing machine, so we cannot say that they count as something a Turing machine can do. Thus, the argument goes, Turing machines can’t actually do what brains do, so brains aren’t running algorithms.

At the risk of sounding like a broken record, the point that must be returned to here is that Turing machines are not machines. We don’t simulate anything with Turing machines, just as we don’t “simulate” things with lambda calculus. We use Turing machines to mathematically determine whether a particular process counts as an algorithm or not. Therefore, if we can demonstrate mathematically that a given analog process can be implemented by a Turing machine to any arbitrary degree of precision, then we have demonstrated that we have an algorithm. Yes, we simulate many things, including brains, with our Von Neumann machines. But, Von Neumann machines are not Turing machines. It’s important to remember this, because many people fall into the trap of thinking that computers are defined as Turing machines, when they most certainly are not. No physical object is a Turing machine, just as no physical object is an algebra.

Objection 3. Computers cannot explain the mind, therefore the brain is not a computer

This one is tough because it is a thorny philosophical argument, and how one reacts to it depends on one’s philosophical stances. Many philosophers have argued that purely mechanical accounts of the brain cannot account for the mind, and given that the definition of computer provided above is very much mechanical, it is true that if the mind cannot be accounted for by purely mechanical means, then there must be something missing when I say “the brain is a computer”.

Ultimately, as I noted in the introduction, I do not have time to contend with the long debates on this issue in philosophy of mind and cognitive science here. All I will say is that I personally find all of the philosophical objections to the idea that the mind can arise from mechanical processes unconvincing. Whether it’s Searle’s Chinese Room argument, Chalmers’s zombies, or Jackson’s Mary the scientist, I’ve never been persuaded that there’s anything really worthy of serious consideration for neuroscientists in these thought experiments. I’m much more convinced by the works of philosophers like Daniel Dennett and Patricia Churchland, who seem much more clear-eyed and logical to me. But hey, that’s my perspective. If someone rejects the idea that the mind can be reduced to physical mechanism, then I recognize that my arguments above are unconvincing. But anyone who is comfortable with reductionism, and who understands the definitions above, should also recognize that the brain is a computer by definition.

Objection 4. Brains can implement non-computable functions, therefore the brain is not a computer

This is a very interesting argument that has been advanced by various people. The physicist Roger Penrose has claimed that humans have various insights and behaviours that require non-Turing computation and quantum effects for explanation. Like Dennett, I consider Penrose’s claims questionable. It’s not clear from scientific evidence that human behaviour requires any radical rethinking of computation or physiology.

But, there are other more serious attempts in this vein. Notably, Hava Siegelmann, who provided one of the proofs that recurrent neural networks are Turing complete, also published an article in Science in which she argued that recurrent neural networks can, in fact, be super Turing. This means that they can solve the set of computable functions (as defined by Turing machines) and more, i.e. they can implement algorithms that Turing machines cannot. It’s a very interesting result. I will admit that I find Siegelmann’s proof in that paper hard to understand, so I prefer to remain agnostic as to whether this is an accepted fact. (I will note, though, that if Siegelmann’s proof is valid, it calls for a revision of the Church-Turing thesis, so I wonder why computer science has not apparently taken this leap.)

But, for argument’s sake, let’s say that Siegelmann’s proof is correct, and that recurrent neural networks are super Turing. That would suggest that brains are also super Turing. If the space of possible brains is super Turing, then someone, someday, could engage in computations that no laptop could ever do. In that case, arguably, the brain is not just a computer, it’s a super computer. This would only make my case that the brain is a computer stronger.

Objection 5. The brain has many constraints placed on it by evolution that would prevent it from being Turing complete

This objection raises some important points. Certainly, we know that evolution can take odd paths to get to certain physiological functions, and brains are likely no exception. Moreover, brains face major constraints on energy, space, and time, and their memory capacity is limited by the size of our mnemonic networks. Thus, the chances that the entire set of computable functions could be implemented by any real brain are zero.

All of this is true, but it fails to recognize that many of the same things could be said about our trusty Von Neumann machines. Though we seem to have little concern for our energy usage as a species, the reality is that we cannot devote infinite energy to our digital computers. Moreover, although the cloud is growing rapidly, we still don’t have infinite memory capabilities (at least, not us mere mortals who don’t run big tech companies). As such, your laptop also has many functions that it cannot calculate for you given reasonable energy and memory constraints. That doesn’t stop you from calling your laptop a “computer”, because it is still an incredibly powerful machine for solving computable functions with algorithms, just like our brains.

Of course, the energetic and evolutionary constraints that impinge on brains are very different from the constraints on our Von Neumann machines, and that is also important. As we compile the principles of neural computation in the coming century, some of those principles will surely have to do with the unique constraints that brains face. But, that doesn’t render brains non-computers, it makes them a special kind of computer, one shaped by evolution.

Objection 6. These definitions are useless and potentially misleading

This objection is potentially valid. As I noted, the technical definitions of “computer” and “algorithm” are so abstract that they can be applied to everything in the universe. As I argued above, I think it’s reasonable to restrict their usage to machines, like the brain, that not only solve the functions of physics, but a much larger array of computable functions, potentially even all of them (assuming the space of possible brains is Turing complete). But, what do we achieve by applying the words “computer” and “algorithm” to the brain? How does that help us to understand the brain, given that the formal definitions are mechanism agnostic? If many people do not know these formal definitions, and they think that “computers” are Von Neumann architecture machines and “algorithms” are discrete step-by-step instructions for Von Neumann architecture machines, then we run the risk of grossly misleading these people when we use these words in relation to the brain. My response to this objection is twofold.

First, there are many terms that scientists use formal definitions for, but which many people misuse. For example, the term “significant” means something very specific in statistical hypothesis testing, even though its usage in common language means anything that’s “big”, “meaningful”, or “impactful”. Should we stop using the term “significant”, then? I don’t think so. I think we merely need to be careful to articulate clearly to people what we mean when we use this word.

I think it’s the same with “computer” and “algorithm”. When we use these words, we need to be clear what definitions we’re operating with. If speaking to the lay public, that means highlighting the fact that the brain is not a computer that is anything like their laptops or phones, lest they think otherwise. If speaking to other scientists, it means articulating clearly the formal definitions and asserting only that the brain implements a large array of algorithms to solve many different computable functions.

Second, I think we actually gain something when we say the brain is a computer. There is a rich vein of theories in computer science that are mechanism agnostic, including computability theory, computational complexity theory, and information theory. When we note that the brain is a computer, we are highlighting that these theories are applicable to the brain — which they are! Neuroscientists can, and do, examine how the brain implements various algorithms, how it deals with computationally intractable functions, and how it does things like store, compress or transmit information. All of this scientific activity is useful, and it derives from our understanding that the brain is a computer.

The desire to reject the statement “the brain is a computer” out of fear that it misleads those of us who don’t quite understand or know the definitions is throwing the baby out with the bathwater. Can we not simply work hard at being precise and transparent in our definitions? Need we reject a true statement that feeds various streams of scientific inquiry simply because some people don’t actually understand what it means? This seems overly myopic and pessimistic to me. I think neuroscientists can, and should, use the formal definitions of “algorithm” and “computer” provided by computer science, and simply be careful not to overstate what those words imply.

One last ridiculous brain as computer image for the road…

I shouldn’t count my computers before they halt…

That last argument, that it’s useful to say the “brain is a computer”, is really the key one. For example, if in the distant future of neuroscience it turns out that information theory provides little in the way of helpful insight, then maybe using these abstract definitions and applying them to the brain has been a silly waste of time. Maybe I’ve just been wasting my glucose on banging out this argument. We know the brain works nothing like our laptops or phones, so maybe all I’ve ever done is confuse more people than I’ve helped with this argument.

Obviously, my bet is that this will turn out to be important for understanding the brain. Indeed, as I stated in the introduction, I think that neuroscience is just as much a branch of computer science as it is of physiology and psychology. I am one of a number of researchers who view artificial intelligence and neuroscience as intimately related, and who consider artificial intelligence as key to theoretical inquiry in neuroscience. Perhaps people like me who think this way are just part of a trend that has grown into a bubble due to hype in machine learning. Perhaps… but this line of thinking certainly predates today’s hype in artificial intelligence. Whether it will be important to recognize that the brain is a computer according to the definitions of computer science is something that we cannot fully grok at this moment in history. Time will tell…
