Are Biological Brains Made Of Only Discrete Logic?
I’ve come up with a perhaps controversial opinion as to how biological brains work. I am posting this to facilitate more discussion. I have two opinions, the second more surprising than the first. My first opinion is that biological brains, more specifically human brains, are intuition machines. Intuition is that parallel cognitive process that we develop by learning using induction. Said differently, we learn from experience. We can’t just upload knowledge of Kung Fu and instantly master the art. Humans require years of practice, perhaps 10,000 hours, to gain mastery of a skill. Anil Seth reaches the same conclusion: he argues that we are all “beast machines”.
The failure of Good Old Fashioned AI (GOFAI) may be due precisely to the fact that human cognition is not based on logic. Rather, human cognition is a heuristic system that is heavily flawed but can react and adapt extremely rapidly. Rational thought and language are not intrinsic to our cognitive capabilities; rather, they are capabilities that our intuitive mind performs only with unnatural effort. Pei Wang has been working on an AGI system called NARS that takes this approach of beginning with heuristics rather than formal logic.
Computers are logic machines; they are several orders of magnitude more capable at performing logic than humans. A simple hand calculator has more arithmetic intelligence than any human alive. Yet, despite all this logic-crunching capability, computers are extremely brittle in their programming. In contrast, a house fly exhibits an order of magnitude more flexibility and adaptability than a supercomputer.
The second opinion is that brains function using discrete computation. Computers also function using discrete computation; that is, machines use a binary collection of NAND or NOR gates. NAND and NOR gates are universal logic components, and any universal computer can be constructed from either one of them alone. All the evidence about biological neurons points to the fact that their behavior is discrete. That is, neurons fire in discrete spiking events. There is very little evidence that brains are analog systems. That is, unlike an Artificial Neural Network that is informed by continuous mathematics, real neurons don’t work like analog systems.
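The universality claim above is easy to verify directly. Here is a minimal sketch (a statement about logic gates, not about neurons) showing that NAND alone suffices to build NOT, AND, OR, and XOR:

```python
# NAND is "universal": every other Boolean gate can be wired from it.

def nand(a, b):
    return 1 - (a & b)

def not_(a):        # NOT from a single NAND
    return nand(a, a)

def and_(a, b):     # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):      # XOR from four NANDs
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
```

The same construction works with NOR, which is why either gate alone is enough to assemble a universal computer.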
The conclusion is clear: the brain works more like a computer than like a continuous system such as the weather. I am of course not the first person to arrive at this controversial conclusion. Stephen Wolfram has in fact a more far-reaching conclusion: that all physical phenomena are driven by discrete computation. In his book “A New Kind of Science”, Wolfram explores the question of how science would look if computers had been discovered before Newton’s calculus.
Wolfram explains that complexity in nature can be attributed to the computational processing of simple components. There simply is no need for overly ornate, Byzantine mathematical theories; the root cause of complexity emerges from simplicity. Wolfram hasn’t developed an air-tight proof of this yet; however, it is worth wondering why, at the quantum level, matter (and energy) is discrete. The difficulty that Wolfram faces is that there simply does not exist a method to engineer (or learn) solutions using only discrete components.
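Wolfram’s favorite demonstration of complexity from simple components is Rule 30, an elementary cellular automaton whose entire update rule is a three-bit lookup, yet which produces a famously complex, aperiodic pattern from a single live cell. A minimal sketch (with fixed zero boundaries, for illustration):

```python
# Rule 30: each new cell = left XOR (center OR right).
# That one-line rule generates the chaotic Rule 30 triangle.

def rule30_step(row):
    padded = [0] + row + [0]  # treat cells outside the row as 0
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]   # a single live cell in the center
for _ in range(2):
    row = rule30_step(row)
print(row)  # → [0, 1, 1, 0, 0, 1, 0]
```

After two steps the characteristic asymmetric triangle is already forming, even though each cell consults only its two neighbors.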
However, from this perspective of brains being discrete, how does it happen that our brains are well adapted to continuous behavior? Why does Deep Learning work so well at approximating biological cognitive behavior when it’s built on top of continuous mathematics? How can one train discrete systems to learn the way Deep Learning systems do?
Deep Learning systems have a surprising characteristic: they don’t require high-precision arithmetic. This is in stark contrast to computational science workloads, which require double-precision mathematics. The present trend in Deep Learning is to employ smaller-precision mathematics. At present, 16-bit floating-point precision appears good enough. Google’s first-generation Tensor Processing Unit (TPU) used 8-bit fixed-precision arithmetic. There are also several research papers that look at binary- or ternary-based systems. The best known of these is XNOR-Net, whose creators founded a startup (XNOR.ai) that was able to raise $2.6M to explore deep learning in small device configurations.
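To see why 8-bit fixed precision can be enough, here is a hedged sketch of the basic idea behind integer quantization: store weights as small integers plus one shared scale factor, accepting a bounded rounding error. This is illustrative only, not the TPU’s actual scheme or any framework’s API:

```python
# Quantize real-valued weights to 8-bit integers with a shared scale.
# The reconstruction error is at most half a quantization step.

def quantize(weights, bits=8):
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    q = [round(w / scale) for w in weights]   # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.73, -0.41, 0.05, -0.98, 0.22]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2   # error bounded by half a step
```

The networks tolerate this error because training already copes with far larger sources of noise, which is why precision can keep shrinking.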
XNOR-Nets and their other discrete cousins are not as accurate as higher-precision networks. However, they offer up to 32 times memory savings and up to 58 times faster convolutions. You won’t see as much research in this area because there is a discipline-wide bias towards higher accuracy; the discipline gives much less importance to resource-efficient networks. The prevailing orthodoxy here is that deep learning networks are approximations of continuous systems. The fact that XNOR-Nets work at all is glaring evidence that the use of continuous mathematics is more a matter of convenience than of necessity.
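The core trick in XNOR-Net is to approximate a real-valued weight vector W by α·sign(W), where α is the mean absolute weight, so a dot product reduces to sign flips plus a single multiply. A toy sketch of that approximation (illustrative numbers, not the paper’s code):

```python
# Binary-weight approximation: W ≈ alpha * sign(W), alpha = mean(|W|).

def binarize(weights):
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return alpha, signs

def binary_dot(x, alpha, signs):
    # only additions/subtractions of x, then one scale by alpha
    return alpha * sum(xi * s for xi, s in zip(x, signs))

W = [0.9, -0.5, 0.3, -0.7]
x = [1.0, 2.0, -1.0, 0.5]
alpha, signs = binarize(W)
exact = sum(wi * xi for wi, xi in zip(W, x))
approx = binary_dot(x, alpha, signs)
# the binarized product is coarse but preserves the sign of the output
assert (approx < 0) == (exact < 0)
```

With binary weights and binary activations, the multiply-accumulate collapses further into XNOR and popcount operations, which is where the speedups come from.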
Extending one’s research to the extreme, towards discrete systems, runs counter to the prevailing wisdom. However, what if the prevailing wisdom is entirely wrong? What if deep learning systems should be designed more like brains? That is, what if deep learning systems should use only discrete components? What if we got rid of our crutches (i.e. continuous mathematics) and accepted the more intractable space of discrete mathematics?
The problem at first glance is that ‘intuition machines’ and ‘discrete computation’ appear conceptually at odds with each other. However, when we speak about ‘intuition machines’, we speak more about the kind of reasoning that is being performed. There is no reason why you can’t program a computer to perform heuristic reasoning. The problem is that programmers aren’t very good at taking a collection of heuristic rules and building them into a complex system of rules that avoid stepping on and invalidating each other.
Our brains simply don’t have the capacity to handle hundreds, much less millions, of rules. We can’t program these systems because we just don’t have the mental capacity to do so. What we do instead is create Machine Learning systems that program these rules for us. Random Forests and their relatives are one example of these rule-creation systems. Artificial Neural Networks (i.e. Deep Learning) are another kind, but with fuzzier rules. So, intuition and discrete computation are not conceptually at odds. Intuition is computation, and it doesn’t have to be done using an analog system.
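The simplest possible illustration of a machine “writing the rule for us” is a decision stump, the one-split tree that Random Forests are built from. A toy sketch in plain Python (the data and names are invented for illustration):

```python
# Induce a single threshold rule from labeled examples:
# no human writes the rule; it is extracted from the data.

def learn_stump(xs, ys):
    """Find the threshold on one feature that best separates the labels."""
    best = None
    for t in sorted(set(xs)):
        preds = [1 if x >= t else 0 for x in xs]
        errors = sum(p != y for p, y in zip(preds, ys))
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

# e.g. classify "tall" (1) vs "short" (0) from height in cm
heights = [150, 155, 160, 175, 180, 185]
labels  = [0,   0,   0,   1,   1,   1]
threshold = learn_stump(heights, labels)   # → 175

def rule(h):
    return 1 if h >= threshold else 0
```

A Random Forest just builds thousands of such rules over random subsets of data and features, then votes, which is exactly the rule-management task humans are bad at doing by hand.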
Representing biology using discrete computation is actually not a new idea. Boolean Networks have been used to model biological regulatory processes. One may think of this as a crude approximation of reality; however, there are enough research results revealing their effective predictive value (in convergence and robustness). So the idea of biological brains being made up of gates isn’t an out-of-this-world one.
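A Boolean network is just a set of on/off nodes, each updated by a Boolean function of the others; despite that crudeness, the trajectories converge to attractors that map onto observed cell behavior. A minimal sketch with an invented three-node wiring (the regulatory functions here are for illustration, not a real pathway):

```python
# Three-node Boolean network: iterate until the state revisits itself,
# i.e. until it falls into an attractor.

def step(state):
    a, b, c = state
    return (b and not c,   # A is activated by B, repressed by C
            a,             # B copies A
            a or b)        # C is activated by A or B

state = (True, False, False)
seen = []
while state not in seen:
    seen.append(state)
    state = step(state)
print(state)  # → (False, False, False), a fixed-point attractor
```

The attractors of networks like this are what gives the model its predictive value: different attractors correspond to different stable cellular states.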
To answer the question of this post: no, I don’t think a brain is composed of NOR or NAND gates. I do, however, think it is composed of similar discrete universal gates, most likely of the programmable variety. I am also not saying that it is uniformly of one kind. Evolution has a habit of selecting for diversity, so it is likely a smorgasbord of discrete gates. What I don’t believe is that brains are made up of analog components (i.e. there is no evidence that neurons are analog; they either fire or they don’t) or that brains use quantum effects (as postulated by Penrose, without any evidence).
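By a “programmable gate” I mean something like the lookup table (LUT) that FPGAs are built from: one and the same component computes any Boolean function of its inputs, depending on how it is programmed. A hedged sketch (the class name is mine, purely illustrative):

```python
# A 2-input LUT: the truth table *is* the program, so one component
# can be configured as XOR, NAND, or any other 2-input gate.

class LUT2:
    def __init__(self, truth_table):
        # truth_table maps (a, b) -> output bit
        self.table = dict(truth_table)

    def __call__(self, a, b):
        return self.table[(a, b)]

xor_gate  = LUT2({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
nand_gate = LUT2({(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0})
assert xor_gate(1, 1) == 0
assert nand_gate(1, 1) == 0
```

A smorgasbord of such reprogrammable components is a closer analogy for what I am proposing than a fixed sea of NAND gates.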
Update: thermometer encoding is resistant to adversarial attacks: https://openreview.net/pdf?id=S18Su--CW