Five Better Questions

George McKee
10 min read · Feb 12, 2019


Part 7 of Is “Is the brain a computer?” even a good question?

It’s too late to change the answer, and it distracts from the really useful questions about the relations between computers and brains. Nevertheless, a deeper look finds that brains stretch the definition of computing, perhaps beyond the breaking point.

This is part 7 of a series of brief essays (sometimes very brief) on aspects of this question. Part 1 contains the introduction and an index to the whole series.

Five Better Questions

We’ve seen that the equivalence of brains and computers, loosely speaking, has become deeply embedded in human cultures, and that from a theoretical perspective, there’s an important threshold in the capacity of computer architectures that is crossed when they achieve universality. So questioning whether “the brain is a computer” is probably both fruitless and misguided, even if we clarify whether by “the brain” we mean just the human brain, or all brainlike neural structures in all of evolutionary history. Can we ask better questions in a similar vein? Here are five big questions about brains that I think would be productive. Substantial portions of these questions have already been answered, but it appears that other big pieces of these puzzles remain unknown or without consensus, in contrast to the settled nature of digital computability theory. I’d be pleased to be shown that I’m wrong in this assessment.

1- When in evolutionary history did brains become universal computers?

We can approach this question by looking at the capabilities needed to achieve universality. Universality needs symbols. It needs a way to sequence operations on those symbols and to manipulate those sequences. It needs to be able to store and recall those symbols in long-lasting memories whose capacity can be expanded indefinitely. In short, language and culture seem to be prerequisites for brains to provide the kind of computational universality that our computers have had since the Turing-Church-Post breakthrough of the 1930s. These are capabilities that systems neuroscience is very far from being able to model effectively.
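To make those three requirements concrete, here is a minimal sketch of a Turing-style machine in Python (my own toy construction, not a claim about neural implementation): the symbols are explicit tokens, the rule table sequences operations on them, and the tape is a memory that can grow without bound.

```python
# Minimal Turing-style machine illustrating the three ingredients of universality:
# explicit symbols, a rule table that sequences operations on them, and a tape
# (memory) that can be extended indefinitely.  The machine and its rule set are
# toy examples invented for illustration, not a model of any neural process.

def run(tape, rules, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))            # sparse tape: grows on demand
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")        # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Rule table for a toy task: scan right past the 1s and append another 1
# (i.e., unary increment).  Each entry: (state, read) -> (write, move, next state).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run("111", rules))   # -> "1111"
```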

2- Are universal analog control systems even possible?

This question depends on the answer to a precursor question: Is it possible for unaided, illiterate brains to implement symbols? Under the banner of the “physical symbol system hypothesis”, philosophers and cognitive scientists have asserted that no cognitive processes occur that are not based on symbols. It’s time for theoretical neuroscientists to step into this debate. What are the neural correlates of distinct symbols? That is, what kind of neurodynamical phenomena could even keep the neural representations of different symbols separate from each other? In machine learning systems, the discriminant weights on the artificial neurons are artificially frozen in order to keep the patterns that they recognize separate. It’s clear that brains do not have an external “stop learning now” switch. But is there some kind of internal switch or dynamical process that freezes the shape of the attractor landscape formed by a fixed symbol system? If such a freezer doesn’t exist, and the boundaries of the portions of the neurodynamic landscape that embody different symbols are eternally fluid, is the notion of a fixed set of symbols to reason with even coherent? What are the neural system (and artificial neural network) properties and evolutionary processes that lead to the emergence of symbols and symbolic operations? If brains do not implement fixed, finite sets of symbols, then the entire theoretical framework for what it means to compute something, the framework that follows from the Church-Turing thesis and has been built up over 80 years of work by thousands of researchers, is undermined for biological systems, and brains are not computers at a foundational level.
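The point about freezing can be illustrated with a toy attractor network. In the Hopfield-style sketch below (plain numpy; the pattern count and sizes are arbitrary choices of mine), the weights are learned once and then held fixed, so each stored pattern sits at the bottom of its own basin of attraction; if the weights kept changing, the basin boundaries would drift and the “symbols” would blur into one another.

```python
import numpy as np

# Toy Hopfield-style attractor network: each stored pattern acts like a discrete
# "symbol" sitting at the bottom of its own attraction basin.  The weights are
# learned once and then frozen, the analog of the "stop learning now" switch
# discussed above.  Patterns and sizes are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# One-shot Hebbian learning, then freeze.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Iterate the (frozen) dynamics until the state settles into a basin."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt a stored pattern and check that the frozen landscape restores it.
noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1
print(np.array_equal(recall(noisy), patterns[0]))   # usually True
```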

Brains defy the foundational assumptions of computation in another way: they don’t stop. One of the most fundamental properties of an algorithm is whether it terminates in a finite amount of time. But the activity of brains has neither well-defined start times nor well-defined completion times. Brain functioning emerges gradually as the organism progresses through embryogenesis, and stops only as a result of injury, disease, or loss of metabolic resources. The cognitive processing associated with wakefulness does not start or stop cleanly at the hypnopompic and hypnagogic transitions out of and into sleep states. When a brain or other control system implements a well-defined algorithm with a clear beginning and end, those boundaries are not intrinsic to the system; they’re an arbitrary artifact that exists only for the convenience of the researcher.
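A small contrast makes the distinction vivid. In the illustrative sketch below, the first function is an algorithm in the classical sense: it starts, runs, and halts with an answer. The second is closer to what a nervous system does: a control loop with no natural final state, which stops only when something outside it cuts it off.

```python
import itertools

# (a) A classical algorithm: well-defined input, well-defined halt, an answer.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a                      # terminates for every pair of positive integers

# (b) A control-loop caricature of ongoing brain-like activity: it has no "answer"
# and no built-in stopping condition; it simply keeps coupling sensing to acting
# until something external (here, an arbitrary step budget) shuts it down.
def control_loop(sense, act, budget=None):
    steps = itertools.count() if budget is None else range(budget)
    state = 0.0
    for _ in steps:
        state = 0.9 * state + sense()   # leaky integration of input
        act(state)                       # output is continuous, not a result

print(gcd(48, 36))                       # -> 12, and then it is done
# control_loop(lambda: 1.0, print)       # would run forever if uncommented
```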

What would a proof of the existence of universal analog control systems look like? It would begin with a sufficiently comprehensive mathematical framework for universal analog computation based on the topology of dense, real-valued manifolds, using the foundations developed by mathematicians like Ralph Abraham and Vladimir Arnold. It would then develop structures that would place the Mark I Fire Control Computer and the more general differential analyzers somewhere in a hierarchy of expressive power parallel to the digital hierarchy that runs from finite state machines up to Turing machines, a hierarchy that would likewise reach all the way up to universality and the ability that comes with it to emulate other systems, even other universal ones.

Kay McNulty, Alyse Snyder, and Sis Stump operate the differential analyzer in the basement of the Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, Pennsylvania, circa 1942–1945. [Image from Wikimedia]

It would then generalize that framework to event-frequency operations like those that are found in neural systems.
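For readers who have never met a differential analyzer, a caricature may help. The sketch below (a crude fixed-step discretization of my own devising) chains two “integrators” to solve x'' = -x, the kind of interconnection that the wheel-and-disc integrators of machines like the one pictured above performed continuously rather than in discrete steps.

```python
import math

# Toy "differential analyzer": solve x'' = -x by feeding the output of one
# integrator into the next, the way mechanical wheel-and-disc integrators were
# interconnected.  The real machine integrates continuously; this sketch uses a
# crude fixed-step discretization purely for illustration.
dt, t = 0.001, 0.0
x, v = 1.0, 0.0            # initial position and velocity

while t < math.pi:          # run for half a period
    a = -x                  # the "function table": acceleration as a function of x
    v += a * dt             # integrator 1: velocity is the integral of acceleration
    x += v * dt             # integrator 2: position is the integral of velocity
    t += dt

print(round(x, 2))          # close to -1.0, i.e. cos(pi)
```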

Because brains are grown rather than assembled, a fact that places important constraints on what is a possible brain, we also need to characterize the space of possible evolutionary-developmental paths along which such systems could evolve toward universality. Theoretical neuroscientists would need to work with morphologists to validate that the catalog of possibilities includes “brains” of box jellyfish, cephalopods, insects, chordates, etc. This work needs to recognize that when the number of cells in a brain exceeds the number of genes or morphogen molecules that specify that brain’s architecture, the mathematical structures that characterize the catalog of possible brains, and the possible behaviors that those brains could exhibit, need to be stated in terms of tissues and projection functions between tissues rather than in terms of single cells.

3- What are the possible ways that analog and neural network systems can implement noise reduction and dynamic range expansion?

These would be the equivalents of floating-point and multiple-precision arithmetic in conventional computers. Do biological neural systems implement any of these methods? If so, how might they have evolved?

Here’s one example: The human auditory system functions effectively with sounds varying in intensity by 13 orders of magnitude. Artificial computers handle enormous ranges like this using floating-point arithmetic, which stores a scale factor for each value separately from its significant digits. Is there some general process that allows the auditory system to process sounds in a consistent way across all these different intensities, or does intensity normalization just happen as a result of a collection of evolutionarily ad-hoc workarounds?
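The contrast can be made concrete with a few lines of arithmetic. In the sketch below (illustrative figures: roughly 1e-12 W/m² near the threshold of hearing, up to about 10 W/m² near the threshold of pain), floating point keeps an exponent separate from the significant digits so that both ends of the range get the same relative precision, while a logarithmic re-coding of intensity as level, of the kind sensory systems are often modeled with, collapses the whole range into about 130 dB.

```python
import math

# Two ways of taming a ~13-orders-of-magnitude intensity range.
quiet, loud = 1e-12, 1e1          # rough span of audible sound intensities (W/m^2)

# (a) Floating point: the exponent is stored separately from the mantissa, so the
# relative precision is the same at both ends of the range.
for x in (quiet, loud):
    mantissa, exponent = math.frexp(x)
    print(f"{x:9.1e}  mantissa={mantissa:.6f}  exponent={exponent}")

# (b) Logarithmic compression, as in the decibel scale often used to model sensory
# coding: the whole range collapses onto ~130 dB, a span that a bounded, noisy
# code could plausibly represent.
for x in (quiet, loud):
    print(f"{x:9.1e}  ->  {10 * math.log10(x / 1e-12):5.1f} dB")
```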

Here’s another example: attention provides for a “mental zoom” process that allows people to focus on fine details of a cognitive situation, or to expand their perspective out to “the big picture”. As a student of audition and vision I learned about multidimensional Fourier transforms, which can accomplish this kind of scaling via a simple linear shift in transform space. While the evolution of neural connection patterns that perform Fourier transforms seems unlikely, components of the Fast Fourier Transform are not only possible, but have been recognized in the nervous system of the worm C. elegans, and the laws of tissue patterning do not exclude the development of neural connection patterns that implement other types of convolution transforms as well. Datasets from neural connectomics projects may hold some answers.
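The “zoom as a shift” idea is easy to check numerically. In the sketch below (numpy only; the Gaussian test signal and grid sizes are arbitrary choices of mine), stretching a signal by a factor of two rescales its spectrum by the same factor, and on a logarithmic frequency axis that rescaling shows up as a pure translation of log 2.

```python
import numpy as np

# Numerical check that "zooming" a signal turns into a simple shift when its
# spectrum is read on a logarithmic frequency axis.  The Gaussian test signal
# and all grid sizes are arbitrary illustrative choices.
n, dt = 4096, 1e-3
t = (np.arange(n) - n // 2) * dt
freqs = np.fft.rfftfreq(n, dt)

def half_max_freq(signal):
    """Frequency at which the magnitude spectrum falls to half its peak."""
    mag = np.abs(np.fft.rfft(signal))
    return freqs[np.argmax(mag < 0.5 * mag.max())]

narrow = np.exp(-(t / 0.05) ** 2)
wide   = np.exp(-(t / 0.10) ** 2)        # the same shape, "zoomed out" by a factor of 2

f1, f2 = half_max_freq(narrow), half_max_freq(wide)
print(round(f1 / f2, 2))                 # ~2.0: scaling the signal rescales the spectrum
print(round(np.log(f1) - np.log(f2), 2)) # ~0.69 = log 2: on a log axis it is a pure shift
```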

4- What are the neural mechanisms that cause serially-ordered behavior?

In 1951 Karl Lashley presented a paper titled “The problem of serial order in behavior” that grappled with some obvious facts about behavior that couldn’t be captured by the neural models of the day. Nearly seventy years later, some progress has been made, but we remain very far from solving that problem. While Lashley didn’t even have a theoretical vocabulary to describe the phenomena that he was addressing in abstract, general terms, we can at least now say what we want to explain: how sequences of goal-oriented behavior that start and stop differ from unending cyclic functions, how the basic phenomena of universal grammar arise, namely the three R’s of repetition, reordering, and recursion, and how these phenomena combine with short-term variable structure creation, deletion, and modification in a stable long-term context to produce universal computational capability.

In his analysis, Lashley also seemed to accept the fallacy of a single solution to “the problem” in a complex, evolved biological system that is under no requirement to produce a single “master seriality pattern” but can adopt a multitude of ad-hoc solutions, each activated in its own context. We know that the master pattern cannot exist because of Rice’s theorem, which states that any nontrivial semantic property of a computation is undecidable; the best that any researcher or evolutionary pathway can do is to find partial solutions that work in some cases and may not work in others.
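For readers who have not met it, the flavor of Rice’s theorem can be conveyed by the standard reduction from the halting problem. The snippet below is textbook pseudocode in Python dress, not a model of anything neural; decides_property stands for a hypothetical oracle which, by this very argument, cannot exist.

```python
# Sketch of the standard argument behind Rice's theorem: if any nontrivial
# semantic property of programs were decidable, the halting problem would be too.
# Everything here is illustrative; `decides_property` stands for a hypothetical
# oracle that, by this very argument, cannot exist.

def decides_property(program):           # hypothetical oracle for some nontrivial property
    raise NotImplementedError("no such decider can exist")

def witness(x):                          # some program that HAS the property
    return x

def would_halt(program, data):
    """A halting decider built out of the hypothetical property decider."""
    def stitched(x):
        program(data)                    # diverges if `program` never halts on `data`
        return witness(x)                # otherwise behaves exactly like `witness`
    # `stitched` has the property exactly when `program` halts on `data`,
    # so deciding the property would decide halting, a contradiction.
    return decides_property(stitched)
```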

I can name at least five different neuropsychological architectures that lead to serial behavior: (0) coupled oscillators; (1) searchlight memory scanning, like the program counter of a conventional computer; (2) explicit next-step addressing, as was implemented in the IBM 650; (3) associative chaining; and (4) reaction-diffusion spreading activation. Our understanding of some of those architectures has advanced farther than others. Recurrently activated associative chains in computers have been known to be universal since the beginning, but how brains implement the context-sensitive variables that are essential to them remains mysterious. The way that a single network of coupled oscillators can implement different orderings of repetitive behavior is well-understood enough to be explained to school children, as Ian Stewart demonstrates in a Royal Institution video. Yet the conditions in which coupled oscillators can exhibit recursively structured, “fractal” sequences are apparently unknown. The capabilities of the other architectures seem even less well explored.
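To make the contrast between a few of those architectures concrete, the toy sketch below (all names and data structures are my own inventions) produces the same short action sequence three different ways: by scanning a stored list with a program counter, by explicit next-step addressing in the style of the IBM 650’s instruction format, and by associative chaining in which each action serves as the cue that retrieves the next.

```python
# Three toy ways of generating the same serially ordered behavior.
# All data structures and names are illustrative inventions.
actions = ["reach", "grasp", "lift", "drink"]

# (1) Searchlight / program-counter scanning: an external counter sweeps a stored list.
def by_program_counter():
    return [actions[pc] for pc in range(len(actions))]

# (2) Explicit next-step addressing (IBM 650 style): every step names its successor.
steps = {
    "a": ("reach", "b"),
    "b": ("grasp", "c"),
    "c": ("lift",  "d"),
    "d": ("drink", None),
}
def by_next_address(start="a"):
    out, addr = [], start
    while addr is not None:
        action, addr = steps[addr]
        out.append(action)
    return out

# (3) Associative chaining: each action is the retrieval cue for the next one.
chain = {"reach": "grasp", "grasp": "lift", "lift": "drink"}
def by_association(cue="reach"):
    out = [cue]
    while out[-1] in chain:
        out.append(chain[out[-1]])
    return out

assert by_program_counter() == by_next_address() == by_association() == actions
```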

Once you know how evolutionary and developmental principles might have generated these capabilities, you have the problem of recognizing how it actually happened. Connectome descriptions create marvelous pictures, but they’re static anatomy, and need to be supplemented with dynamical information in order to obtain a functioning model. Even then, a brute-force model that replicates complex behavior gives a very weak understanding of how the phenomenon actually works if it can’t be mapped into a modular description whose parts can be studied, modeled and comprehended individually.

5- What are the neural system (and artificial system) properties and evolutionary processes that lead to goal-oriented activity? How do these results relate to AI work in planning?

Observations of the effects of natural and surgically-created brain lesions, and many studies using other techniques, have made it well-known that essential aspects of the ability to make plans are located in the frontal lobes of human brains. Yet animals without frontal lobes, such as birds and even fishes and octopuses, have demonstrated the use of tools in goal-directed behaviors. The universal computational capability of human brains allows plans to be treated as first-class conceptual objects, which can be mentally reasoned about, transformed, and then behaviorally executed. The role of “planner” is even a certified professional skill. Nevertheless, a precise characterization of “plan” that can be applied equally to humans, non-human animals, and to robots and computers, remains elusive.

Researchers in this area should be aware that the notion of a goal to be achieved, commonly called an “intention”, comes perilously close to the substantially different philosophical concept of “intentionality”, which, due to limitations in the disciplinary culture and conceptual technology available to philosophers, is plagued by a tangled web of shifting grounds and inconsistent results. Researchers should also beware of projecting intentions onto behaviors that may well have evolved from random variations simply because of the reproductive advantage from their success, without any planning at all. Projecting intentions onto animals and inanimate objects is deeply built into many human cultures, if not all of them, and we have been provided with stories since our childhoods about the goals, triumphs and failures of anything that moves, be it steam locomotives, rabbits, boats with eyes to see through the fog, or gods that drive storms at sea. Even scientific writing about evolution itself is filled with pseudo-intentional statements such as apes evolving bipedalism in order to see over the tall grass on African plains.

By looking at AI systems that do planning, whether those plans are the abstract plans of classic AI planning expressed in languages like STRIPS, or the limb movement plans of robots, or the plans that deep learning artificial neural nets learn to construct for themselves, we can be confident that we understand the nature of those plans exhaustively, because we designed every component of them. With a detailed, precise understanding of what it means for a system to possess a plan, we will be able to place hard limits on the capabilities of natural brains, and show definitively that complex behaviors such as stone dropping by Conomyrma ants and Aphaenogaster ants using absorbent objects to carry liquid food are not planned intentional acts, because there are no structures in the ants’ brains capable of implementing any kind of plan.
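As an illustration of how completely an artificial plan can be inspected, here is a minimal STRIPS-flavored forward-search planner (the tiny fetch-the-cup domain and every name in it are my own toy choices, not the original STRIPS system). Every state, operator, and plan step is an explicit data structure that can be examined, transformed, and executed.

```python
from collections import deque

# Minimal STRIPS-flavored forward-search planner.  States are sets of facts;
# operators have preconditions, an add list, and a delete list.  The tiny domain
# below (a robot fetching a cup) is an invented illustration, not the original
# STRIPS system that ran on Shakey.
OPERATORS = {
    "go-to-kitchen":  ({"at-desk"},                 {"at-kitchen"},  {"at-desk"}),
    "pick-up-cup":    ({"at-kitchen", "cup-there"}, {"holding-cup"}, {"cup-there"}),
    "return-to-desk": ({"at-kitchen"},              {"at-desk"},     {"at-kitchen"}),
}

def plan(state, goal):
    """Breadth-first search through state space; returns a list of operator names."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        facts, steps = frontier.popleft()
        if goal <= facts:
            return steps
        for name, (pre, add, delete) in OPERATORS.items():
            if pre <= facts:
                nxt = frozenset((facts - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at-desk", "cup-there"}, {"holding-cup", "at-desk"}))
# -> ['go-to-kitchen', 'pick-up-cup', 'return-to-desk']
```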

Or maybe ants do make plans: in 1969 Shakey the robot was controlled by a “large” PDP-10 mainframe computer with 192K 36-bit words of memory and a 1 microsecond cycle time, and it contained an implementation of the STRIPS planner. An ant brain contains more than 240K neurons, each of which may use thousands of synapses for processing, and the interactions of graded potentials between nearby synapses happen at microsecond timescales too. Perhaps there is specialized planning firmware embedded in the ant brain as well. With the beacon of AI planning to keep researchers from getting lost in the details, neuroscientists should be able to find out.
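A rough back-of-envelope version of that comparison, using the figures quoted above plus an assumed average synapse count, looks like this:

```python
# Back-of-envelope version of the comparison above.  The PDP-10 and neuron figures
# are the ones quoted in the text; the synapse count per neuron is an assumed
# round number for illustration, not a measurement.
pdp10_bits = 192 * 1024 * 36               # 192K 36-bit words of main memory
pdp10_mb   = pdp10_bits / 8 / 1e6          # ~0.88 MB

ant_neurons  = 240_000                     # "more than 240K neurons"
synapses_per = 1_000                       # assumed average synapses per neuron
ant_synapses = ant_neurons * synapses_per  # ~2.4e8 adjustable connections

print(f"Shakey's PDP-10 memory: {pdp10_mb:.2f} MB")
print(f"Ant brain synapses (assumed): {ant_synapses:.1e}")
```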

Go on to Part 8

Go back to the Index

