Gödel and Human Consciousness
Is the mind a closed system?
The human mind is a strange thing. It is not only capable of thinking; it is capable of thinking about itself thinking. To understand ourselves we must become observers of ourselves, and all of us do this every day. The full range of this self-aware behavior seems to be possible only in humans. To a much lesser degree chimpanzees and orangutans also seem to experience a certain level of self-awareness, but the human is unique. The human will not only self-observe but also innovate based upon research, modify habitual behavior based upon experience, write and speak in order to influence, and lie in order to deceive. Only the human mind is capable of all of these astounding things.
Some believe that these astounding things happen as a result of very clever evolved algorithms executed by the brain. Others believe that, due to the revelations of Kurt Gödel, no conceivable collection of algorithms can possibly manifest human self-aware consciousness. In this essay, I discuss these two views and propose a third that may actually explain more than merely the mind.
Before going into some thoughts regarding Gödel’s Incompleteness Theorems, I need to describe the Aharonov-Bohm effect. What Drs. Aharonov and Bohm discovered was that, under the right circumstances, an electrically charged particle traveling through an electromagnetic potential will be measurably affected (its interference pattern shifts) even in regions where both the electric and magnetic fields are zero. After I read about this in Scientific American, I summarized it to my spouse like this: “If something isn’t there, but it may as well have been, that is enough.”
I have used that claim as a quantum mechanical excuse for all manner of tardiness, and for taking the last donut, more times than I can count. My summary appears satisfactory only to a listener without a physics background. It misses the far more interesting quantum mechanical implication of the finding: that an electromagnetic potential couples to the complex phase of a charged particle’s wave function. That doesn’t stop me from using my summary whenever it serves.
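For the record, the part my summary drops can be stated compactly. In the textbook treatment, a charged particle carried around a closed loop picks up a phase proportional to the magnetic flux the loop encloses, even when the field is zero everywhere along the path itself:

```latex
\Delta\varphi \;=\; \frac{q}{\hbar}\oint_{C}\mathbf{A}\cdot d\boldsymbol{\ell} \;=\; \frac{q\,\Phi_B}{\hbar}
```

The potential \(\mathbf{A}\) does the work; the fields never touch the particle. Hence my excuse: it wasn’t there, but it may as well have been.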
This isn’t the only case where a truly interesting scientific understanding may be easily summarized for the average listener in a way that loses the essential meaning. I shamelessly explained thermodynamics to my infant children as follows:
- The first law: “Everything is somewhere.”
- The second law: “Everything falls apart.”
- The third law: “When it’s cold enough everything stops.”
While these descriptions are hideously inadequate, my six-year-old daughter nonetheless disputed her friend’s claim that we would have our Toyota Corolla forever, explaining that we couldn’t have it forever because the second law of thermodynamics would forbid that. Thus, I argue that she understood the essence of the law.
We average humans remember complex principles with simple mnemonics of this sort. In a conversation with a friend at Sun Microsystems regarding the Common Information Model (CIM), we were discussing how the model was becoming fantastically complicated. The goal of the project was to provide a detailed description for all of the parts that made up an informational resource. My friend said that they were modeling not only the magnetic disk storing the data, but the actual voice-coil driving the arm holding the data read/write head. This level of rigor was becoming a problem. “Eventually,” he said, “they’ll be modeling the wires in that voice coil.”
Listening to him, I recalled Russell’s Paradox and responded with a loose paraphrase of it. “You know,” I said, “a closed system may be either complete or consistent. It cannot be both.” This is a piteous approximation of Russell’s insight, but it was actually meaningful in this case. The CIM folks were trying to emulate everything within an information model. It was not a “closed system” in the mathematical sense, but it was close enough that the observation was useful. In fact, there was no reason to model every little piece of the system. More discrimination was needed to ensure that only those components crucial to understanding the model were included. The CIM team had become obsessed with modeling everything, even the parts that provided no real value to an outside observer.
There are highly complex principles which may be explained in fairly simple ways. Russell’s Paradox, discovered in 1901 as Russell examined Gottlob Frege’s attempt to place mathematics on a complete and consistent logical foundation, is typically explained using this analogy:
A librarian decided it was necessary to index all of the books in the library. That would mean taking a blank book and entering into it the title and location of every book in the library. Having accomplished that goal, the librarian proudly displayed the index in a place of honor near the librarian’s desk.
Suddenly, the librarian realized that the book just created was not recorded in the index even though it, itself, was now a book in the library. Now the librarian had a choice: enter the index into itself (which made no sense, since someone consulting the index already had it in hand), or create a new index, one which contained all the books in the library including the first index. With that choice, of course, the librarian would then need to create yet another index containing the books in the library and also the two indexes just created, and so on forever.
Consistent with a general understanding of Russell’s thesis, we see that a simple system, like a library, may be cataloged; but such a catalog may be either complete (the index lists itself, an uncomfortable self-reference) or consistent (each index demands a further index, forever, so the job is never finished). Russell showed that apparently straightforward systems can harbor untidy dilemmas. In a number of seemingly well-founded systems of knowledge, you have to make a choice: you may be either complete or consistent.
Russell demonstrated that Frege’s effort to provide a complete and logically consistent foundation for set theory was built upon sand. Mathematics was not fully explained by its own self-contained elements and rules. Russell later tried to solve the problem in a number of ways, eventually teaming up with Alfred North Whitehead on Principia Mathematica, an attempt to derive the basic rules of mathematics from pure logic. It was a massive undertaking which left mathematicians impressed by the rigor but disappointed by the result.
In the end the problem was set aside by essentially outlawing that kind of definition: axiomatic set theory, in the now-standard Zermelo-Fraenkel form, restricts how sets may be constructed. When considering the original problem, though, it remains possible to describe a paradoxical set if you want to. The set of all sets which do not contain themselves was Russell’s original example. Such a set cannot exist consistently: if it contains itself, it violates its own defining condition; if it does not, that same condition demands that it must.
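The paradox can even be sketched in a few lines of code. This is a toy of my own, not anything from Russell: if we model a “set” as a membership predicate, asking whether the Russell set contains itself sends the program chasing its own tail.

```python
# Toy model: a "set" is a predicate, i.e. a function that answers
# whether a given thing is a member of it.

def russell(s):
    # The Russell "set": it contains exactly those sets that do NOT
    # contain themselves.
    return not s(s)

# Does the Russell set contain itself? Evaluating russell(russell)
# requires first evaluating russell(russell): there is no stable answer.
try:
    russell(russell)
    verdict = "settled"
except RecursionError:
    verdict = "no stable answer"

print(verdict)  # → no stable answer
```

Python gives up with a recursion error where logic gives up with a contradiction, which is about as faithful as a ten-line cartoon can be.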
Gödel’s Paradox and the Self
In 1930, at a meeting in Königsberg, Kurt Gödel announced a new and startling theorem. He proved that there are fundamental limitations to mathematics: no consistent formal algorithmic system rich enough to describe the natural numbers can prove all truths about the natural numbers. He took it further, demonstrating that no such system can prove its own consistency. These are what we know as Gödel’s Incompleteness Theorems.
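For readers who want it slightly less compressed, a standard modern paraphrase of the two theorems (not Gödel’s original 1931 wording) runs:

```latex
\textbf{First:}\ \text{if } T \text{ is a consistent, effectively axiomatized theory containing basic arithmetic,}\\
\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \neg G_T.\\[4pt]
\textbf{Second:}\ \text{moreover, such a } T \text{ cannot prove its own consistency: } T \nvdash \mathrm{Con}(T).
```

In words: a truth the system can state but cannot settle, and a system constitutionally unable to vouch for itself.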
Russell demonstrated that Frege’s grand project to fully define mathematics as a complete and consistent system was impossible. Frege’s system could be complete or consistent but not both. In much the same way, Gödel demonstrated that Hilbert’s grand project to find a complete and consistent set of axioms fully defining mathematics was also impossible. Such a system could be complete only if it was also inconsistent.
Yes, it can be argued that the fiendish predicate described by Russell is different from the mathematical structure described by Gödel. Gödel’s closed mathematical system is not a library, and yet the general principle can be explained in plain terms and seems to hold for a number of common situations. This should not be surprising; mathematical truths often illuminate or emulate observable truths in the real world. Both men were followed by others who demonstrated further basic problems with formal algorithmic systems. Such systems are incapable of fully explaining their own facts. They may be understood as complete and consistent things only if they are analyzed by an external system not limited by the rules of the original. Certain modern scientists argue that these very theorems prove that human self-aware consciousness cannot be algorithmic.
Sir Roger Penrose, polymath and Nobel laureate, argued this in his book Shadows of the Mind:
But a powerful case can also be made that [Gödel’s] results […] established that human understanding and insight cannot be reduced to any set of computational rules. For what he appears to have shown is that no such system of rules can ever be sufficient to prove even those propositions of arithmetic whose truth is accessible, in principle, to human intuition and insight — whence human intuition and insight cannot be reduced to a set of rules.
Penrose is focused on demonstrating that projects intended to find the algorithms that comprise human consciousness must fall short. He is encouraging approaches to understanding consciousness that he believes will be more successful. Penrose builds on the work of Oxford’s John Lucas and has enlisted the help of Stuart Hameroff and Jack Tuszyński. Together they have advanced a very controversial theory that the mind is influenced by the other-worldly behaviors of quantum mechanics wherein the effects of wave function collapse introduce extraordinary forces into the thought process from which self-awareness may arise.
I do not advocate Penrose’s Orch-OR theory but I do find it intriguing. It is the kind of profound unconventional thinking that has made Penrose an intellectual icon; and, as a useful offshoot, great practical ideas often evolve from impractical or unproved proposals. This notion that the mind may be a quantum construction is intriguing in other ways. Quantum theory tends to postulate phenomena that violate our conventional understanding of time and space. If the mind is quantum, then is it influenced by the higher dimensions wrapped up inside subatomic particles? Does it benefit from the occasional interaction of a particle with a non-local partner that has just changed state? Could this proposed physical phenomenon, taking place throughout the brain, actually induce something as ineffable as a self-awareness capable of introspection? This is part of what pushes the hypothesis to the very edges of respectable science.
Bland Boring Coincidence
If quantum effects are involved, there is an interesting non-local component. A single quantum system, such as an entangled electron pair, may span light-years and yet remain intimately coupled. The two electrons, though distant and distinct, remain a single system: a measurement of one instantly constrains what may be observed of the other, even though no usable signal passes between them. Such coupled particles permeate the cosmos and represent a dense complex of non-local relationships. When I think about that, I think about coincidence in general.
My son went on a walkabout around the age of twenty. My spouse and I didn’t know where he was or if he was even alive. One of our friends found herself in San Francisco at a human rights conference. While walking through the city, she saw our son in the distance. She greeted him, had a brief conversation with him, and was able to report back that he was alive and well.
I’m telling that story but my reader can easily list a dozen such stories: cases where the likelihood of an event was one in a million, but it happened anyway. Now a clear-headed scientist would respond with, “Hey, a million things happened that day, right? That was one of them.” While that statement is undoubtedly true, it remains unsatisfying.
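The clear-headed scientist’s reply can even be put to arithmetic. As a back-of-the-envelope sketch (the numbers are illustrative assumptions, not data): if a person registers on the order of a million small, independent events, a one-in-a-million fluke is more likely than not to turn up among them.

```python
# Chance that at least one "one-in-a-million" event occurs among a
# million independent events: 1 - (1 - p)^n.
p_single = 1e-6        # probability of any single fluke
n_events = 1_000_000   # events a person might register

p_at_least_one = 1 - (1 - p_single) ** n_events
print(f"{p_at_least_one:.3f}")  # → 0.632
```

Roughly a 63 percent chance, which is the scientist’s point. Whether it dissolves the unease is another matter.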
Yes, the human mind is a pattern-matching machine, and so whenever a pattern presents itself, sirens go off in the brain. The thousands of ordinary things give way to that one unusual event which remains in the memory for decades. The unusual event that stopped me from stepping into the elevator shaft in that dark building; the time I thought about the friend I hadn’t seen for years and received an email from that friend an hour later; the time I was so depressed that I couldn’t think straight and a friend I hadn’t spoken with for months called to say she was worried about me. Those are just a few readily remembered cases from my life.
Coincidence is so common that by the age of forty, most of us just assume it will happen on a pretty regular basis. While science argues against any mysterious psychic connection between minds, it is odd how often those one-in-a-million events occur. Is it really true that of the million things that happen, you just notice the odd incomprehensible event and that’s the whole story? Years of experiments into Extrasensory Perception have failed to yield conclusive results, and it’s hard to argue with that; but isn’t coincidence a little too common even when considered rationally? Set aside the event that could have been telegraphed through some Internet meme. Ignore those events that could have been predicted from your past behaviors. Consider only those cases where any foreknowledge of the event is unlikely. Are there enough such events that you are getting just a little suspicious? Could it be that this isn’t really extrasensory but interconceptual?
The Algorithm of Consciousness
So here’s an interesting corollary: let’s consider a thinking machine. Let’s imagine it’s a highly complex computer, maybe the Summit supercomputer. Imagine what would happen if this massive algorithmic powerhouse decided to understand itself. It would need to do one important thing. It would need to establish a vantage point outside of itself from which to observe its processes. This isn’t the same thing as a monitor program designed to log error messages or to time-out a failed process. This is much more interesting than that. The decision by Summit has to be a decision by more than Summit. It would have to be made by an outside observer not limited by the thing it observes. That outside observer is what we humans call the self.
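To make the contrast concrete, here is roughly what I mean by a mundane monitor (a sketch with hypothetical names, not Summit’s actual tooling): it observes another process from outside, but its goal, the timeout, is handed to it by a programmer and never questioned.

```python
import threading
import time

def slow_task():
    """Stands in for a computation that has hung."""
    time.sleep(3)

def run_with_timeout(target, timeout_seconds):
    # The monitor: watch the worker from outside and report if it
    # overruns. The goal (timeout_seconds) is supplied by a human.
    worker = threading.Thread(target=target, daemon=True)
    worker.start()
    worker.join(timeout_seconds)
    return "timed out" if worker.is_alive() else "finished"

print(run_with_timeout(slow_task, timeout_seconds=0.5))  # → timed out
```

Useful, even essential, but nothing here is the program stepping outside itself. The vantage point and the purpose both came from a human.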
This is the confusing part. No matter how you think about yourself, what is doing that thinking? For a computer algorithm, someone outside of the computer has to program that algorithm’s goal. Even Summit requires a human to provide it with goals. You, as a thinking human being, establish your own goals and there’s only one way that can happen: you observed what you were doing and, as if you were an outside observer, you sent instructions to yourself! Computers do not do this. I have written simple algorithms and expert systems. I have programmed neural networks and I have crafted dendritic trees that simulated human responses. From my experience, no algorithm and no neural network that I have devised or analyzed has ever questioned its purpose or developed an unpredictable goal. This is simply not what these algorithmic systems do.
Regarding the ability to make choices, consider this little excursion. Free will is magnificently controversial because we don’t really understand how people make choices. Nonetheless, we appear to do just that. Choices are not simply what computers do when evaluating a conditional. A human choice steps outside of its programming and writes new software. It doesn’t merely recognize a preprogrammed assertion and insert new code based upon a template, as is common in languages that permit code rewriting. A human choice is more like an algorithm that realizes it is deficient and then “writes new code” customized to that person’s history and experience. The complexity lies in the fact that this code is built up not only from data but from feelings and attitudes produced by the very non-algorithmic human limbic system.
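For contrast, here is the kind of mechanical code rewriting I mean (a toy of my own devising, using Python’s exec as the rewriting mechanism): the program stamps a new function out of a fixed template, but no step outside its programming ever occurs.

```python
# Template-driven code generation: the shape of the "new" code was
# fixed in advance by a programmer; only the blanks get filled in.
TEMPLATE = "def handler_{name}(x):\n    return x {op} {operand}\n"

def generate_handler(name, op, operand):
    source = TEMPLATE.format(name=name, op=op, operand=operand)
    namespace = {}
    exec(source, namespace)  # the program "writes" and loads new code
    return namespace[f"handler_{name}"]

double = generate_handler("double", "*", 2)
print(double(21))  # → 42
```

The program appears to write software, but every possible output was implicit in the template. A human choice, as I am describing it, is precisely what this is not.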
The Science of Zeitgeist
The problem of self-awareness is only part of it. Could these two puzzles of human experience be related: not just self-aware consciousness but also our shared experience, bland coincidence? How might both be explained? The problem with Penrose’s hypothesis is that it claims consciousness is not algorithmic, when the more interesting possibility is that consciousness is algorithmic but the algorithm is executed from another system outside the human brain.
All of this leads to the hypothesis upon which I have been musing for the past few months. Russell, Gödel and others have established their constraints on closed algorithmic systems, meaning that in order to establish a complete and consistent view, one must step outside of the system and use other rules that are not part of that system. Penrose tries to resolve these issues for the human mind by claiming that the system is not algorithmic and is therefore not limited by these constraints. I’m wondering if the mind, as a system, might be algorithmic but not closed. What if it is in fact merely a node in a larger system?
If quantum effects are involved at all, is it possible that the individual mind is merely part of something we might call a consciousness field? Could it be that the radical psychic experimenters of the 1970s were not quite as nutty as we recall, but were simply missing the quantum basis for their ideas? If the mind has any quantum component, could the nature of those interactions connect us all into a larger system, wherein our self is a construction within our local system made possible by the external rules of the larger shared field? That field could not comprehend itself (Gödel sees to that), but we, as components, could understand ourselves in relation to the field.
If there is some sort of higher-order coupling, it wouldn’t be a way of directly communicating mind to mind; but it could possibly be a repository of a general state of understanding. It could be a science of zeitgeist, a quantum theory of culture, a mathematics of tendency. It could explain why things often just tend to work out. It could explain why our friend picked that one street in the massive city of San Francisco on that one day. It wasn’t foreknowledge; it was merely that conscious beings seem to be strangely coupled. It may help us understand why subatomic particles tend to be observed in the form the experimenter expects (wave or particle). It may help us to understand how things like gay marriage suddenly became nearly universally acceptable after centuries of denial and repression. It may help us to understand how ill-conceived mass movements within societies arise mightily and then quickly fizzle out, as if the surrounding culture starved them of available energy.
Neoliberals remind us that we are isolated individuals, clawing and scrabbling to satisfy our own selfish interests. We as humans, however, crave community wherein we cooperate to further a better future for all. Could this craving be a simple expression of our coupled minds? Not some fantastic super power like ESP; but just a common shared understanding from which community may be manifested? Is community established only through the highly restrictive local phenomena of body-language and symbols; or, could community be a higher-order shared feeling? Could it be a shared consciousness generated from our brains’ shared participation in a shared cosmos-wide field? Could we benefit from exploiting this; from understanding it; or at least from imagining that it might be so?
Julian S. Taylor is the author of Famine in the Bullpen, a book about bringing innovation back to software engineering.
Orderable from your local bookstore.
Rediscover real browsing at your local bookstore.
Also available in ebook and audio formats at Sockwood Press.
This work represents the opinion of the author only.