Understanding Consciousness

A New View Anchored in Extended Naturalism

Gregg Henriques
Unified Theory of Knowledge
32 min read · Sep 7, 2024

--

By Gregg Henriques and John Vervaeke

This blog details our approach to understanding consciousness by offering our analysis in relationship to a presentation at the World Science Festival, What Creates Consciousness?, made last month by the physicist Brian Greene, the philosopher David Chalmers, and the neuroscientist Anil Seth. We compare and contrast Extended Naturalism’s framing of consciousness with their discussion and explain why it offers a richer and more coherent picture than the current state of the art.

We have developed a new approach to understanding human consciousness called Extended Naturalism (EN). It does not fit neatly into the traditional categories in the philosophy of mind, which are: 1) materialism (everything is matter), 2) idealism (everything is mind), 3) panpsychism (mind and matter are intertwined at all levels), and 4) dualism (mind and matter are fundamentally different substances).

EN offers a fresh perspective by providing an extended view of both the natural world as mapped by science and human consciousness, placing both within a larger philosophical framework.

In contrast to the classic “physical vs. mental” divide that frames modern discourse, EN is grounded in a clear model of emergence connected to a coherent naturalistic ontology that identifies the layers and levels in nature mapped by science. First, there is an “Energy-Information Implicate Order,” from which “planes of existence” emerge, named as follows: 1) Matter-Object; 2) Life-Organism; 3) Mind-Animal; and 4) Culture-Person. Our model combines John’s philosophical work on the relationship between reductionism and emergence with UTOK’s naturalistic ontology, as mapped by the Tree of Knowledge System and Periodic Table of Behavior. The combination results in a robust, coherent naturalistic ontology in which human consciousness can be placed much more readily than in a traditional materialistic view.

In terms of human consciousness, EN pushes the boundaries of current thought. Our respective work combines to give a metatheory that affords us an updated grammar and vocabulary for discussing mind and consciousness, along with related concepts like mindedness, behavior, self, and cognition (for more, see here). This richer conceptual toolbox allows us to cut through much of the confusion that exists in the literature. It also allows us to bridge what consciousness is (i.e., the philosophical “nature” question) with how consciousness functions in the world (i.e., based on theory, research, and phenomenology).

To help readers see why EN is a novel approach, in this blog we compare it to the current “state of the art” in consciousness studies. To do that, we follow a discussion, What Creates Consciousness?, held last month at the World Science Festival, in which the physicist Brian Greene, as host, interviewed the philosopher David Chalmers and the neuroscientist Anil Seth.

Introduction to the Discussion*

As Brian Greene introduced the topic, he offered the following comment:

In any discussion of consciousness, it is important to get one thing straight at the outset — we do not know what consciousness is.

We consider this opening line both problematic and revealing. It’s problematic because it mixes up two different issues related to understanding consciousness. Yet, it’s revealing because once this confusion is cleared up, we can place EN in relationship to the status quo.

We can first locate the statement in terms of its current intellectual lineage. It is a reference to David Chalmers’ well-known analysis of the “hard problem of consciousness.” Detailed in his influential book, The Conscious Mind: In Search of a Fundamental Theory, this is the argument that subjective conscious experience is an extremely difficult thing to explain from the vantage point of a standard, reductive physicalist approach. This makes it mysterious, which, in turn, leads to the common claim that no one truly knows what consciousness is.

The issue with this framing from our vantage point is that it overlooks the fact that we’re dealing with two distinct challenges when grappling with the hard problem.

First, there’s the philosophical challenge of developing a worldview that harmonizes the “physical” and the “mental.” This is often referred to as the mind-body problem. In A New Synthesis for Solving the Problem of Psychology: Addressing the Enlightenment Gap, Gregg characterizes this as the Enlightenment Gap (EG), and details why it gave rise to what he calls the “problem of psychology.” The EG refers to the ontological problem of placing mind in relationship to matter and the epistemological problem of integrating science with subjective and social forms of knowing.

Crucially, the EG has little or nothing to do with the brain, or the “physical mechanisms that give rise to subjective experience.” Rather, the problem is with the concepts of “mental” and “physical” and how they are related, as well as how we know about the world via science, subjectivity, or the social construction of reality.

In addition to this philosophical issue, there’s a more specific problem: How is it, exactly, that neurocognitive activity generates (or coherently relates to) subjective conscious experience in animals and humans? This is a question largely tackled by neuroscience and psychology/cognitive science. We refer to it as the neurocognitive engineering problem because it is more about mechanism. While the neurocognitive engineering problem is significant and in some ways mysterious, there are excellent reasons to consider it to be a different problem than the broader mind-body problem that can be well-characterized as the EG (see here for a richer analysis of the mind-body problem).

This brings us to one of the key points we make as part of EN: It’s essential to clearly distinguish between the philosophical mind-body problem, framed as the Enlightenment Gap, and the neurocognitive engineering problem, which relates directly to things like the neurocognitive correlates of conscious experience and the brain-consciousness mechanism question.

With this point clarified, we can now state why we disagree with the opening line that “we do not know what consciousness is.” From our perspective, grounded in EN, we believe both mind and consciousness can be clearly defined and understood in the context of the natural world.

We consider the philosophical problem to be akin to trying to solve a puzzle without the edges. EN is framed as a new approach precisely because it claims we haven’t effectively mapped nature or human consciousness. Once we do, much of the philosophical confusion surrounding consciousness disappears.

There are many ways that EN helps clarify our understanding of consciousness. Here we will offer just one example, which is detailed in this blog, called the Three Loops of Human Consciousness (see also Unpacking the Consciousness Suitcase). We think it is important to recognize that there are three broad definitions of consciousness, to label them, and to specify how they are related.

We call them Consciousness1, Consciousness2, and Consciousness3. Consciousness1, also known as “creature consciousness,” refers to basic arousal and functional awareness and responsivity. Consciousness2, often called subjective or phenomenal consciousness, involves the first-person, qualitative experience of being in the world. Consciousness3, often called self-consciousness, refers to explicit self-awareness of one’s being, usually through narrative language, introspection and/or self-report.

It is crucial that we keep each of these definitions in mind and consider how they relate to one another. EN summarizes their relation in humans with a simple mantra: Consciousness3 narrates Consciousness2, which models Consciousness1.
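For readers who think in code, the mantra can be pictured as three nested layers. The following is a purely illustrative sketch; the class and method names are our own hypothetical rendering of the three definitions, not part of EN’s formal apparatus.

```python
# Purely illustrative: the class and method names are our own hypothetical
# rendering of the mantra, not a formal part of EN.

class Consciousness1:
    """Creature consciousness: basic arousal, functional awareness, responsivity."""
    def respond(self, stimulus: str) -> str:
        # Functional awareness and responsivity; no inner experience implied.
        return f"orienting toward {stimulus}"

class Consciousness2:
    """Phenomenal consciousness: first-person, qualitative experience."""
    def __init__(self, creature: Consciousness1):
        self.creature = creature
    def model(self, stimulus: str) -> str:
        # Subjective experience models the creature-level response.
        return f"experiencing {self.creature.respond(stimulus)}"

class Consciousness3:
    """Self-consciousness: explicit, language-based self-awareness."""
    def __init__(self, experience: Consciousness2):
        self.experience = experience
    def narrate(self, stimulus: str) -> str:
        # Narration is layered over phenomenal experience.
        return f"I am aware of {self.experience.model(stimulus)}"

# Consciousness3 narrates Consciousness2, which models Consciousness1:
human = Consciousness3(Consciousness2(Consciousness1()))
print(human.narrate("a red tomato"))
# -> "I am aware of experiencing orienting toward a red tomato"
```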

Given all of this, we think it is misleading to say we do not know what consciousness is. With the right philosophical frame, we can clearly specify what we are talking about and how it fits into the naturalistic world as science understands it. At least, that is the case if one operates from an EN perspective.

Greene continues…

Each of us — I think but I do not know for sure — can attest to what the experience of consciousness is, what consciousness feels like, and some of us can further attest, through meditative practice or chemically induced modifications, to what altered states of consciousness are like.

But we are still very much in the dark regarding how it is that configurations of material particles that themselves do not seem to have any kind of inner world, somehow, in aggregate, generate inner worlds of phenomenological experience.

Now, look, some will consider this a mystery and say that I have phrased it with undue bias. They’ll say it is not that matter makes mind, but rather that mind makes matter, or, in another variation, mind transcends matter, or, in another variation, matter, even at the level of fundamental ingredients, does contain the seeds of consciousness, what some have called proto-consciousness.

In this opening summary, Greene provides a concise overview of the key issues in the philosophy of mind, and he gives a nod to the big four approaches in philosophy of mind (i.e., materialism, idealism, panpsychism, and dualism). Unsurprisingly given his background as a physicist, he frames the topic via the position of a materialist, asking how mere arrangements of atoms can lead to subjective experiences.

From an EN standpoint, it’s important to note the background assumption of two distinct worlds: the “physical world” of atoms and objects, and the “mental world” of subjective experiences. This separation stems from the Enlightenment Gap, which created a pervasive division in how we think about the world and our knowledge of it. It’s vital to recognize this, as it shapes the entire conversation before it even begins.

It is also the case that the distinction between the big-picture philosophical mind-body problem and the neurocognitive engineering problem is only implied rather than made explicit. It becomes clear in the dialogue, where Chalmers represents the philosophical side and Seth speaks more to the neurocognitive engineering problem. However, we think the conversation should begin with the distinction.

A Note About the Epistemology of Science and Subjectivity

One final issue before we move forward. This pertains to the epistemology of subjective experiences in humans. As is often the case when talking about consciousness, Greene points out that while he knows he’s conscious, he can’t be sure that others are.

Similar to the opening line, we find this bit of rhetoric revealing, and we want to look at it through the lens of the EG. The EG says we lack clarity about both the ontology of mind in relationship to matter and how to epistemologically relate scientific knowledge relative to subjective and social forms of knowing.

Greene’s comment about not being able to know for sure whether other humans are conscious allows us to highlight these elements. First, regarding the basic ontological issues, we need to define consciousness. Greene doesn’t, which creates confusion. When people interact with the world, they demonstrate awareness and responsivity; that is, they behaviorally demonstrate what we call Consciousness1. When they explain their actions, they show Consciousness3, or explicit self-awareness. So we can “know for sure” that other people are conscious in these senses.

The tricky part is Consciousness2, which refers to subjective experience — something we can’t directly observe in others from the outside. While it’s true we don’t fully understand the mechanisms of Consciousness2, to claim we “can’t know for sure” that other people have subjective experiences is quite a stretch.

David Chalmers made philosophical zombies famous (i.e., a thought experiment involving beings who behave exactly like us but lack all inner experience). The exercise can be useful in generating clarity about knowing via science versus knowing via subjective conscious experience. However, we also think it can be misleading.

In this context, let us ask: Can we “know for sure” that the Standard Model of Particle Physics is correct? Most people, including ourselves, can’t personally verify the equations behind the Standard Model because we lack the necessary expertise. Instead, we rely on physicists and mathematicians to share their knowledge, trusting their intersubjective communication. If they’re wrong or dishonest, our understanding is flawed.

This reliance on intersubjective report is true for virtually all of our knowledge, including scientific knowledge. We raise this issue here because there is a tendency to be particularly suspicious of subjective knowledge. This is due, in part, to the epistemological authority that we grant science.

According to EN, we need to recognize that there are fundamentally different epistemological frames for subjective perspectival knowledge and scientific propositional knowledge. And we need things like the Tree of Knowledge and iQuad Coin to frame the relation, which we will return to below (the Tree frames our epistemological position from natural science, the Coin from our unique, subjective perspectives on the world).

Opening Question about Whether AI Can Be Conscious

Greene then introduces David Chalmers and Anil Seth. The conversation starts with Greene asking them if they think AI can be conscious.

Both give reasonable answers. Chalmers says that he thinks it is quite possible that in the future we will be able to build a “silicon machine” that is conscious. He comments that the brain is a biological machine made up of neurons. We don’t currently know how it produces consciousness, but he thinks someday we might figure that out and, with that knowledge, we might uncover how to build machines capable of achieving the same “magic.”

Seth says it depends. Consistent with the points we are making above, he complains that we too often conflate intelligence with subjective conscious experience. He also argues that subjective conscious experience may not arise from computational processes alone and that the structural nature of biology in general, and the brain in particular, may be required.

We think greater clarity on this question could have been achieved with the Consciousness1,2,3 framework. For example, artificial intelligence systems clearly exhibit a kind of functional awareness and responsivity. That is what “intelligence” implies. In this sense, they have “artificial creature consciousness.” In addition, it is clear that some LLMs have some forms of self-referential capacities. Thus, they exhibit aspects of Consciousness3.

Of course, if the referent is Consciousness2, then it is clear that artificial intelligence systems do not have a subjective experience of being. Seth makes this particularly clear. Indeed, his point about biology highlights the need to get clear about the relationship between the functional aspects of consciousness and how they are realized, especially via the core structures that have evolved across natural history, such as brains.

Defining and Aligning Mind and Consciousness

EN extends our view of both the world and of consciousness. A key point it makes is that we need a richer vocabulary for both mind and consciousness and how they are related if we are going to solve the philosophical aspect of the hard problem (i.e., resolve the EG).

We have already shared the way we frame Consciousness1,2,3. We can now relate it to mind. EN uses the ToK/PTB to identify the Mind-Animal plane of existence. This plane is characterized by the sensorimotor loop in animals with brains and complex active bodies. The property here is mindedness. This concept is missing in our scientific vocabulary, much to our detriment.

As this blog on the layers of mindedness makes clear, we can divide mindedness into Mind1, Mind2, and Mind3. Mind1 refers to the functional awareness and responsivity in animals with brains and complex active bodies, Mind2 refers to subjective conscious experience in animals, and Mind3 refers to self-conscious justification processes in human persons.

Notice that these three layers of mindedness closely correspond to the Consciousness1,2,3 formulation. The difference is that in EN’s vocabulary, mind is anchored to the natural world via animals with brains and complex active bodies. Thus, Mind1 is how animals exhibit Consciousness1 properties, Mind2 is Consciousness2 in animals, and Mind3 is Consciousness3 in humans.

This means that consciousness and mind are related, but also different in that the former is more generalizable, and references the functional properties alone, whereas the latter is more anchored to structure and natural history.

We are not done. Using UTOK’s Map of Mind (MoM), we can obtain even greater specificity on the terrain. The MoM maps the territory of mindedness by highlighting three crucial divisions. First, it separates behaviors that take place inside the animal versus outside. This is the difference between neurocognitive activity and overt behavioral activity. Second, the MoM differentiates between the interior and exterior epistemological positions (i.e., the difference between first-person subjective phenomena and third-person objective behaviors). Third, the MoM differentiates between neurocognitive processing and sociolinguistic processing, which follows from the logic of the ToK’s planes of existence.

The result is that the MoM gives us five different domains: I) Mind1a is the domain of neurocognitive activity within the nervous system; II) Mind1b is the domain of overt minded behaviors that take place between the animal and environment; III) Mind2 is the field of subjective conscious experience from the interior epistemic position; IV) Mind3a refers to private, inner narration; and V) Mind3b refers to overt speech or verbal behavior.
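Read as a data structure, the five domains can be tabulated by two of the MoM’s divisions: where the behavior takes place and from which epistemic position it is accessible. Here is a sketch under our own labels (the attribute names are ours, not UTOK’s):

```python
# A hypothetical tabulation of the five MoM domains; the attribute names
# ("description", "location", "epistemic_position") are our labels, not UTOK's.
from enum import Enum

class MindDomain(Enum):
    MIND_1A = ("neurocognitive activity", "within the nervous system", "exterior")
    MIND_1B = ("overt minded behavior", "animal-environment relation", "exterior")
    MIND_2 = ("subjective conscious experience", "within the animal", "interior")
    MIND_3A = ("private inner narration", "within the person", "interior")
    MIND_3B = ("overt speech / verbal behavior", "person-environment relation", "exterior")

    def __init__(self, description, location, epistemic_position):
        self.description = description
        self.location = location
        self.epistemic_position = epistemic_position

# The domains accessible to standard third-person science:
print([d.name for d in MindDomain if d.epistemic_position == "exterior"])
# -> ['MIND_1A', 'MIND_1B', 'MIND_3B']
```

Tabulating the domains this way previews a point made below: Mind2 and Mind3a sit on the interior side of the epistemological divide, which is precisely what makes them “hard” for standard science.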

A summary point from our perspective is that the current conceptual grammar and vocabulary for consciousness, mind, cognition, brain, behavior, self, psyche, and psychology are flawed, confusing and inadequate. This is because of the Enlightenment Gap. We need a fresh start. A new operating manual for our terms. That EN/UTOK affords this is made clear in the Now UTOKing Series: Learning the Language, which is a blog/video series that defines all these terms and more.

The Hard Problem of Consciousness

After the question about AI, the conversation shifts, and Greene asks Chalmers about his formulation of the hard problem. Here is a portion of Chalmers’ answer:

I should say this was never an original observation. I think you know, everybody knew in their bones, that consciousness posed a hard problem. This label just kind of crystallizes the problem and makes it a bit harder to avoid, but, you know, you go to a conference on consciousness, and you find that people talk about many different things. Sometimes it’s just used for the difference between being asleep and being awake, sometimes it’s used for the ability to control your behavior in certain considered ways, sometimes it’s used for the ability to report on certain internal states, but I think, where consciousness is concerned, those things are actually what I call the easy problems.

This excerpt hits on the relevant issues. Using the grammar of EN, we can see clearly that Chalmers is marking off Consciousness2 from Consciousness1 (i.e., being asleep versus awake, functional control of behavior) and Consciousness3 (i.e., the ability to report on certain internal states).

We can also align this with Mind1,2,3. Consciousness2 corresponds to Mind2 in animals and humans. It is the domain of mindedness that resides across the epistemological gap. The other domains are accessible via behavior and the brain. This means that the domains of Mind1 and Mind3b are accessible via the methods and epistemology of standard science. That is, they are readily grounded in “objective” data, and this is part of what makes them “easy” to study from the traditional lens of science. Thus, the MoM frame aligns very well with Chalmers’ answer.

We believe that the nuanced, descriptive vocabulary and grammar afforded by EN and the MoM enable us to keep our referents clear. Consider, for example, what changes when we frame the hard problem of consciousness as the hard problem of Mind2. Framed this way, we have boxed the problem in to animals with brains and complex active bodies. In other words, it zooms in on the neurocognitive engineering aspect of the problem.

In contrast, Consciousness2 is a broader question that relates more to the philosophical question of how we understand the properties of consciousness in relation to the “physical” universe, as well as to questions about the future of AI.

Finally, the concept of mindedness also helps us see the kind of consciousness that constitutes Mind2 in animals and humans. With this concept in place, the domain of Mind2 is a neurocognitive virtual world model of mindedness experienced by an animal subject. And from an EN vantage point, this is a clear answer for what is meant in most contexts by “consciousness.”

Mary the Scientist and What and How We Know about Color

After Chalmers discusses the hard problem, the conversation turns to one of the classic thought experiments that philosophers have discussed extensively over the years. First put forth by Frank Jackson, this is the story of Mary, the brilliant neuroscientist who, at some point in the far future, has figured out everything there is to know scientifically about color vision. However, Mary lives in a black and white room and has never seen color.

Here is the transcript of the punchline of the thought experiment and the question it raises:

One day Mary is allowed to leave her room and the very first thing she sees is a plump red tomato. Now, here’s the question: From this experience of the color red, will Mary learn anything new? Will she shrug and just move on, or will she be surprised or thrilled or moved or gain some new insight through this actual experience of color? And, if she does, what does that tell us about the limits of a purely physical description of the brain and consciousness?

Chalmers finds value in the thought experiment, saying:

I like the thought experiment. I mean, this thought experiment has been used for many different purposes, but I think one thing it does wonderfully is illustrate the gap; a certain kind of gap between our understanding of the objective world and our understanding of consciousness.

Seth is a bit more critical of the value of the thought experiment:

Dave is right. There is a gap here but for me it’s not a surprising gap. You knowing about the details of how something works doesn’t necessarily give you the experience of being that thing; like, if I know everything about flying, I don’t become able to fly. So I imagine that if Mary did know everything there is to know and she goes out of the door, she might say: Oh that’s exactly how I would expect it, so she would potentially shrug, but of course she would still learn something new, because she would have an experience she hasn’t had before.

The Different Kinds of Knowing (Keeping the 4Ps in Mind)

Our take on this thought experiment is similar to Seth’s. In addition to clarifying our terminology about mind and consciousness, EN comes with clarification regarding the nature of cognition and knowing. This is via John’s 4P/3R formulation of cognition and knowing. It clarifies the nature of knowing by explicitly dividing it up into the four domains of propositional, perspectival, participatory, and procedural.

From our vantage point, the basic confusion here comes from an idealization of the kind of knowledge that science can achieve. This exaggeration of propositional knowledge has been made especially clear by Iain McGilchrist in his work on the different hemispheres, which overlaps somewhat with this taxonomy of knowing; both point out that propositional knowledge does not translate into knowing everything. In short, from an EN vantage point, a broader, clearer taxonomy of human knowing would have enabled the debate about Mary to be resolved quickly.
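As a toy illustration of why the 4P taxonomy dissolves the puzzle, consider representing Mary’s knowledge as typed stores. The kinds of knowing come from John’s framework, but the dictionary model of Mary’s knowledge below is our own illustrative assumption, not a formal model of 4P/3R.

```python
# A toy rendering of the 4P taxonomy applied to Mary; the kinds of knowing are
# John's, but the dictionary model of Mary's knowledge is our own assumption.
from enum import Enum

class Knowing(Enum):
    PROPOSITIONAL = "knowing that"             # facts, theories, equations
    PROCEDURAL = "knowing how"                 # skills and abilities
    PERSPECTIVAL = "knowing what it is like"   # salience from a point of view
    PARTICIPATORY = "knowing by being in it"   # agent-arena attunement

# Before leaving the room: complete propositional knowledge of color vision.
mary = {kind: set() for kind in Knowing}
mary[Knowing.PROPOSITIONAL].add("every scientific fact about color vision")

def step_outside(knowledge):
    # Seeing the tomato adds perspectival knowledge without adding
    # a single new proposition to the propositional store.
    knowledge[Knowing.PERSPECTIVAL].add("what red looks like from here")

step_outside(mary)
print(mary[Knowing.PERSPECTIVAL])   # Mary learns something new...
print(mary[Knowing.PROPOSITIONAL])  # ...while her science is unchanged.
```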

The Real Problem of Consciousness

In an Aeon article in 2016, Seth offered a different way to frame the problem of consciousness, one that he claimed sits “between” the easy and hard problems that Chalmers had distinguished more than 20 years prior.

“The real problem,” wrote Seth, “is how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).”

Greene brought this up and asked Seth about it. Seth jokes that the frame was really designed just to annoy David, but then goes on to argue that it was introduced to keep people focused on the fact that progress was being made. He then discusses the analogy with the problem of life 150 years ago. At that time, there was no good way to frame how life could come about, and so folks posited things like “élan vital” as a novel force. But, over time, the chemical mechanisms and related processes were found, and life was transformed from a mystery into a problem within the realm of science.

Chalmers responded that he believes the analogy is only partial. He expressed support for the progress being made on things like the neural correlates of consciousness, work that is getting more and more differentiated. However, he argued the parallel holds only partially because, when we are dealing with life, we are still dealing with objective behaviors. The real trick of the hard problem is the problem of subjectivity. Here, we can see that the hard problem is related to issues of mechanism, but also to issues of our philosophical understanding. The subjective versus the objective is an epistemological distinction.

Greene then asks Seth about his approach to understanding consciousness. After giving a brief nod to global neuronal workspace and integrated information theory, Seth replies:

The theory that I tend to favor is a collection of ideas, really, but I just put it in a particular way. [The key] idea is that the brain is a prediction machine, so, arguably it’s not really a theory of consciousness at all because it does not say, like, these are the sufficient conditions, and then, boom, consciousness happens…The idea of the brain as a prediction machine goes way back, and it’s really this idea that everything the brain does pretty much involves it making predictions about the causes of sensory signals, and then using sensory signals to update these predictions.

When it comes to consciousness, the idea is that everything that we’re conscious of, whether it’s an experience of the world, whether it’s an experience of the self, whether it’s an experience of free will or volition, is a kind of perception. It’s the brain trying to make sense of a situation in some way, and in that framing every kind of conscious experience can be thought of as underpinned by this process of the brain making and updating predictions, but in different ways and in different contexts.

The sort of slogan for this is that perceptual experience is a kind of controlled hallucination, that we don’t read the world out objectively, but creatively construct it, then, the way I take it, is that doesn’t just apply to the world around us. It applies to the experience of being a self within that world; it applies to emotion; it applies to free will. Ultimately, it’s all about physiological regulation of the body. The reason brains do this prediction is because prediction allows control, and brains evolved, I think, fundamentally to control, regulate, keep the body alive, and that kind of leads to, if you put on this thread long enough, you get to this intimate connection between how consciousness seems to us and the fact that we are living, breathing, energy consuming creatures.

EN shares significant overlap with Seth’s view. First, he acknowledges an integrative bent, pointing out the fact that various theories are about different aspects of cognition and consciousness. Second, he makes it clear we need to deeply consider evolution and biology. Third, he emphasizes predictive processing as a core, organizing principle. Fourth, he characterizes consciousness as emerging as a kind of virtual representation that is modeling the animal-environment relationship (what he calls a controlled hallucination).
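To see the core loop Seth is describing, here is a minimal, generic predictive-processing sketch: the “brain” predicts the incoming sensory signal, measures the prediction error, and updates its internal estimate. The toy numbers and the simple error-driven update rule are our illustrative assumptions, not Seth’s actual models.

```python
# A minimal, generic predictive-processing loop; the toy numbers and the
# simple error-driven update rule are our illustrative assumptions.
import random

def sense(hidden_cause: float) -> float:
    """A noisy sensory signal generated by a hidden cause in the world."""
    return hidden_cause + random.gauss(0.0, 0.5)

hidden_cause = 3.0    # the actual state of the world (never seen directly)
estimate = 0.0        # the brain's current best guess about that cause
learning_rate = 0.1   # how strongly prediction errors revise the guess

for _ in range(200):
    prediction = estimate                      # predict the incoming signal
    error = sense(hidden_cause) - prediction   # compute the prediction error
    estimate += learning_rate * error          # update the model, not the world

print(round(estimate, 1))  # hovers near 3.0: perception as "controlled hallucination"
```

The design point the sketch makes is that the agent never reads the world directly; it only ever refines a prediction against error signals, which is the sense in which perceptual experience is a construction rather than a readout.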

There are, however, also some important differences. First, EN makes clear that Seth is concerned primarily with Mind2 rather than consciousness in general. Second, via John’s work, EN extends the concept of predictive processing to recursive relevance realization. This incorporates all the key insights of predictive processing but does so in a way that provides a richer and clearer delineation, one that helps us understand human cognition and phenomenology.

In addition, recursive relevance realization is a general model of cognition that then can be extended into a model of subjective conscious experience. Specifically, as we discuss in Untangling the World Knot, conscious contents emerge via a focal integration of recursive relevance realization.

This frame sets the stage for EN’s clarification of the nature of Mind2: it is a field that consists of adverbial qualia (the relevance-realizing frame that focally indexes attention and consists of the hereness-nowness-togetherness that binds and broadcasts experience), adjectival qualia (the direct sensory-perceptual properties, like redness), and valence qualia (the positive or negative valuations attached to the experience). This shows that EN brings additional concepts that more richly frame the distinction between Mind1a processes (i.e., nonconscious neurocognitive activity) and Mind2. It also gives us a much more detailed mapping of what might be called the subjective qualitative meaning-making system (i.e., the field of Mind2; for more on this, see here).
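As a shorthand, the three qualia types can be pictured as the parts of a single experienced moment. This sketch is our own hypothetical rendering of the verbal definitions above, not EN’s formal notation.

```python
# A hypothetical data sketch of the three qualia types; the field names are
# our shorthand for the verbal definitions above, not EN's formal notation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AdverbialQualia:
    """The hereness-nowness-togetherness frame that binds and broadcasts."""
    here: str = "this room"
    now: str = "this moment"
    together: bool = True  # binds the contents below into one experience

@dataclass
class ExperiencedMoment:
    frame: AdverbialQualia                             # adverbial: indexes attention
    contents: List[str] = field(default_factory=list)  # adjectival: e.g., "redness"
    valence: float = 0.0                               # valence: felt good/bad value

tomato = ExperiencedMoment(frame=AdverbialQualia(),
                           contents=["redness", "roundness"],
                           valence=0.8)  # appetizing, hence positive valence
```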

Another difference between EN and the discussion so far is that there has been little to no consideration of Mind2 from a Mind3 vantage point. Via UTOK’s Justification Systems Theory, EN provides a clear theory of Mind3. Since Mind3 is the access pathway to Mind2 in humans, this is another crucial point that seems missing from the account. Put simply, if the focus is on human consciousness, EN can box Mind2 in with Mind1 and Mind3.

Another point of difference is that Seth often describes his position as being akin to John Searle, who advocated for a biological naturalistic approach to consciousness. At one point, for example, Seth says that “consciousness is another kind of biological property.”

With its map of the ontological layers and levels of nature provided by the ToK System, EN challenges this claim. Mind2 is, according to the grammar of EN, a psychological property. That is, it emerges and exists as part of the Mind-Animal plane of existence, and that corresponds, scientifically, to the domain of basic psychology which is framed by mental behaviors characterized by the property of mindedness. It follows from this that, while it is not wrong to call it a biological property, it is misleading. In short, EN helps us see why there is an ontologically specifiable psychological layer in nature, just like there are biological and material layers.

Possible Solutions to the Hard Problem

The conversation then turns back to Chalmers, who offers his reflections on possible solutions to the hard problem. He acknowledges that he likes predictive processing models but sees them mostly as solving the easy problems pertaining to functional behavior and neurocognitive processing. He says that the solution will involve going deeper than biology.

I think biology is somehow a little bit too high level, in a way I suspect it’s going to connect to something like that: If you look at the correlations between consciousness and the brain, it’s really the informational properties of the brain that matter, and not ultimately the biological properties. What I’m really looking for is some kind of beautiful mathematical equation that connects information and computation in the brain to consciousness. Integrated information theory does some of that, but I’m very skeptical about that for some other reasons, but it’s at least trying to do the right kind of thing and coming up with a fundamental principle.

Greene then asks if Chalmers embraces some kind of dualism, the idea being that the universe is fundamentally made up of both matter and consciousness. Chalmers goes on:

I’m open to that kind of view, and in philosophy we sometimes talk about property dualism because when you say dualism, people a lot of the time think about a soul, some non-physical entity that got attached to our body and is hanging out with our brain and interacting, and then continues living after the body dies.

That’s not the kind of thing I have in mind, but the idea is rather that there could be fundamental properties of the universe, beyond space and time and mass and charge, or whatever the latest you know fundamental physical theory says. If it turns out that existing properties don’t explain consciousness, then we should be open to the idea that consciousness is itself a fundamental.

Importantly, there might be fundamental laws, I call them psychophysical laws, connecting physical processes and consciousness. These do not need to be unscientific or spooky; it’s just one way things could go.

Chalmers’ reflections here align with the core philosophical structure of UTOK. Specifically, UTOK argues that the “language of science” and the “language of the subject” have different grammars and that we need new frameworks for bridging them.

UTOK gives us the Tree of Knowledge System to see the world via the standard scientific view. That is, it frames the world as an unfolding set of behaviors that take place at different levels of complexification. This is what Extended Naturalism frames as extended emergence, which in turn gives rise to the ontological layers, planes, and levels of behavioral complexification that are mapped by UTOK’s ToK System and Periodic Table of Behavior. This view is one that is framed as a third person, exterior behavioral view that is available to trained observers in a way that affords an “intersubjective objective” large-scale justification system.

In contrast, the iQuad Coin is structured to start with the phenomenological perspective of the human subject in the world. It has a specific architecture that allows it to bridge the unique, idiographic perspective of each human subject (e.g., the lived perspective of Brian Greene, David Chalmers, and Anil Seth as they report on their experience) to the language of science, as framed by the ToK System.

When we talk about “marrying the Coin to the Tree” in UTOK, it is very much about bridging the two sides of the “informational coin,” namely the world as it looks via the lens of science and objective behavior and how it looks via the perspective of the subject and one’s own phenomenology. In short, UTOK has a frame built in for what Chalmers is pointing to via a kind of informational property dualism.

This image shows how UTOK’s iQuad Coin and Tree of Knowledge map the territory Chalmers is talking about. The iQuad Coin gives the interior epistemic frame of the human subject, whereas the ToK provides the third-person, exterior scientific view. The picture of David Chalmers on the Coin captures the fact that he experiences the world from the subjective vantage point.

Chalmers then reflects on the possibility of panpsychism:

Another way things could go is that it could turn out there’s some element of consciousness at the very basis of matter. It’s a view that you mentioned, the view people call panpsychism, and that’s extremely speculative, but it’s a view I take seriously. If someone comes up with the scientific form of panpsychism, then I think we should take that seriously, so the particles themselves would have potentially some kind of seed of inner experience, and when you put enough of them together in the right way, the aggregate yields the conscious experience. The real problem for this view is — some people think this is loopy or crazy — but for me the biggest problem for this view is precisely that aggregation. How do you take a bunch of conscious particles and put them together and get the kind of unified conscious experience that I’m having right now? That’s called the combination problem, and nobody has a good solution to it. But if somebody solves that problem then that’s instantly a contender for solving the hard problem.

Extended Naturalism extends our conception of the material world “upward,” by emphasizing that there are Life-Organism, Mind-Animal, and Culture-Person planes of existence that operate at higher levels of organization, mediated by novel information processing and communication networks (e.g., genes-cells, neuronal nets-animals, propositions-persons).

In accordance with modern physics, it also extends the Matter-Object plane “downward,” pointing to the existence of a nonlocal, Energy-Information Implicate Order. The evidence for this is found in Big Bang cosmology (derived from general relativity and much empirical observation), in quantum field theory, and especially in quantum information theory.

Why is this relevant? Because it suggests that there is a continuity of complexified information that extends from the foundational “energy-information” layer all the way through the stack to human consciousness. This can be interpreted as a kind of “proto-panpsychism” in that it identifies energy and information as continuous across the stack. In so doing, UTOK makes the point that our modern understanding of the fundaments of the physical world, as energy and information, is much more aligned with “the mental” than the conception of the world given by Newtonian mechanics (i.e., atoms and the void).

Thus, while UTOK does not embrace any kind of “substance” panpsychism, there is an energy-informational continuity that overlaps with some weak versions of panpsychism, and EN’s model is broadly consistent with a kind of integrated information model, as discussed in A New Synthesis.

At this point, Seth chimes in with an additional hurdle for panpsychist views, one we concur with: as they are presently formulated, they are not empirically testable:

I think there are other problems with it as well. I think all the versions of this idea of panpsychism that I’ve encountered face the problem that, not only is it not testable in itself, but it doesn’t lead to testable predictions. That doesn’t mean it’s wrong; it just means that it’s very hard as a scientist to know what to do.

Brian Greene then wonders about consciousness as a “fundamental quality” that cannot be specified. He likens it to some fundamental properties in physics.

If you ask me what you mean by the mass of a particle, I’d actually tell you functionally what the mass does: how it responds to gravity, how it responds to forces. If you said to me, what do you mean by the electric charge of a particle, I kind of play the same game. I’d say, well, in an electric field it will do this or that based upon the charge, but I would be unable to tell you what mass is, and what charge is; they are primitive fundamentals that exist in the universe.

I’m willing to say okay, they exist by fiat. I know they’re there and going forward it could be that one day we simply say consciousness is just this fundamental quality of reality, and it doesn’t have a deeper explanation, and you take it as a given and you go forward.

Chalmers points out that there is a “hard problem of matter” and shows that there are some analogous difficulties when we try to say what aspects of matter truly are:

This is great because the Norwegian philosopher Hedda Hassel Mørch has called this the hard problem of matter. Like you say, just like we don’t know what consciousness is, we actually don’t know what mass is. You know, physics tells us what mass does and the equations that are involved, but not what mass actually is. What is the intrinsic nature of mass, or of charge, or maybe even of space and time? Philosophers and scientists argue about this all the time. Is the universe just mathematical, is it structural, and so on.

I mean a lot of people I think want to say there is no intrinsic nature of mass, that’s just a chimera, you’re looking for what mass does, that’s all there is.

And so, somebody could take that view for consciousness too. All there is to consciousness is what it does. The trouble is, in the case of consciousness, what it does, well, that’s just the easy problem, and it leaves out the central datum of subjective experience. If somebody finds a way to take subjective experience that seems intrinsic and just turn that into a problem about what consciousness does, then that might be an avenue to a solution. But, so far, anytime anyone does that, which happens a lot, it just looks like a bait and switch. You’ve moved from talking about consciousness to talking about behavior.

Chalmers is making a crucial point here that EN/UTOK also makes. Specifically, Chalmers is highlighting that the language of science IS the language of behavior from a third-person point of view. This, according to UTOK, is the fundamental structure of science as a knowledge system. Indeed, UTOK argues that behavior is the core metaphysical, ontological, and epistemological concept that defines natural science.

Natural science is fundamentally defined by the concepts and categories of behavior, which are entities, fields, and change. Natural science is framed by behavior ontologically (i.e., science maps patterns of behavior at various layers and levels), and science is defined by behavior epistemologically (i.e., the systematic third person empirical observation and quantification of behavior defines the methods of science). Science is about the frequency of observed behavior patterns from the exterior view.

Consciousness2, in contrast, is about the frequency of observed behaviors from the interior view. UTOK’s ToK System frames the former, and the iQuad Coin the latter. This is why UTOK is such a different system. It comes with structures that bridge the language of exterior behavioral science and language of the interior phenomenal subject.

AI and Consciousness

Time is beginning to become an issue, and Greene returns to one of the major questions at hand pertaining to AI:

I do want to get to this issue of AI systems, and so, you know, we’re now in a realm where there are computational systems that are mimicking certain aspects of behavior. They’re able to respond to certain prompts in a way that ordinarily we would have thought only an intelligent human being could do, and of course the question arises whether these systems are conscious.

Anil Seth replies:

This is another very hard problem: how we test for consciousness in things that are not us. We face this even with other human beings. I mean, it’s often said that I only know for sure that I’m conscious; it’s just an inference that you are, or that any of you are.

The further we get away from the benchmark of an intact human being, the harder it gets. Even with human patients suffering brain injury, it is very difficult to know whether they’re conscious, because whether they are or not can be dissociated from their behavior and their ability to tell you that they’re conscious. And then, the further we get, we have huge debates about non-human animals. There was a recent New York declaration on animal consciousness trying to just put the idea in people’s minds that many non-human animals might be conscious.

When it comes to computers and AI, it’s so much harder. I think here we’re misled by our psychological biases. Now, we as humans have got a pretty terrible track record of withdrawing or withholding moral consideration from things that are not like us. Part of the reason we do this is because they don’t seem sufficiently similar to us in ways that we think matter, and the ways that we think matter tend to be things that we think make us special, like language and intelligence.

Of course, it’s questionable how intelligent we are as a species but we tend to elevate ourselves and think, okay, no language, no consciousness. Descartes did something like this many centuries ago, so we might make false negatives.

I think we’re in almost exactly the opposite situation. We have these language models that exercise our biases, in that they speak to us and they seem to be intelligent in some ways, and clearly there is something interesting going on there, but because they’re similar to us in the ways that we elevate and tend to prioritize, we project qualities into them that they probably don’t have, like thinking, understanding and, of course, consciousness. Whereas they’re very different to us in other ways, and it’s those other ways in which they’re very different that might actually matter for consciousness.

From an EN perspective, this aspect of the discussion is hampered by the failure to have an effective vocabulary for consciousness and mind/mindedness.

First, as A New Synthesis documents, consciousness for Descartes was Consciousness3/Mind3. Indeed, precisely because we lacked the vocabulary for Consciousness1,2,3/Mind1,2,3, it is unclear what Descartes was actually saying about animal consciousness/mind.

A New Synthesis argues that a close read of both Descartes and Romanes, who argued strongly for animal consciousness, actually paints a similar picture. Both clearly agreed that first there is Mind1/Consciousness1, then there is Mind2/Consciousness2, and finally, Mind3/Consciousness3. The issue was that, for Romanes, the center of consciousness/mind was Consciousness2/Mind2, whereas for Descartes it was Consciousness3/Mind3. Once this point is seen clearly, then we can understand why Romanes says animals are definitely conscious, whereas Descartes says only humans are (fully) conscious.

Given that, in science, our definitional focus has shifted to Mind2/Consciousness2 over the years, especially with the emphasis on the hard problem, we can understand why we now consider consciousness to be much more prevalent in the animal kingdom.

This frame also helps with artificial intelligence. Clearly, artificial intelligence systems are not and cannot be minded, as defined by EN. Mindedness requires a brain and a complex active body. With AI we are seeing aspects of Consciousness1 and Consciousness3, with no evidence of Consciousness2. A clearer vocabulary could have cut through all of this. It can also specify the unique nature of human consciousness, which lies in its Mind3/Consciousness3 properties.

EN’s distinction between Consciousness and Mind clarifies the difference between function alone and the structure-function connection. That is, Mind, by definition, requires a brain and thus is embedded in a substrate, whereas Consciousness is defined simply by the functional properties. The issue of substrate emerges as the conversation evolves. In the next section, Chalmers brings up the work of Susan Schneider, who developed the thought experiment of gradually swapping out neurons for silicon chips. The idea is that, maybe, over time, we could replace someone’s brain with a computer, but their consciousness would remain. That would be an example of retaining consciousness while no longer being minded in the structural sense.

It is an interesting idea but, as Seth is quick to point out, highly speculative and potentially misleading. There is good reason to believe that it would be impossible to accomplish such a feat, given the remarkable structural differences between neurons and silicon chips. The conversation then briefly shifts to the uniqueness of human consciousness and whether it should serve as the model of consciousness. Seth rightly reiterates that there is very good reason to believe that consciousness manifests in many varied forms across the animal kingdom and that we have tended to be egocentric in placing our consciousness in a unique, apex spot.

This is accurate, but it also needs to be qualified from an EN perspective. Mind3/Consciousness3 is unique in humans in the degree to which it is developed. Part of that, of course, is due to the emergence of the Culture-Person plane of existence. Indeed, precisely because of (a) the ambiguity about what consciousness is, (b) the lack of clarity about Mind2/Consciousness2 and Mind3/Consciousness3, and (c) the fact that so few people know about or understand UTOK’s Justification Systems Theory, there was very little explicit discussion of the uniqueness of human consciousness in the conversation.

Given that the title of the talk was “What Creates Consciousness?”, it is interesting that there is a clear answer to the question: What creates Mind3/Consciousness3 in humans? The answer is the evolution of propositional language and the question-answer dynamics that drive the existence of the Culture-Person plane. But nothing even remotely close to this was mentioned, other than vague references to human language.

Finally, we can use the logic of the ToK System to understand why AI is such an important event. Specifically, it represents a new information processing system and communication network that is interfacing with us at the Culture-Person plane of existence and setting the stage for a potential shift through the 5th joint point into the Digital Global Meta-Cultural dimension.

The conversation ended with reflections on consciousness and ethics. The reflections emphasized the central role conscious experience plays in moral/ethical consideration, thus pointing to why questions of consciousness quickly pull in value-based considerations, such as the way we treat animals and how we might radically shift our perceptions of AI if we deemed them conscious at some point.

Conclusion

The conversation, What Creates Consciousness?, enabled us to spend some time with leading thinkers in the area. The reflections were useful and insightful. However, we think the exchanges represent where our current knowledge is: still floundering in the Enlightenment Gap.

We can do better. EN extends our view of the natural world and our understanding of human psychology and phenomenology, and how all these elements are related. It also clarifies the relationship between a scientific and subjective perspective on the world. And it provides a vocabulary for thinking about mind and consciousness that clarifies many key points. The bottom line is that we now have a philosophy and metapsychology that can address the mind-body problem and thus address a major aspect of the hard problem.

*Discourse markers and punctuation were altered in the transcripts to improve readability, without modifying the content.

--


Professor Henriques is a scholar, clinician and theorist at James Madison University.