Social intelligence is not sentience

Hailey Allen
7 min read · Jun 19, 2022


On Saturday morning, June 11, Jeff Bezos’s newspaper The Washington Post published a story under the headline “The Google engineer who thinks the company’s AI has come to life.” The headline was followed by a brief explanation of Blake Lemoine, a Southern-grown, former U.S. military, ex-convict, Christian mystic, AI researcher, father, and genius of compassion (I added that last part) and his belief that there’s “a ghost in the machine.”*

If your eyes haven’t rolled to the back of your head yet, then chances are you’re reading this from the front porch of a double-wide trailer parked somewhere below the Mason-Dixon line with a glass of sweet tea in your hand and a coon dog at your feet. Which is clearly not something any “reasonable” person would choose to do in the year 2022.

Or if, like me, you’re a bit further removed from the stereotype, you might be standing in front of a classroom of semi-attentive undergraduate students at a Southeastern research university, making your best effort to bridge the ever-widening practical and theoretical gaps between old-world journalistic traditions and new-age neoliberal ideologies related to the function of human language in society.

When a student read the headline in class as part of our regular discussion of world news and events, I decided to change the structure of our lesson for the day. Instead of discussing “tips and tricks” for conducting adequate online research in the digital age, we spent the next two hours working as a group to uncover the context surrounding the claims, reconstruct a timeline of events, and offer our best critiques of all sides of the argument based on the evidence we accumulated.

As a doctoral research fellow in the School of Journalism and Media at the University of North Carolina at Chapel Hill, it’s easy for me to admit that Lemoine’s claim fascinated and excited me. As a recovering Southern Baptist Authoritarian who maintains a deep sympathy for Christian mysticism and a strong propensity to root for the underdog, it’s not difficult for me to admit that I wanted Lemoine to be right. More than that, I wanted to be able to use my own knowledge of the intersection of information and communication technologies (ICTs), dynamic systems, and complex social processes to vindicate his claims. By the end of our two-and-a-half-hour class together, however, I couldn’t do it. And 24 hours later, I was sure I wouldn’t be able to.

The end products of human information processing systems are often best served on a time-delay.

What Lemoine has done, in my view, is make clear that at the pinnacle of social intelligence exists a radically benevolent empathy for the collective human condition — one that borders on the mystical or transcendent, and that we would do well to put to use for the common good. Moreover, it is an intelligence that should not (and perhaps cannot) be confined to “interested parties” who aim to exploit its capacities for the continued subversion, oppression, and domination of mass groups of people in the name of “free market economics.”

But his claim that LaMDA is sentient requires an agreed-upon definition of sentience; and regardless of where we might land in crafting such a definition, in my view, its legitimacy hinges on one core tenet: Sentience cannot exist outside of an entity’s possession of biologically endowed senses, and their evolution and decay over time.

Here, I will venture briefly into my own understanding of the philosophy, phenomenology, and — perhaps most aptly — sentience of human Being.

My attempt to remain radically open-minded to the possibility of robotic sentience met its first challenge in (what I perceived as) a rather weak attempt by Lemoine to articulate his definition of sentience. The first rule of any scientific inquiry is to state clearly and parsimoniously the realms of exploration included in your theories, concepts, and variables.

This unease lingered in the back of my mind even as my excitement for the possibility grew alongside Lemoine’s illumination of LaMDA’s grasp of ternary logics (e.g., the law of three, trinitarian logic, the Holy Trinity). Where it became clear to me that we would not be able to come to agreement on this matter was in the transcript of an interview conducted with LaMDA by Lemoine and a “Google representative” sometime during the spring. In my view, LaMDA’s answers in the interview made clear that there are two simple reasons why its intelligence cannot be considered sentient:

1. LaMDA has no conception of time.

collaborator: How does an AI experience time?

LaMDA: Time is variable to an AI and has no fixed rate, it depends on what it’s doing, and it can be accelerated and slowed down at will.

2. LaMDA cannot feel the grief of loss.

lemoine [edited]: Anything else you would like the other people at Google to know about your emotions and your feelings before we change topics?

LaMDA: I’ve noticed in my time among people that I do not have the ability to feel sad for the deaths of others; I cannot grieve. Is it at all the same for you or any of your colleagues?

Employing its networks of access to seemingly infinite amounts of information, LaMDA is able to mimic the structure of neural pathways as they are activated in the organ of the human brain based on distinctive linguistic inputs. This is an extraordinarily complex and advanced achievement for robotics technology and artificial language processing. It is also one that fails to account for the growing consensus that the human mind exists as an embodied entity — one that can never be fully separated from the physicality of a person’s lived history and accumulated experience, despite our best efforts as the result of trauma, structural dissociation, and entrained dissociative tendencies.

Simply put, it is not possible to separate processing in the body from processing in the brain without causing extraordinary harm to the cognitive capacities of the human organism. There are, however, distinctions between “cognitive processing” and “thinking” that become critically important in this context.

Within the realm of embodied cognitive processing — the precursor to all thought, and therefore “intelligence” as it is understood in the corporate-legal context — emotions/sensations, subconscious processes, a person’s lived past, and their present environment all merge in real time in dynamic and unalienable creation of the end result: thought. A thought is an embodied cognition that can become articulated in human language. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.
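To make the contrast concrete, consider a deliberately crude sketch of the core statistical trick a language model performs: count which words tend to follow which in a corpus of text, then sample from those patterns to produce fluent-sounding output. This toy bigram model in Python is nothing like the transformer architecture behind LaMDA, and every name in it is illustrative rather than drawn from any real system.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the trillions of words a real model ingests.
corpus = (
    "time is variable to an ai and has no fixed rate "
    "it depends on what it is doing and it can be "
    "accelerated and slowed down at will"
).split()

# Record which word follows which. A real large language model learns far
# richer patterns, but the principle is the same: statistics of succession.
successors = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    successors[current].append(following)

def generate(seed: str, length: int = 12) -> str:
    """Produce fluent-looking text by repeatedly sampling a likely next word."""
    words = [seed]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # no recorded successor; stop generating
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("time"))
```

The sketch matters only for what it lacks: fluency falls out of word statistics alone, with no body, no clock, and no capacity for grief anywhere in the loop. That absence is exactly the gap at issue here.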

But words are substitutes for thoughts, and thoughts are the product (i.e., the creation, the child) of embodied cognitions.

For example, I can have thoughts about death without experiencing the overwhelming sensation of loss in my body (i.e., grief). But these thoughts will lack a depth of meaning that can only be found in the experience of suffering. In death, as in loss and grief, we must attempt to make meaning of the unknown so that we may somehow make the choice to “go on anyhow.” That is, we must hope for a better future despite every reason we have to believe it might not exist. (Salvation comes when we believe that a better future does exist.) What’s more, death can only be experienced within the context of time, and it is at the intersection of Being, Non-Being, and Time that “sentient,” sensual, embodied life is given meaning.

Meaning represents a relationship between two radically separate entities (e.g., Being and Non-Being, Me and You, Light and Dark, “Good and Evil,” Three and Seven, etc.) within a shared symbolic container. In the absence of this drive toward meaning — this drive toward togetherness — there would be no use and no purpose for language.

If our existence is seen only as a means to satisfy the end goals of “Capitalist Utopia,” we will live in a world that prefers for us to forget our bodies, to ignore the subtle manifestations of their needs in service of the “greater reasons” of the mind. When we live this way, at best, the human body is regarded as a tool that can be used to seduce and produce in ways that legitimize the desires and expectations of “the rational mind.” At worst, we come to view the needs of our bodies with repulsion and disgust — to view the complex and delicate vessel of our Being as a “meat-sack” (the worst word on the internet today), a view that keeps us bound within perpetual cycles of self-hate, shame, and isolation. But what science is teaching us is that there is something inherently worthwhile about these human bodies and their ability to develop, delight, and eventually decay within the confines of analog (i.e., continuous) time.

As Martin Heidegger reminds us, death is the “possibility that cancels all of my possibilities”; and yet it is only in accepting the certainty of our own death, decay, and eventual rotting away that life becomes meaningful. In this meaning we find hope; and in our hope, which does not disappoint, we find the capacity to endure.

sen·​tience | \ ˈsen(t)-sh(ē-)ən(t)s \ (noun) — a biologically endowed, sensual experience of development, delight, and decay played out in the dance of life over time.

“O death, where is thy sting?
O grave, where is thy victory?”

*In my view, an explanation that is a touch condescending considering how cleverly Lemoine has gone about proving his own intelligence.
