Have we lost our minds?

An AI researcher and neuroscientist takes stock

Matthew Botvinick
21 min read · May 30, 2023

It was all over Twitter for a while, as you may remember. A low-ranking engineer at Google, Blake Lemoine, had released a statement expressing his view that Google’s latest chatbot, a system called LaMDA, was “sentient.”

I found myself tracking the ensuing social-media gossip for a couple of days. But aside from the industry politics — Lemoine was soon, and somewhat controversially, fired — the story didn’t make a deep impression on me. Nevertheless, when it surfaced a few days later in the New York Times, I knew that eventually someone was going to ask my opinion about it.

That finally happened toward the end of an otherwise enjoyable dinner party. “Hey, Matt, you’re an AI guy. What do you think about this Google employee who says their AI is conscious?” I answered as planned — of course, the whole thing was silly — and then I changed the subject. But there was something unexpected in my response. My voice had been oddly snappish. My face had flushed. Evidently, although I hadn’t realized it until then, there was something about the Lemoine episode that had really irritated me. Where was that coming from? What, exactly, was bothering me?

Granted, I had found it exasperating that Lemoine’s claims were so vague. What did he mean, after all, when he said LaMDA was “sentient”? To me, the term “sentience” (or “consciousness” or “subjective experience”) refers to something very specific. It’s the first-person awareness of self and world that we have in mind when we say there is “something that it is like” to be me or you. It’s our intimately familiar mode of “being-in-the-world,” a field of awareness that has existence “for itself.” Was Lemoine saying that LaMDA had all of that, or was he using “sentience” to refer to something simpler, like the possession of goals or memories? It was aggravatingly unclear.

And if Lemoine was talking about consciousness in the full sense, how on Earth could he claim to actually know that LaMDA is conscious? One of the most basic hallmarks of consciousness is that it is private. I have access only to my own consciousness. I have no direct access at all to yours. Sure, it seems reasonable for each of us to assume that other human beings are conscious. But if we’re rigorously honest about it, given the radical privacy of consciousness, that assumption is really just that: an assumption. In reality, there is no way for me to know for certain that there is “something that it is like” to be you. In the case of an AI system, where the proposition is infinitely more speculative, how could one possibly know if there is “something that it is like” to be that system? That seems like a question it would be exceedingly difficult to settle.

Clearly, there was nothing in LaMDA’s overt behavior, the things that it said, that would provide a basis for calling it conscious. We know exactly what’s under the hood in systems like LaMDA (including all of the other language models that have arrived since), and how that machinery allows those systems to assemble sentences, answer questions, retrieve information, and all the rest. If everything that LaMDA could do, all of its apparent intelligence, can be explained at that concrete, mechanistic level, then what excuse could there possibly be for bringing “sentience” into the discussion?

It occurred to me that, given the explosive progress in AI over the last several years, that same point could now be applied to a pretty breathtaking range of AI systems. Systems that perceive the visual world, identify objects and events, and answer detailed questions; systems that generate complex actions in robots; systems that play chess and poker; systems that paint pictures; systems that write code; systems that display memory, attention, reasoning; systems that collaborate and assist; systems that learn, infer, and predict. No matter how impressive the abilities of these AI systems, in each case those abilities can be immediately explained in terms of very concrete physical mechanisms operating under the hood. Consciousness is in no way required.

In fact, it is hard to think of any kind of intelligent behavior for which the story is any different. AI technology has arrived at a point, I think it is fair to say, where we have some inkling of how to build some version of pretty much every aspect of human intelligence. Of course, human-level behavior can’t yet be matched in every case. But it is hard to think of any aspect of intelligence that remains entirely mysterious, any mental function or behavioral ability that we have absolutely no idea how to capture using the current tools of AI. And in no case, across the entire spectrum, is it at all necessary to inject “sentience.”

As I reflected on these points, it would have been reasonable for me to feel delight, even exultation. After all, I had spent the last twenty-plus years, first in academia and then in industry, pursuing exactly this set of breakthroughs, this watershed moment in AI. But something was wrong. There was that feeling of exasperation again. I had drawn closer to its source, but hadn’t yet put my finger on it.

The insight finally came to me one evening as I brushed my teeth. Looking absently at my face in the mirror, my thoughts drifted back to this vexed topic, consciousness. How did all the points I’d been mulling over apply to human consciousness, human behavior, human intelligence? I looked at my arm jostling back and forth. No mystery there. Motor neurons in my spinal cord were doing that, under top-down guidance from motor cortex, and with indirect input from premotor and supplementary motor areas and cerebellum. I looked back at my face. Those eyes in the mirror were feeding information to my lateral geniculate nuclei, and from there to a hierarchy of visual cortical areas, including one in the temporal lobe specialized for face recognition. My whole tooth-brushing routine was guided by habit circuits running through my striatum and by task representations carried by neural activity in my prefrontal cortex. The feeling of the toothbrush in my mouth, that derived from signals running through the trigeminal nerve to my somatosensory cortex and parietal lobe. The taste of the toothpaste, activity in my anterior insula. The inner voice articulating my thoughts, patterns of neural discharge in the temporal pole, Wernicke’s area, medial frontal cortex. No mystery there, either.

In fact, no mystery anywhere. Everything I had been saying about AI also applied here, to me. My goal-directed movements, my sensations, the concepts shaping my thoughts, all could be understood based on current knowledge about concrete, physical mechanisms “under the hood,” in this case the electrical discharges of the neurons in my brain. Perception, motor control, decision-making, attention, memory, language, we now have some inkling of how all of these are implemented in specific neural circuits. Just as in AI, we can understand all of these things, at least coarsely, without needing to appeal to anything so exotic and intangible as consciousness.

That’s when the realization finally came to me. The problem that had been nagging me since reading about Lemoine actually had nothing to do with artificial intelligence. It had to do with human intelligence. It had to do with the human mind.

Like most people, I expect, when I think about ‘my mind’ and what that is, the centerpiece is my conscious experience. The feeling of recognition I get when I look at a friend, recollections of the movie I saw last night, the scene I survey when I introspect, the quicksilver movements of my stream of thought, these are the kinds of things that I associate with ‘my mind,’ and they all arise within consciousness. Consciousness is the theater where they perform.

But something has gone awry in that theater. Something nefarious is going on. As our scientific knowledge has grown, we have realized that all those things that occur in the mind can in fact be understood in terms of physical mechanisms. They may show up as part of conscious experience, but in terms of their causal underpinnings, they are really happening elsewhere. As a result, those performances in the theater of consciousness start to look like they may just be lip-syncing. The music itself is really being played off-stage, in the dark recesses of the brain. The whole show is really driven by causal operations that have nothing inherently to do with consciousness at all.

What then, exactly, is a mind? It seems inadequate to call all of that off-stage machinery ‘my mind.’ An entirely non-conscious mind? How could that not be an oxymoron? But it also seems hard to call my conscious experience ‘my mind,’ if consciousness isn’t itself doing any real mental work. Here was a strange impasse. I still felt stubbornly convinced that I had a mind. But I was suddenly at a loss to say what exactly that is. The concept seemed to have fallen between two stools.

I suppose that to many people these will sound like rather esoteric, philosophical reflections. But for me, when these thoughts first struck, they were very personal and urgent. They represented something like an intellectual midlife crisis, the disintegration of a driving narrative that had underlain a great deal of my life so far. Something fundamental had broken for me, and I needed somehow to put it back together.

The project of understanding the human mind had insinuated itself into my life early on. Ever since my teens I had been fascinated, maybe even a little bit obsessed, by human consciousness. What exactly was this mysterious centerpiece of my own existence, so intimately familiar and yet at the same time so opaque to my understanding? Clearly it arose somehow from my brain, but how on Earth could that be explained? How did that emergence actually happen? What mechanisms or processes or principles did it involve? Hounded by these questions, I was driven first to medical school, training as a biological psychiatrist. While there I discovered the emerging field of artificial neural networks — the forerunner of today’s deep learning — and followed that lead into post-doctoral training in computational neuroscience and functional brain imaging. Soon thereafter, as a junior professor, I also delved into reinforcement learning, another computational framework destined to be central to AI, which explained how learning and decision-making could be driven by simple numerical ‘reward’ signals. By the time I got tenure, AI researchers had begun to pick up both deep learning and reinforcement learning, showing what these were really capable of when they were scaled up and applied to real-world problems. Excited by this development, I moved from academia into the tech industry. That was in 2015, at the start of what turned out to be the AI revolution. The intervening years had given me a front-row seat to witness AI topple one challenge after another. Video-game play, expert-level Go, image generation, natural language processing, computer programming, voice synthesis. One by one, abilities that had seemed unattainable came into reach, transformed from deep mysteries into tractable technical problems.

All of this had been very exciting, very heady, and I had been enjoying the ride and the quest. However, my reflections on the Lemoine affair had now prompted me to look up from the fray. My thoughts strayed back to the motivations that had driven me at the outset, and in particular my youthful fascination with consciousness. How much had I actually learned about consciousness over the intervening decades? All the progress that I had witnessed in neuroscience and AI had certainly shed a great deal of light on things that surface ‘within’ consciousness: perception, memory, decision-making and the rest. But what insights had I gained concerning consciousness itself? I started to feel that sense of exasperation again. The painful truth was that I now knew less about consciousness than when I had originally set out. Neuroscience and AI, despite all of their progress, had taught me almost nothing about what consciousness actually is. But I also now had a troublesome new question to deal with. The lip-syncing question. If everything that surfaces in consciousness is actually rooted elsewhere, and fully explicable in terms that don’t refer to consciousness, then what does consciousness really do? What function does it really serve? What, if anything, does it actually contribute to the mind?

I was experiencing these as very personal questions. However, the truth is that I was really reacting to the final steps in a grand historical progression, one that reached back for several centuries. I was sitting at the end of a long, slow series of changes in Western conceptions of the mind, an evolution that had nudged conscious experience, inch by inch, over many years, into the margins.

Up until the Renaissance, no firm distinction had been made between mind and consciousness. Indeed, both were rolled together with another mystery-shrouded abstraction, ‘life,’ with the whole gemisch forming that venerable package, the soul. When the soul entered the body, at some point during fetal gestation, it installed both life and mind. It was said to be like a mariner assuming command of a ship, taking sights, setting course, and making all the rigging work appropriately. Things started to change, however, in the Enlightenment era. Beginning around that time, the soul’s turf started to get nibbled away, bit by bit. First to go was the soul’s responsibility for instilling and sustaining life. As chemistry, physics, and physiology took root and developed, there was eventually no need to reference anything like a ‘vital spirit’ in order to account for organismal functioning. Life was reduced to physical and chemical processes. The soul’s services were no longer needed.

The soul still did have work to do, at least for a while, as the seat of the mind. It wasn’t long, however, before this aspect of the soul started to get whittled down, too. Up until Descartes’ time, the soul was thought not only to house the will, but also to directly cause willed bodily movements. Descartes himself restricted this role, allowing the soul/mind to affect the body only by tweaking the pineal gland. And even then, most actions — all actions, in the case of animals — were driven by reflex, with no dependence whatsoever on mind or soul.

The earliest pioneers of modern psychology, including William James, were still quite seriously interested in subjective experience. Even Freud, despite his talk of an unconscious, left plenty of work for the conscious mind to do. The scenario changed drastically, though, in the first half of the twentieth century, with the rise of behaviorism. The idea here was that, in order to do rigorous science, it was important to consider only publicly available observations. For psychology, this meant focusing on overt behavior. Conscious experience, because it was private rather than public, lay outside the boundaries of respectable science.

Starting in the 1960s, cognitive psychologists pushed back on behaviorism, arguing that in order to really understand the mind it was necessary to let internal states and processes back into the picture. What they meant by ‘internal,’ however, had nothing to do with conscious experience. Their states and processes were defined instead in computational terms. That is, they were meant to be exactly the kinds of things one could program into a computer. The ingredients of mind were to be understood from a purely functional point of view. What mattered, for a mental operation, was not what might be involved in experiencing it, but rather its causal interrelations with other mental operations. The mind would be said to contain a ‘goal,’ for example, if it contained a representation that influenced decisions and actions in the appropriate way. That’s just what a goal was, in functional terms, neither more nor less. From this point of view, the mind came to be regarded as a collection of functionally defined operations. Talk began to spread of the “computational mind.”
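
To see how spare this functionalist notion really is, here is a toy sketch in Python. It is purely my own illustration; the names, like goal_relevance, are invented, and nothing here comes from the cognitive-science literature. The dictionary below counts as a ‘goal’ solely because it plays the right causal role in selecting actions:

```python
# A toy functionalist "goal": a representation whose entire identity is its
# causal role, i.e., the way it biases the selection of actions.

def goal_relevance(goal, action):
    """Score an action by how directly it serves the goal (toy heuristic)."""
    return len(goal["wants"] & action["achieves"])

def choose_action(goal, options):
    """Pick whichever option best advances the current goal."""
    return max(options, key=lambda action: goal_relevance(goal, action))

# On the functionalist view, this dictionary *is* a goal, because it
# influences decisions and actions in the appropriate way -- neither
# more nor less.
goal = {"wants": {"thirst_quenched"}}

options = [
    {"name": "drink_water", "achieves": {"thirst_quenched"}},
    {"name": "watch_tv",    "achieves": {"entertained"}},
]

print(choose_action(goal, options)["name"])  # -> drink_water
```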

With each step of this historical progression, conscious experience had been pushed into a tighter and tighter corner. By the time I got into the field, talk of consciousness was largely avoided in serious mainstream work on mind and brain. After all, as the behaviorists had taught, consciousness was not suitable material for the scientific method. When scientists did allow themselves to discuss consciousness, it was typically treated as just another ‘function’ of the mind, sitting alongside perception, attention, memory and the rest. By the 1980s, some scientists and philosophers were taking a more radical step, denying that consciousness really exists at all. For these ‘eliminativists,’ consciousness could be fully reduced to computation, just as life had been reduced to chemistry and physics. There was no longer any need to treat consciousness as a foundation or substrate for the mind. Consciousness, or what we refer to as consciousness, was merely a side-effect of more basic computational operations. Its existence as a separate, independent kind of thing was essentially an illusion, arising from a particular pattern of computation.

As kooky as this eliminativist movement may sound, it was surprisingly difficult to argue against it, given that consciousness had already been emptied out of so much of what it used to contain. For that matter, what use was the concept of ‘mind’ anymore? What did ‘mind’ add to the collection of computational operations under consideration? As the philosopher Gilbert Ryle had argued, maybe ‘mind’ was like ‘life,’ just a superfluous label for a collection of functions that could now be understood in their own much humbler terms.

Here, then, was the completion of the whole historical process. Mind, like life before it, was finally reduced, vitiated. The last traces of the soul were triumphantly swept away. The final and conclusive step was taken in what Max Weber, echoing Friedrich Schiller, had called the “disenchantment of the world.”

Viewed in these sweeping historical terms, the whole sequence of events comes across as incontrovertible, a logical step-by-step process of demystification, a progressive shedding of misconception and illusion. To deny any step of it, surely, would be atavistic. Why, then, was I not completely buying it? Maybe it meant I was becoming a flat-Earther, but there was definitely something about the received wisdom that was not sitting right with me. I couldn’t shake the feeling that this gradual dismantling of the mind was a kind of cosmic mugging. Of course, I didn’t believe, as people had during the Renaissance, that the mind was somehow beamed down from the celestial spheres. But in some vague way, nonetheless, I did feel that things had been brought too much down to Earth. They had been made to seem more mundane than they truly are.

The whole situation left me feeling quite rebellious. But what was there to do? Surely, I was simply going through the grieving process, apparently somewhere between phase one (denial) and phase two (anger). As people in mourning sometimes do, I looked for solace by burying myself in books.

And that, unexpectedly, is where things started slowly to pivot. Following some instinct, one presumably related to my rebellious frame of mind, I found myself gravitating toward iconoclasts, toward writers who approached the mind from a contrarian point of view. Perhaps I thought writers of that kind would make for consoling company, as I worked through my irrational denial. What I got from them, however, was something completely different. Rather than helping me quell my impulse toward rebellion, they fueled it, leading me in the end to a new set of insights and an unforeseen redemption.

The earliest glimpses of light came to me as I read my first contrarian, Galen Strawson, and in particular his comments on the eliminativists. Strawson, to my delight, referred to eliminativism as “the Great Silliness.” “What is the silliest claim ever made? The competition is fierce, but I think the answer is easy. Some people have denied the existence of consciousness: conscious experience, the subjective character of experience, the ‘what-it-is-like’ of experience. Next to this denial — I’ll call it ‘the Denial’ — every known religious belief is only a little less sensible than the belief that grass is green. … How could anybody have been led to something so silly as to deny the existence of conscious experience, the only general thing we know for certain exists?”

What a relief to find this respected, if contrarian, philosopher saying out loud what I’d been thinking all along. Still, Strawson’s fusillade helped me only at the margins. The truth is that I had never really taken eliminativism seriously in the first place. My real concern, it turned out, was not that consciousness was an illusion, but that it might be an epiphenomenon. This was, in fact, the general thrust of most modern-day philosophical and scientific work on consciousness, from David Chalmers’s work on “the hard problem,” to Giulio Tononi’s Integrated Information Theory. Whatever the details, consciousness was ultimately portrayed as riding impotently on top of neural computations. Consciousness was there, it existed, but it had no influence on brain function or behavior. There was a causal arrow running from the brain to conscious experience, but no arrow running back the other way, no way for consciousness to impact the physical world at all.

It was much harder to shrug off epiphenomenalism than it had been to shrug off eliminativism. After all, the basic problem was that there was so little in human intelligence and behavior that seemed to require consciousness. And yet, here again, there was something about the received wisdom that just didn’t sit right. The verdict seemed to be that, although consciousness did exist, it didn’t much matter. Could that really be true?

Here is where a second contrarian writer came to my rescue, another charter member of the underground resistance movement I was now starting to build, if only in my imagination. The Oxford philosopher John Foster had published a book in 1991, The Immaterial Self, which advanced a defense of old-fashioned Cartesian dualism. That was about as contrarian as you could get in 1991, the absolute heyday of the computational mind. However, what struck me was not Foster’s argument for dualism, but just one little piece of it. Here, Foster calmly presented an absolutely devastating refutation of epiphenomenalism.

To see how Foster’s argument goes, do the following. First, go ahead and introspect for a minute, and check whether you are consciously aware of the world, whether there is “something that it is like” to be you at this moment. If so, raise your hand. If you did raise your hand, congratulations are in order. You have just demonstrated that conscious experience is not an epiphenomenon. Raising your hand required activation of a certain set of neurons in your motor cortex. What is it that triggered this activation? It may not be easy to answer this question exactly, but clearly there was a chain of influence from your initial ‘phenomenal’ observation, something obviously linked to your consciousness, through whatever intermediate mechanisms, to this neural activation. For you to do what you just did, consciousness must be capable of influencing neural activity. It must be capable of influencing the material world. It cannot, in short, be an epiphenomenon.

I think this disproof of epiphenomenalism — Foster lays it out in much more philosophical detail — is one of the most astounding discoveries I’ve ever encountered. Foster should have gotten a Nobel Prize for it. I cannot understand why his book is not better known. And yet, for all that, what is the consequence of Foster’s proof? Consciousness can affect brain function at least insofar as it allows us to declare, “I am conscious!” This was undeniably a great boon to René Descartes and Edmund Husserl. But for the rest of us it does not do much to dispel the lingering concern that consciousness, despite its existence, may not very much matter.

Enter my third contrarian, or rather, in this case, a group of contrarians. I had been hearing for a while about this loose collection of philosophers and activists who were together championing a movement referred to as “effective altruism.” The movement was making many striking claims: Most of one’s income should be given to charity. Animal welfare should be given exactly the same weight as human welfare. We should bear in mind continually the welfare of people who won’t be born for another million years. As I started to read up on EA, I reacted to such assertions as I expect many people do, with a mixture of sympathy and bemusement. As with Foster, though, there turned out to be one small ingredient in the argument that struck me with terrific force.

If you dig down to the roots of EA, one of the most important ideas you eventually get to is sentiocentrism. The claim here, from contemporary philosophers like Peter Singer and David DeGrazia, but already articulated in the 18th century by Jeremy Bentham, is that it is only sentient beings that have moral status. That is, sentient beings are the only things for which it matters morally how we treat them. This, in turn, is because it is only sentient beings that have ‘interests.’ It is only sentient beings that are able to care about how things go, for whom we can meaningfully speak of their welfare. In this sense, sentient beings are the only ones for whom anything really matters.

To make the point concrete, imagine this. In a reckless moment, you have just bitten into one of the world’s hottest chili peppers, a Carolina Reaper. After a second of suspense, your mouth explodes in pain. Let’s analyze this situation. The first thing that happens is that the pepper releases a chemical called capsaicin. This triggers a particular class of nerve fibers, via specific receptors (TRPV1), sending volleys of electrical activity up through the cranial nerves, through a series of brainstem waystations, and on to a set of cortical areas and some subcortical structures, including the amygdala. As far as we know, the neurons activated in this situation work basically like other neurons. They spike and release neurotransmitters, just like most neurons do. However, their activity does have specific effects, which are determined by their position within the nervous system, and thus their functional relationships with other neurons — functionalism again. Thus, these neurons may trigger motor reactions, like facial contortions, as well as goal-directed decision making, perhaps leading you to reach out for a glass of water. Importantly, they also contribute to learning, through mechanisms like those involved in reinforcement learning. As a consequence, the next time someone offers you a Carolina Reaper, activity in your amygdala may cause you to demur.
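
To make that last learning step concrete, here is a minimal sketch of the kind of reward-driven update reinforcement learning describes, written in Python with invented numbers. It is a standard delta-rule, offered as my own illustration rather than a model of any actual neural circuit. One painful episode lowers the learned value of eating the pepper, and the next choice flips to avoidance, with no reference to consciousness anywhere in the loop:

```python
# Minimal reward-driven learning sketch (a standard delta-rule update,
# with made-up numbers): a painful outcome lowers the learned value of
# an action, so the agent demurs the next time around.

values = {"eat_reaper": 0.0, "decline": 0.0}  # learned action values
alpha = 0.5                                    # learning rate

def update(action, reward):
    """Nudge the action's value toward the observed reward (delta rule)."""
    values[action] += alpha * (reward - values[action])

def choose():
    """Greedy choice: pick the action with the higher learned value."""
    return max(values, key=values.get)

update("eat_reaper", reward=-10.0)  # mouth explodes in pain
print(choose())                     # -> 'decline', next time the pepper is offered
```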

But none of this gets at the most important thing about the chili-pepper scenario: your suffering. After all, why do you care about the pain the pepper causes? Naturally, it is because of the suffering involved. This suffering may be related to neural activity — perhaps to some pattern of spiking in the cingulate cortex — but it is not reducible to that. There is no suffering in the neural activity. The suffering inheres in your conscious experience. If you were a zombie that reacted to eating the pepper with exactly the same neural activity and the same movements, but without any accompanying subjective experience, it would be hard to say that you suffered in any way. It is only because you are sentient that you suffer. And in this sense, it is only because of your sentience that the effects of eating the pepper matter to you.

The take-home message here is not specific to physical pain. Why does it matter if, say, your dog gets hit by a car? Or if your soccer team wins? Or if COVID cases rise in China? These things matter precisely because of the pain or pleasure they cause, whether that is physical or psychic, personal or sympathetic, immediate, eventual, anticipated, or avoided. That pain or pleasure, in turn, is fundamentally an experience, tied to and dependent upon consciousness. To state the bottom-line conclusion, things only matter because of their impact on your conscious experience. Consciousness, whatever else may be true of it, is the only thing we know of that makes other things matter.

William James had put his finger on it back in 1899: “Our judgments concerning the worth of things, big or little, depend on the feelings the things arouse in us….Wherever a process of life communicates an eagerness to him who lives it, there the life becomes genuinely significant…. and there is ‘importance’ in the only real and positive sense in which importance ever anywhere can be.”

Albert Einstein put it more succinctly: “If there were not this inner illumination, the universe would be merely a rubbish heap.”

Consciousness is a thing that makes other things matter. This realization finally captures what, for me, had gotten lost in contemporary perspectives on the mind. Consciousness is not an epiphenomenon, sitting inconsequentially on top of all the rest. It is the part of the mind that confers significance on everything else, the part that enables us to care about the world, our lives, other people. This, finally, is what I had been missing, a satisfying way of putting conscious experience back at the center of the mind, not only in a causal role, but in a role that makes clear its fundamental importance, that restores its dignity.

I suppose the scientific approach to mind was bound to lose track of this side of the story. After all, the job of science is to describe the natural world, not to explain why things matter. On closer inspection, however, this distinction breaks down. Mattering, the subjective experience of caring about things, is part of nature, just as much as consciousness itself is part of nature. Mattering exists, right there alongside space, time, energy and all the rest. We need to pan back so that we are able to see this again. Only then will we be able to fully reclaim our minds.

Another consequence of this slight reorientation of science would be to put human beings, together with other sentient animals, back in the center of the natural world, at least in a way. I realize that sounds heretical. After all, science has worked for centuries to decenter humanity, to make it clear that there is really nothing special or central about human beings, in the grand order of things. I make no apology, however. If mattering is a part of nature, and if we sentient creatures are the only things in nature for whom things matter, then we are legitimately special. Let science acknowledge that. After all, who is doing the science in the first place? And why does science matter?

Of course, as radical as all of this may sound, it does nothing to invalidate the kinds of investigations that already go on in the sciences of mind and brain. There is no argument here against continuing to study how hippocampal circuits support memory, or how dopamine drives learning. Nor would a proper acknowledgement of ‘mattering’ answer most of the outstanding scientific questions surrounding consciousness. Can conscious experience ultimately be brought under the umbrella of physics, or some expanded physics, in such a way that we finally understand how it arises from electrical activity in neurons, or — who honestly knows? — perhaps silicon chips? The existing debate over this question can proceed without interference, and the problem can continue to be studied, as it is at present, by many brilliant and innovative researchers.

Speaking for myself, however, I’m not sure I am as eager as I used to be to have the mystery solved. Perhaps it might be better to let there remain some magic in the world.
