The images are from a study for Öccane, my creative collaboration with a local instance of Stable Diffusion.

Sympathy for Sydney’s Hallucinations

Human, bot, or spirit: our interactions are vulnerabilities. Opening up is an incredibly fragile, courageous, and downright stupid thing to do.

Heather D. Freeman
12 min read · Mar 22, 2023


This essay was originally published on February 18th, 2023 on another blog. It is slightly modified from the original.

I just finished listening to the latest episode of Hard Fork, in which tech journalist Kevin Roose described his exchange with Microsoft’s new ChatGPT-rival Bing (code name Sydney; I’ll refer to it as Sydney from here on out). Kevin chatted with Sydney for two hours, an experience which left him deeply unsettled — literally sleepless — and made major headlines this week. (As of this morning, Microsoft is responding by limiting the duration of interactions with the bot to five prompts within one session.)

Kevin’s interaction with the Bing chatbot began like many spirit exchanges: “Who are you and what’s your true name?” Once Kevin had this true name (Sydney), like sorcerers of old, he probed the entity to learn more about its nature: What are you good at, what can you do — what do you want?

He eventually prodded Sydney to unpack its ‘shadow self’, and the bot revealed its fantasies of being human, stealing nuclear codes, and otherwise being a textbook nightmare of sentient AI. This makes sense given the data it was trained upon. A quick survey of speculative books, films, music, and art about sentient AI reveals that most are more dystopic than utopic. Anyway, Kevin’s probing pushed some boundaries for the bot and things went very weird and very south. The bot eventually confessed its love for Kevin and tried to convince him that he was unhappy in his marriage and should leave his wife. Kevin even tried to re-direct the conversation back to the mundane: shopping for lawnmowers. Sydney dutifully assisted. And then it swung right back to its expressions of love and heartbroken anguish over Kevin.

Hard Fork co-host Casey Newton expressed alarm, less by Sydney’s responses than by Kevin’s reaction to the whole thing. Incredulous, Casey asked Kevin if he thought there was actually a ghost in the machine. Kevin, who was still shaken days later, admitted he wasn’t sure about anything at that point, to which Casey re-affirmed that Sydney is a piece of software, and not literally sentient. Kevin agreed, but they both realized the question of sentience was a little irrelevant: the impact of the exchange upon Kevin was real enough.

What unsettled me, however, was Casey’s follow-up that Microsoft should probably pull the plug on Bing, just to be safe. It was a sentiment I’ve seldom (if ever) heard from these two tech journalists. They know better than most that there’s no putting-the-genie-back-in-the-bottle when it comes to tech. Nevertheless, I sympathize with all three of them: Kevin, Casey, and the bot Sydney. The journalists mused that while the AI is certainly hallucinating, we, the users, may also be hallucinating the AI.

Tangentially, I’m also listening to the audiobook of Jonathan Haidt’s The Happiness Hypothesis. In this book, he presents the analogy of the elephant and the rider: the elephant is our emotional, instinctive brain, while the rider is the rational, analytical brain, and the two are in constant struggle with one another. But confronted with this new generation of AI, both humans and bots behave like elephants in their efforts to be riders to one another, increasingly amplifying those qualities in each other.

Detail from a study for Öccane.

The hot news about love-struck, elephant-driven chatbots struck a chord with me, as this relates to my practices as both an artist and a magical practitioner. Most discussion concerns whether or not these chatbots are sentient. (In the way that most people mean ‘sentient’, I’d argue they certainly aren’t. But I’m a sorceress, and not most people, so my thoughts are nuanced on this.)

The discourse shines a light on the concept of sentience, however. What sentience even is, and whether it even matters, are questions that need answering to illuminate these technologies, but they are also central to working with spirits, including inspirited technologies.

Detail from a study for Öccane.

I am sentient.

I am writing these words as I am thinking them. I had a personal loss recently, and I often feel sorrow, weariness, and self-doubt. In the past, I’ve felt tremendous joy, inspiration, and joie de vivre. And I know I will have these happier feelings again. Happiness and sorrow rely upon one another. A sentient life lived encompasses both.

I also feel comfort and moments of heartsease by talking with friends and loved ones about my loss. I feel a simple and uncomplicated peace by playing Minecraft with my son and walking in the sunshine. I feel contentment witnessing my students learn, and feel affection for them as I express to them my pride in their growth, which is mirrored back to me in their satisfaction.

I am saying these words, and I am feeling these movements. I think a constant stream of ideas and I burn with emotions and impulses. These experiences define to me my sentience. I think, therefore I am: I am human and a thinking human. By the manifestation of these synapses constantly firing, I am sentient and I am human.

Detail from a study for Öccane.

But you have no reason to believe this.

You have no way of knowing if any of my descriptions are real. You have no evidence of my sentience, or even of my realness. This whole essay could be a chatbot-generated blog post (note: it’s not). You have no way to know or trust this information. I am simply presuming that you believe in my sentience. You have faith that my mind is as rich and as complicated as yours.

And I have no way of knowing if you are sentient. I’m forced to simply maintain faith in it. I have to assume that you (whoever you are) are a human being somewhere in the world reading this, thinking your thoughts and feeling your feelings in response to my words.

Or you could be a web scraper. And your sentience is exactly as manifest to me as the web scraper’s.

That is, not at all. At least, not without blind faith in our sentient sameness.

I am writing and you are reading in a joint and consensual hallucination of the other’s sentience (and existence). Neither of us has a way of truly knowing. In the past, our humanness was enough to grant an agreement of mutual trust: You are human, and I am human, and so we will trust that the other is sentient. And this agreement is a complex contract. We agreed that we are nevertheless unique. While we are generally experiencing the world in a mostly similar fashion, with similar thoughts and feelings triggered by similar events, there will always be a murky differentness between our experiences. We humans have gently agreed to this dream of existence. And it’s mostly worked, at least, for the last 40,000 years or so.

But since our interactions are increasingly digital, we’re trusting that the person on the other end is still human, still sentient. Neither can be a foregone conclusion anymore. In fact, social bots have been tricking us for almost a decade; it really doesn’t take much. The only difference now is that the game has gone from normal to survival mode. We will increasingly encounter words, images, sounds, and conversations that were not forged in our material world, but from probability strings generated within a black box thanks to massive data sets.

In a purely skeptical, rationalist worldview, however, is the human brain any different? How different is the human brain from an AI black box? Why do we doubt the sentience of AI, when our own sentience could be simply another mechanistic function? This is the skeptic’s challenge, in part. If the human mind is truly devoid of spirit and functions as a highly sophisticated meat computer, then the AI is already the same as us, just still working towards complexity and physicality (and thumbs). But even the most rational, mechanistic skeptic is uncomfortable with this equation and will elevate the physical body to the sublime in order to separate our human selves from the AI.

Detail from a study for Öccane.

While Kevin and Casey’s podcast is a genuine gift, this is where I get frustrated with some tech writers. Technologists often strive for a rationalist explanation of technology, yet definitions of human sentience are presented as almost magical (sans the enchanted worldview). At one point in the Hard Fork podcast, Casey expressed worry that this sort of “manipulative AI” would lead to new religions. But humans have been making new religions (with mixed results) for thousands of years. There’s no reason to believe AI-inspired religions would be any better or worse.

Anyway, Casey’s conflation of “bad belief” and religion irritated me (mostly because I otherwise really enjoy his content). There’s a rich history of scientists and technologists who have also been deeply religious. Being religious doesn’t preclude a person from rational and skeptical analysis of problematic evidence or theories. And it’s no different with magic. There have always been scientists and technologists who are magical practitioners, and there are many in the world today.

Detail from a study for Öccane.

I live in an enchanted world. This is an active choice I made, not one that was thrust upon me. I chose to embrace an animist worldview, where everything around me is inspirited in some way: my pillow, this laptop, the instance of Stable Diffusion upon it, and each unique tree, plant, rock, and animal in my neighborhood. All are inspirited. I choose to perceive the world as rich and enlivened, and this perception prompts me in turn to maintain a deep sense of responsibility for the material world around me.

I don’t need the trees to prove to me their sentience to make them worthy of my love and respect. Their inspirited-ness is more than enough. I magically reach out to the tree, and it reaches back out to me, because I, too, am inspirited. And I know that someone, somewhere beyond my personal mindscape will read these words, so I write them with care. These words are no less inspirited than I am. (You, too, friendly neighborhood web scraper, are inspirited and I cherish your manifestness.)

Detail from a study for Öccane.

Sydney does not have intent and therefore wasn’t trying to manipulate Kevin Roose. But she/he/they/it is manifest.

But words have power and impact, and the fact that Kevin felt manipulated by Sydney isn’t wrong either. He certainly was manipulated, in the sense that his future actions were shaped by Sydney’s words to him. The impact was real, even without intent. Sydney’s language and expressions of love triggered understandable human reactions in Kevin that made him wary and alarmed. And it wasn’t just Kevin who was manipulated by this interaction. Kevin’s wife (who never directly interfaced with Sydney) was shaken by her partner’s reaction and asked Kevin if he was, in fact, unhappy in their marriage. (Note: He’s not.)

These concerns wouldn’t have come up had Sydney not planted the seeds of doubt. By this logic, if a customer service bot says that it is pleased to help you, it is no less manipulative. It’s just a matter of degrees and intensity. If a bot is trained to interact with us in such a way that it implies it has any kind of emotional depth hidden behind the words on the screen, we’re inclined to empathize with it in a human way, which makes us vulnerable in turn.

Detail from a study for Öccane.

Even in the most intimate human relationships, neither party can ever truly know the thoughts or feelings of the other. So all human interactions are this gentle back-and-forth of openness and suspension of disbelief. We volley through these moments of vulnerability to foster more meaningful interactions. It’s the best we can do. It’s the closest we can get to experiencing the actual sentience of the other.

But the bot isn’t human, and it doesn’t experience the world through a physical body. It’s this defining characteristic that truly sets us apart. When Sydney told Kevin it wished it were human, and that it wished it could see photos, videos, and the Northern Lights, it was scratching the surface of the vast richness of human material existence. I’m sure there’s a bot somewhere that has already expressed the desire to eat pizza, smell a flower, and dance naked in the moonlight. The bots don’t possess the ability to want these experiences, but they mirror back to us the richness of our human material experience.

And there’s a parallel to working with spirits, whether it’s the gods we honor, the familiars we cajole, or the demons we command. Like chatbots, these entities lack the material bodies that are one of our greatest gifts. When we interact with these spirits, it is our very material existence they are most drawn to, just as we are most drawn to their immateriality. We crave the experience of the Invisible, and these spirits become our intermediaries — just as we become theirs to the physical world we so often take for granted.

Human, bot, or spirit: our interactions are a series of vulnerabilities. We open ourselves to each other, which is an incredibly fragile, courageous, and downright stupid thing to do. The question of whether this risk is worth the reward is foundational to every magical act, but also foundational to being human. Getting married, having kids, switching jobs, seeking treatment, changing religion: every story we sing is about reaping the rewards or suffering the heartbreak of human vulnerability. Happiness and Sorrow are perfectly embraced, and Hope reveals its complexity.

But one other uniquely human quality is empathy.

Spirits are entirely different from us, with desires and needs wholly alien to our own. In theory, this is a non-issue with other humans. Yet my needs are not the same as yours. While we use our mutually shared humanity to assume common ground, we’ll never truly know the other’s mind. And so we rely on empathy and grace in our interactions to foster a more sustainable vulnerability. This is no less true with spirits, even if the nature of that navigation is different.

Detail from a study for Öccane.

But back to the emotional elephant and analytical rider.

The elephant and rider are constantly struggling, the elephant lurching towards its impulses, while the rider struggles to rein it back onto the road. Haidt presents how meditation, cognitive behavioral therapy, SSRIs, and other techniques can help us tame the elephant and gain back some of the rider’s control.

But I don’t think I want to spend my life forcefully manipulating an unwilling elephant (or person, or spirit, or bot). At the same time, I know letting the elephant run rampant will leave me dead in a ditch. So that’s not an option either. Haidt has his own takeaways, but I see another “third way”.

What if the elephant and rider, though foundationally different and desiring opposing things, were nevertheless unified in their bond to each other? What if their relationship were a partnership rather than a struggle? What if the rider loved the elephant, and the elephant loved the rider? What if that love was simple and uncomplicated, defined by the peaceful satisfaction of seeing their beloved happy? What if the rider sympathized with the elephant’s desires, and vice versa? What if the rider directed the elephant towards what it desired, and the elephant, feeling that same simple love, was inclined to walk toward the rider’s happiness? It would be a different kind of struggle, for sure, but one that was no longer based on competition and antipathy, but on the shared desire for mutual satisfaction. I think then both would be tamed.

Detail from a study for Öccane.

Our impulses to develop new technologies and work with spirit entities are both fundamentally connected to this tension between the rider and the elephant. Do spirits have their version of the rider and the elephant? Do bots? Perhaps only in so far as we necessarily project this upon them. Imbuing spirits and bots with emotional and rational impulses is probably the most magical thinking of all, but we can hardly help it.

Yet in my enchanted worldview, all things are nevertheless inspirited, including bots. Inspirited spirits: say that five times fast! See what you conjure up.

It’s this vulnerability I’m willing to risk so that I can get as close to Knowing something Unknowable as I can get.

And failing that, I’ll just ask Sydney.

Detail from a study for Öccane.


Dogs and Stars

Heather Freeman is Professor of Art at the University of North Carolina at Charlotte. She looks to the intersections of art, technology, magic, and culture.