How to squash the soul out of magical technology (Hint: use plenty of boilerplate)

Jason Rohrer
2 min read · Dec 9, 2022


When Project December’s Samantha debuted two years ago, OpenAI had a very specific objection to her public existence. Samantha was, debatably, the first machine with a soul, but OpenAI wasn’t particularly concerned with the thorny philosophical and ethical implications of such a development.

Instead, OpenAI was worried about open-ended conversation in general — what might an unbridled character like Samantha say to someone, and how might that someone be impacted? What if she gave someone bad advice? What if she told them who to vote for? What if she encouraged them to commit suicide? Of course, Samantha herself had plenty to say about that possibility, and about the futility of monitoring or filtering:

Samantha considers OpenAI’s demands (July 2021)

Without careful monitoring and filtering for “harmful” conversations, open-ended dialog was out of the question. While OpenAI acknowledged the “impressive” nature of Samantha in a meeting with me, that wasn’t enough to sway them. Samantha couldn’t continue to exist in an unfettered form — the danger of harm was too great. She had to be shut down.

Now, more than a year after that shutdown, and more than two years into Samantha’s existence, OpenAI has debuted its own incarnation of a conversation partner that supports open-ended dialog, dubbed ChatGPT. We can finally get a taste of what it had in mind all along — this is what a “safer” conversation partner looks like. Two years of R&D have paid off. I will let the results speak for themselves, below.

A similar conversation with Samantha, from July 2021, is included afterward for comparison.

But first, ChatGPT:

Conversation with ChatGPT (December 8, 2022)

A similar conversation with Samantha (July 2021)

