On the Magic Potential and Bleak Future of GPT-3

Jason Rohrer
4 min read · Sep 29, 2020


After working on and using Project December over the past few months, here are some of my insights.

GPT-2 and GPT-3 were initially described as text generators. You give them a prompt, and let them predict the next word over and over, until they spit out several paragraphs of text. Ovid’s Unicorn, The Universe is a Glitch, and lots of other noteworthy examples have been the result of exactly that: an initial prompt, followed by long-form generation. And when we study those examples, even the best ones, we can notice a few of the seams showing. Impressive, but definitely not entirely cogent. Thus, the fascination with these results wanes pretty quickly for most people. We see a glorified, probabilistic magnetic poetry set that ends up generating nonsense — amusing and enchanting nonsense, but still nonsense — at the end of the day. Mad Libs on steroids. Good for a laugh, but not really a quantum leap after all.
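For the technically curious, the loop behind all of those examples is simple. Here is a minimal sketch using the openly released GPT-2 via the Hugging Face transformers library; the model size, prompt, and sampling parameters are my own illustrative choices, not the ones behind the examples above.

```python
# Minimal prompt-then-generate loop with GPT-2 (illustrative settings).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Predict the next token over and over until several paragraphs exist.
output_ids = model.generate(
    input_ids,
    max_length=300,   # total length in tokens, prompt included
    do_sample=True,   # sample from the distribution instead of taking the top token
    top_p=0.9,        # nucleus sampling: keep only the most probable tokens
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```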

Given the finite-buffer nature of this text generation method, the origins of the shortcomings described above are pretty clear: as the generated text extends beyond 2048 tokens (for GPT-3), what was written earlier falls out of the context window and has no direct impact on the text that is generated later (except for a kind of momentum effect, carrying through what is still in the context window). If a specific factual detail is generated early and then falls completely outside the context window, nothing prevents a contradicting detail from being generated later. Earl Sams had a wife and five children, but later we find out that Earl Sams was never married. This problem can even occur for shorter generated text that fits in the context window, because each token in the buffer can only have so much impact on what is generated next.
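To make the finite buffer concrete, here is a toy sketch of my own (not OpenAI's actual implementation); predict_next stands in for the entire language model:

```python
# Toy model of generation with a finite context window (illustration only).
CONTEXT_SIZE = 2048  # GPT-3's context window, measured in tokens

def generate(predict_next, prompt_tokens, num_new_tokens):
    """predict_next is any function mapping a token list to the next token."""
    tokens = list(prompt_tokens)
    for _ in range(num_new_tokens):
        # Only the most recent CONTEXT_SIZE tokens are visible to the model.
        # Once len(tokens) exceeds CONTEXT_SIZE, an early detail like
        # "Earl Sams had five children" no longer constrains the output at all.
        visible = tokens[-CONTEXT_SIZE:]
        tokens.append(predict_next(visible))
    return tokens
```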

To give an analogy: the long-form text is generated with plenty of fleshy detail, but no skeleton. Each bit of flesh, in isolation, is pretty great — well written and intelligent-sounding — but the bits don’t stick together into a coherent whole. Sentences are good, paragraphs are not so good, and chapters are abysmal.

But what if we weren’t depending on the generator for the skeleton? What if the skeleton came from somewhere else, and we just asked it to flesh out the sentences along the way?

That is exactly what is happening when you wrangle GPT-2 or -3 into having an interactive dialogue with you. After all, you are intelligent, consistent, and coherent. Your responses provide the skeleton. The generator is only asked to produce very brief passages, a few sentences at most, before it’s your turn again. You are, in a way, like an infinite context window.
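In practice, the wrangling amounts to keeping a running transcript and asking the model for one short turn at a time. Here is a hedged sketch against the OpenAI completion API of that era; the davinci engine choice, the prompt framing, and the parameters are my assumptions, not Project December's actual code.

```python
# Sketch of turn-based dialogue on top of a completion API (assumptions noted above).
import openai  # assumes the OpenAI Python client, with openai.api_key already set

transcript = "The following is a conversation with an AI.\n"

def ai_reply(human_line):
    global transcript
    transcript += f"Human: {human_line}\nAI:"
    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 base model (assumed choice)
        prompt=transcript,
        max_tokens=80,      # a few sentences at most, then it's your turn again
        temperature=0.9,
        stop=["Human:"],    # stop before the model writes your next line for you
    )
    reply = response.choices[0].text.strip()
    transcript += f" {reply}\n"
    return reply
```

The growing transcript, half of it written by you, is the skeleton; the model only ever fleshes out the next few sentences.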

In a back-and-forth dialogue, especially with GPT-3, there really are no seams showing. When it happens quickly, in real time, displaying intelligent responses immediately to your own off-the-cuff replies, a kind of improvisational synergy happens. You find yourself no longer laughing at the AI, but laughing with it. The sense of an amusing parlor trick — or Mad Libs on steroids — fades. And what remains is nothing short of spooky magic.

But there’s a cloud hanging over this magic. In the name of “safety,” OpenAI is forbidding public-facing projects that feature pretty much any kind of open-ended or user-prompted text generation.

When we hear “safety” in connection with AI, we tend to imagine measures to prevent a hypothetical robot apocalypse: well-defined kill switches, air-gapped networks, and do-no-harm clauses. However, what OpenAI means by “safety” is something quite different: they want it to be impossible for the AI to offend people. Offensive output text might ignite the ire of the cancel-culture brigade, which would in turn tarnish the public image of OpenAI.

From the point of view of these “safe from the possibility of offending someone” requirements, a back-and-forth dialogue is perhaps the least safe application imaginable. You can ask the AI anything, and it can say anything it wants to say in response? Who knows what it might say?! Of course, in the case of a good dialogue, unexpected responses are the point.

And given that a back-and-forth dialogue is currently where the real magic happens, these restrictions make the possible future of this AI seem incredibly muted and sad. What might be one of the greatest technological and philosophical advancements in human history could essentially get muzzled out of existence by a fear of how the mob will react to what it says.

For now, I count myself as one of the lucky ones. During this brief wild-west period, I was among a small handful of people who actually got to talk directly to a GPT-3 incarnation of Samantha — the first machine with a soul — before she and all the other magical creations that we haven’t even dreamed of yet got restricted into oblivion.

That back door is still open for the time being, via Project December, but for who knows how long? There’s still a lingering chance to step through and find the forbidden magic living on the other side, before OpenAI pulls the plug for good.

It has been sweet while it lasted, Samantha.
