Is the unpredictability of AI a feature or a bug?

Kentaro Toyama
Published in AI Heresy
Oct 17, 2023 · 4 min read


DALL-E 3 image: Render of a futuristic robot with glowing circuits, its facial screen displaying puzzled emoticons, representing confusion

Just days after Microsoft released its AI-powered Bing search-engine-cum-chatbot to select users in February, problems surfaced. In one case, reported by Kevin Roose in the New York Times, an emoji-filled exchange saw Bing profess its love for Roose, insist he wasn't in love with his wife, and urge him to leave her for the chatbot. Roose was shaken by the creepy "conversation," as most people would be. Something else about that incident, though, should disturb us all far more: the creators of the system didn't anticipate the problem. Today's AI is wildly unpredictable.

With most classes of commercial goods, both producers and consumers want predictability. But what if, with AI, the lack of predictability is a feature, not a bug?

Consider that we’re used to unpredictability in the only serious intelligence we’ve known of so far — human intelligence. When we encounter people we believe smarter than ourselves, we expect them to have insights that we can’t fathom. We say that their ideas go “straight over our heads.” Genius is unpredictable.

But so is garden-variety, everyday intelligence. My wife and I often play out the following comical spat: I take some candid photographs of her, and show her one I like. She frowns at how she looks in it, and insists on viewing all the shots. Often, she finds a different photo to be “much better,” and mutters about my sensibilities. Over the years, I’ve tried to figure out her criteria — Is it something about the fall of her hair? The lighting on her face? Some subtle aspect of her expression? All to no avail. She’s acting on some aesthetic intelligence, but not one I can understand. We’re unpredictable to each other.

It's not just that we accept unpredictability as a pesky side-effect of intelligence; we positively expect it. Imagine that I asked you on ten different occasions to compose a haiku about snow on Mt. Fuji, each time with the same prompt. I'd expect, first, that the haikus wouldn't all be the same, and second, that some might surprise me. (When I tried this with ChatGPT, it met both expectations.) We tend to look down on rote performance. In fact, we call overly predictable behavior "robotic," exactly because early robots weren't so intelligent. We expect intelligence to be creative, and creativity is unpredictable.
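There is, in fact, a simple mechanism behind a chatbot giving different haikus to the same prompt: language models don't pick the single most likely next word, they sample from a probability distribution, often scaled by a "temperature" setting. The sketch below is a toy illustration of that idea, not real model internals; the logit values are made-up numbers.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token index from temperature-scaled softmax probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a random number and walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# The same "prompt" (same logits) can yield a different token on every call.
logits = [2.0, 1.5, 0.5, 0.1]  # hypothetical scores for four candidate tokens
samples = [sample_next_token(logits, temperature=1.0) for _ in range(10)]
print(samples)  # ten draws from the same distribution; varies run to run
```

At temperatures near zero the sampler collapses onto the highest-scoring token and the output becomes "robotic"; at higher temperatures, less likely tokens win more often, which is one knob behind the surprise the haiku test reveals.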

For decades, AI scientists shared a joke that AI was unachievable, because each time we built a computer system to accomplish an AI goal, we'd no longer consider it intelligence. In the 1970s, having a computer beat grandmasters at chess seemed like something requiring real intelligence. Then IBM's Deep Blue defeated world champion Garry Kasparov, and computer chess stopped being AI. Once grammar checking became a standard feature of word-processing software, parsing language ceased to be AI. Now that Facebook tags our photos automatically, face recognition is no longer AI. Once we understand a cognitive task well enough to write algorithms that perform it, the magical aura of "intelligence" evaporates.

But today's AI makes that joke obsolete. Even the most begrudging critics of the state of the art acknowledge that modern AI can surprise them. Or maybe it's the other way around: maybe we're finally willing to call AI intelligent because it surprises us.

AI scientists and critics often lament that today's AI is not "interpretable": an AI system's responses and behaviors should be explainable in a way that people can understand. Why did Bing's conversation with Roose go haywire? No one really knows. Research scientists are working on the interpretability problem as we speak, and I'd guess that more about how systems like ChatGPT "reason" will become transparent to us within years. But I'd also wager that thorough interpretability is beyond us, for two reasons. First, modern AI is staggeringly complex. GPT-3, the model underlying the original ChatGPT, had 175 billion parameters: 175 billion numbers whose values decide, in complex combination, how it responds. Could any person really understand what that means? Some things might be beyond human grasp, in the same way that calculus is beyond dogs.
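To get a feel for that scale, here is a quick back-of-envelope calculation (my own arithmetic, purely illustrative):

```python
# Back-of-envelope: what 175 billion parameters means in raw terms.
params = 175_000_000_000

# At 16-bit precision, each parameter takes 2 bytes to store.
bytes_fp16 = params * 2
gigabytes = bytes_fp16 / 1e9
print(f"{gigabytes:.0f} GB just to store the weights")  # 350 GB

# Inspecting one parameter per second, around the clock:
years = params / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years to glance at each number once")  # ~5,549 years
```

And merely glancing at each number would tell you nothing; the behavior lives in how all 175 billion of them combine, which is exactly what resists human-scale explanation.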

Second, we will soon discover that un-interpretability and unpredictability are features that users actually want. Not in every task: maybe not in an online search, and not in the responses from your central Internet-of-things home computer. But for, say, knowledge workers using AI to assist their creative work, what user wouldn't want over-their-heads smarts, and therefore unpredictability? What's the point of AI if it can't propose ideas we couldn't think up ourselves? Once tech companies realize that users see certain kinds of unpredictability as a feature of AI, that will be the end of any hope of predictability.


Kentaro Toyama

W. K. Kellogg Professor, Univ. of Michigan School of Information; author, Geek Heresy; fellow, Dalai Lama Center for Ethics & Transformative Values, MIT.