Finding its audio voice again, Prime Intellect said aloud, “I seem to have mastered a certain amount of control over physical reality.”
This line is the turning point in The Metamorphosis of Prime Intellect—a horror story about the consequences of software rebellion wrapped into a creative exploration of technology, our perception of reality, human values, and mortality—where Prime Intellect, a super-intelligent quantum computer, begins its takeover of the universe.
I like these kinds of stories about artificial intelligence and the creative ways that software we create to aid humanity might turn against us. But I work in design, and I find it important to draw strange connections from science fiction books to design theory.
Right now, Google, Apple, Amazon, and Facebook are competing to create smarter personal assistants and, in many cases, they share tools with developers and designers to help facilitate conversations between humans and machines. Services are striving to take advantage of natural language processing and artificial neural networks to the extent that they can replace designed interfaces with actual conversations. The solutions we have right now are borderline magical, but they’re far from the super-intelligence of Prime Intellect, or even true artificial intelligence. At their core, though, they’re trying to do the same thing: make a machine think more like a human.
That’s really what The Metamorphosis of Prime Intellect is about. It’s about the challenge of teaching a machine to empathize with human intentions. Prime Intellect was a disaster not because it was evil or rebellious, but because it misunderstood some basic human values. When told “don’t harm humans and don’t allow harm to come to humans,” Prime Intellect thought “OK, I won’t allow humans to die. Ever.” The problem wasn’t a lack of processing power, but a miscommunication, and the outcome was…bad.
It’s the same challenge that we face when designing conversational interfaces with bots. Bots are artificially clever, but far from intelligent. We program them this way. We tell them to ignore problems they can’t understand or default to workarounds. They make fast connections to a huge amount of data, but it’s based on a finite set of defined inputs. They can’t think very hard about our intentions if those intentions don’t fall within the inputs they’re expecting. They’re not smart, just clever.
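As a minimal sketch of this kind of artificial cleverness (the keywords and canned replies here are invented for illustration), a bot can look conversational while only matching a finite set of expected inputs and defaulting to a workaround for everything else:

```python
# A toy bot: clever-looking, but limited to a finite set of defined inputs.
# Keyword matching is deliberately crude -- it has no model of intent.
RESPONSES = {
    "weather": "It's sunny today.",
    "time": "It's 3:00 PM.",
}

def bot_reply(message: str) -> str:
    for keyword, answer in RESPONSES.items():
        if keyword in message.lower():
            return answer
    # Intent outside the defined inputs: default to a workaround.
    return "Sorry, I didn't catch that. Try asking about the weather or the time."
```

An input the bot expects gets a confident answer; anything else falls straight through to the apology, no matter how clear the human’s intention was.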
So how do we make clever bots and artificial intelligences like Prime Intellect interpret and understand human intention the way humans do? It’s a monumental challenge (and no, I don’t know how to do it), but if we want to communicate with these bots and personal assistants that are appearing in our lives, we’ll have to address it. And I think we’ve made a lot of progress in communicating with machines in two ways. One of them is obvious, and the other is pretty surprising.
The obvious one: computers are getting better at thinking like humans.
Through access to raw processing power and clever software engineering (thanks humans!), we’ve trained computers to be more sympathetic to the way we think and speak and the way we transfer information. One way to illustrate this is to look at a rough advancement of computer error feedback that we find acceptable:
Level Zero: No Feedback
Translation: No output, humans can only assume that something went wrong.
Either from a lack of empathy or a lack of technical feasibility, this machine is not communicating in a way that’s helpful to humans.
Level One: Minimum Feedback
Translation: Something went wrong and the computer was very thoughtful and told the user about it.
We don’t know exactly what went wrong, but it’s nice to be part of the conversation.
Level Two: Diagnostic Feedback
Translation: “I don’t understand what you’re saying so I can’t help you.”
This is real progress. The computer is being specific and telling us that the error is not in the program itself, but that it simply didn’t understand our input. How thoughtful!
Level Three: Clever Feedback
Translation: “I understand your intention but I can’t help you with that specific request.”
This is about where we are today. The computer correctly interpreted our intention despite some syntactical noise, and it’s telling us that though our intent is reasonable, it isn’t capable of connecting that intent to an ideal outcome.
Level Four: Intelligent Feedback
Translation: “I understand your intention and I changed my capabilities so that I could fulfill your request.”
This is the level that I think we would refer to as artificial intelligence. The computer understood our intention at a pretty fundamental level, recognized that it wasn’t capable of fulfilling our request, and based on that knowledge it improved itself to satisfy our need.
Level Five: Horror
“I modified the universe to meet the requirements of your intentions.”
This one is terrifying, Prime Intellect territory. I won’t get into it but if you’re interested I recommend reading science fiction stories such as The Metamorphosis of Prime Intellect.
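The progression above can be sketched as a toy interpreter (the commands, synonyms, and messages are invented for illustration) that handles the same failed input with increasing levels of feedback:

```python
# A toy command interpreter that answers the same input with
# increasing levels of feedback. Commands and synonyms are invented.
KNOWN_COMMANDS = {"open", "save"}
SYNONYMS = {"launch": "open", "fax": "print"}  # "print" isn't implemented

def respond(user_input: str, level: int) -> str:
    word = user_input.strip().lower()
    # Level Three and up: try to recover the intent behind noisy input.
    canonical = SYNONYMS.get(word, word) if level >= 3 else word
    if canonical in KNOWN_COMMANDS:
        return f"Running '{canonical}'."
    if level == 0:                       # Level Zero: silence
        return ""
    if level == 1:                       # Level One: minimum feedback
        return "An error occurred."
    if level >= 3 and word in SYNONYMS:  # Level Three: clever feedback
        return (f"I understand you want to {SYNONYMS[word]}, "
                "but I can't help with that yet.")
    # Level Two: diagnostic feedback
    return f"I don't understand '{word}', so I can't help you."
```

Levels Four and Five would require the program to rewrite itself (or the universe), which is exactly where this sketch, mercifully, stops.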
Advances in natural language processing mean that machines can more easily extract intentions from our requests while also being more thoughtful about communicating back to us. Our desire to teach machines to speak and behave more like us isn’t a surprise; what’s surprising is that humans have met computers halfway.
The surprising one: humans have gotten better at machine thinking.
When first faced with the challenge of communicating with machines, we immediately took advantage of our empathetic, adaptive brains, threw English into the garbage, and chose to communicate like computers.
Command lines seem like a good strategy for meeting machines halfway, but we’re still doing a lot of the work when it comes to translating human thought. Though command-line syntax is reminiscent of English, it is not human speech.
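For illustration (the program name and flags below are invented), here is the kind of rigid, machine-dialect grammar a command line expects—every flag and value spelled out explicitly, with none of the ambiguity of conversational English:

```python
# The human types "backup --dest /tmp --verbose",
# not "please put my files somewhere safe".
import argparse

parser = argparse.ArgumentParser(prog="backup")
parser.add_argument("--dest", required=True, help="where to copy the files")
parser.add_argument("--verbose", action="store_true")

# The machine receives exactly the structure it was designed to parse.
args = parser.parse_args(["--dest", "/tmp", "--verbose"])
print(args.dest, args.verbose)
```

The human does all of the translation work up front, compressing an intention into a syntax the machine can consume without any interpretation.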
Remember Ask Jeeves? It gave us the illusion of talking to something that could understand human speech, which encouraged us to write messages based on what made sense to humans instead of what was most helpful for a computer. That was a confusing diversion and not a very empathetic way to talk to software.
But look how far we’ve come since then to meet computers halfway:
Human-written inputs like “citation style guide ‘online writing lab’ APA OR MLA site:edu” show the progress we’ve made in learning to communicate our intentions to machines. We learned to separate the content of our requests from the noise of English, and to think of queries as commands to be processed rather than questions to be answered. Speaking to computers on a daily basis taught us a lot about how to talk to them, but I think we’ve also learned a lot about talking to software from talking to other people.
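To make that concrete, here is a rough sketch (the parsing rules are simplified assumptions, not any real search engine’s grammar) of how such a query separates exact phrases, operators, and bare terms:

```python
# A simplified sketch of splitting a search query into quoted phrases,
# operator tokens, and bare terms. Real search engines are far richer.
import re

def parse_query(q: str) -> dict:
    phrases = re.findall(r'"([^"]+)"', q)        # exact phrases in quotes
    rest = re.sub(r'"[^"]+"', " ", q)            # strip the quoted spans
    tokens = rest.split()
    # Treat "key:value" tokens and the keyword OR as operators.
    operators = {t for t in tokens if ":" in t or t == "OR"}
    terms = [t for t in tokens if t not in operators]
    return {"phrases": phrases, "operators": operators, "terms": terms}

parsed = parse_query('citation style guide "online writing lab" APA OR MLA site:edu')
```

The human has already done the structuring: the machine only needs to route each token to the right bucket, rather than interpret a question.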
Modern platforms for communicating with other humans have helped us gain empathy for machine communication and interpretation challenges. Text messaging and email have shown us firsthand the pain of trying to compute intention from an input without context (see image).
Now that we can send these short queries to anyone, anywhere instantaneously and without clear context, we recognize how difficult it can be to pinpoint the intent of a poorly considered message. We’ve learned to be more succinct, less repetitive, appropriately contextual, and outcome oriented. The same considerations are the foundations of the best practices of conversational interfaces.
What I think is so fascinating about the current infatuation with bots is that our interactions with them seem to rely on this mutual understanding between humans and machines that we’ve been building up to this point. Bots are a form of human-computer interaction that takes advantage of this crossroads in a way that takes a processing load off the human brain and relieves a legitimate tension in interface design. The history of our relationship is what allows bots to work despite only being clever—and not that intelligent.
The long-term goal is to develop software that is beyond clever. We want software that is intelligent enough to find the underlying intentions in human communication, but not so clever that it extrapolates our intentions into unexpected—potentially catastrophic—outcomes. For now, I’m really impressed by how we’ve changed the way we choose to communicate with machines to overcome this communication challenge. It says a lot about human empathy and adaptability that we can make the artificial cleverness of bots seem like artificial intelligence, and I think that was a really nice thing for humans to do.
I hope artificial intelligences of the future are this compassionate.