Designing Collaborative AI
Written by Ben Reinhardt and Belmer Negrillo
Belmer and I gave a presentation, Why the Future of AI Depends on Human-Computer Collaboration, at the AI Summit Conference last week. In short, it was an introduction to ways of thinking about how to design systems where people and AI work together. Here’s our talk in blog form. And just as with the presentation: feel free to ask questions!
Introduction
Most of the perceptions and fears about AI come from the expectation that AI systems will simply replace people.
Another paradigm is one in which humans and computers collaborate, taking on different roles based on what each does best. Think of it as a symbiosis. I’m always a fan of the term Intelligence Augmentation (IA).
As with collaboration between humans, human-computer collaboration requires the right conditions and shared goals to succeed. And that’s where design comes in.
The Power of Teams
“The human does the strategy, the machine does the tactics, and when you put them together you get a world-beater.”
— Sandy Pentland, Toshiba Professor of Media Arts and Sciences, MIT Media Lab
“Humans + computers + a better process for working with algorithms will yield better results than either the most talented humans or the most advanced algorithms working in isolation.”
— Paraphrasing Chess Grandmaster Garry Kasparov after a “freestyle chess” competition, in which teams can comprise just computers, just people, or combinations of both.
Garry Kasparov’s defeat by Deep Blue in 1997 signaled that computers had outstripped people at chess. However, in freestyle competition, “centaur teams” (combined human+computer teams) consistently beat fully automated teams. Note the emphasis on the process for working with algorithms: it’s not enough to put a smart system and a person in the same place with any old interface. It requires design that emphasizes the strengths of both teammates.
Consider the Evolutionary Path
Here’s a quick overview of the evolution of software from dumb tools to complicated agents, seen through the lens of text editors.
In the beginning, there was the raw text editor. With this tool, all the tactical work is done by the person, who decides every aspect of formatting: exactly where the cursor goes, when to act, and what mode the editor is in.
With the introduction of the GUI, the computer started to take care of a few tactical details: what mode you’re in, how to insert tabs to achieve a layout. The user interface began to offer combinations of actions based on common tasks, and some actions could be performed more naturally, like jumping to a new line.
With the word processor, the computer moved beyond helping with content input. It started to observe the content itself, correcting spelling by matching your words against a dictionary. Mismatches appear as suggestions for correction, along with easy access to the meaning of individual words.
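To make that mechanic concrete, here’s a minimal sketch of dictionary-based spell checking of the sort a word processor performs. The tiny word list is a stand-in for a real dictionary, and the use of Python’s difflib for fuzzy matching is our simplification, not how any particular product works:

```python
import difflib

# A toy dictionary; a real word processor ships one with ~100,000 entries.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def check_spelling(text):
    """Flag words missing from the dictionary and suggest close matches."""
    suggestions = {}
    for word in text.lower().split():
        if word not in DICTIONARY:
            # difflib ranks dictionary entries by similarity to the typo.
            suggestions[word] = difflib.get_close_matches(word, DICTIONARY, n=3)
    return suggestions

print(check_spelling("the quik brown foxx"))
# -> {'quik': ['quick'], 'foxx': ['fox']}
```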
Grammarly is a modern tool that goes one step further. Instead of just identifying misspelled words and single-point grammar errors (word-level tactics), it makes suggestions at the phrase level. At the same time, it’s interactive and aims to make you a better writer, with explanations and options.
Recently, imagination has run wild over what “AI” can achieve, creating a gap between expectations and reality. Good design will help bridge that gap. It manages expectations and creates systems where people can seamlessly pick up slack from the AI, making it seem much smarter than it is.
Find the Right Design Pattern
Let’s talk about tactics for designing systems for AI collaboration. The first piece to consider is the design pattern of the interaction. There are three major patterns (sketched in code below):
Recommender System — This is the oldest and most reliable design pattern. It also has a long feedback loop and is useful in limited situations. Recommender systems are currently the most common collaborative design pattern. Google search, Amazon product recommendations, and Netflix movie recommendations are all recommender systems.
Decision Maker — This is the first design pattern that normally comes to mind when you think of “AI.” It’s also the one with the highest variance: if it works, it’s incredible; if it doesn’t, it’s terrible. Like playing “mother may I” with a machine. Examples of single-option AI include Siri, “I’m Feeling Lucky” on Google, and translation engines. In each case you literally have to start the process over if you get a result you dislike.
Coach/Collaborate — This is the most tightly collaborative design pattern. It’s also the hardest to get right, but when it works it’s magical. The key is that the closer the feedback loop gets to the speed of thought, the more the AI feels like an extension of your own mind. There are few examples of the coach/collaborate design pattern in the wild. One is the science-fiction sentence-completer created by Robin Sloan; adaptive education programs are another.
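One way to see the difference between the three patterns is in the shape of their interfaces. Here’s a deliberately simplified Python sketch (the function names and the pluggable score, critique, and accept callables are hypothetical, not from any real product): a recommender ranks many options, a decision maker commits to one, and a coach loops suggestions and explanations through the person.

```python
def recommend(query, candidates, score):
    """Recommender: rank many options; the person makes the final choice."""
    return sorted(candidates, key=lambda c: score(query, c), reverse=True)

def decide(query, candidates, score):
    """Decision maker: commit to a single answer; a bad one forces the
    person to start the whole process over."""
    return max(candidates, key=lambda c: score(query, c))

def coach(draft, critique, accept):
    """Coach/collaborator: propose edits *with* reasons, apply only the
    ones the person accepts, and hand back a revised draft to iterate on."""
    for suggestion, reason in critique(draft):
        if accept(suggestion, reason):  # the person stays in the loop
            draft = suggestion
    return draft
```

Note that the recommender and decision maker take the same inputs; the difference is how much of the final choice stays with the person, and how cheap it is to recover from a bad output.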
Defining Purpose
The next question to ask is “what is the system’s purpose?” Is it to make you do less of something, like scheduling or driving, or to make you better at something, like writing or running? There’s no right answer.
Identify the Right Level of Automation
A useful framework for designing collaborative systems is to think about what the ideal level of automation should be. Here is a more in-depth explanation. In short: write out what each level of automation looks like for your specific problem, then run a real or mental experiment to figure out which level is best. The counterintuitive piece is that higher levels are not necessarily better (see the example of centaur chess). Start by writing down what level zero looks like, then what level five looks like, to bookend the problem.
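As an illustration, here’s what writing out the levels might look like for a writing assistant. The wording of each level is our assumption for this example, not a standard scale:

```python
# Illustrative bookends and intermediate levels for a writing assistant.
AUTOMATION_LEVELS = {
    0: "Raw text editor: the person performs every tactical action",
    1: "Spell check: the machine flags errors, the person fixes them",
    2: "Grammarly-style coach: phrase-level suggestions with explanations",
    3: "Co-writing: the machine drafts passages, the person edits them",
    4: "Supervised drafting: the machine writes, the person approves",
    5: "Full automation: the machine publishes with no person in the loop",
}

for level, description in sorted(AUTOMATION_LEVELS.items()):
    print(f"Level {level}: {description}")
```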
How to Use the Framework
Can you draw a box around the system that doesn’t involve a person?
Let’s go back to the example of writing. You could frame the system in two different ways. One involves drawing a box around both the editor and the person using it, saying “this is a system for making you great at composing.” The other involves drawing a box only around the editor, saying “this is a system for creating written text.” In the former case, with a person inside the system, it’s a collaborative system and the ideal level of autonomy is likely 2–4. In the latter, without a person, the ideal level of autonomy is likely 5, and a collaborative system won’t help you solve the problem.
Is there a component of “creativity”?
“Creativity” involves doing something that hasn’t been completely systematized, which means it can’t be fully encoded in a cost function; and for now (this could of course change at any minute), computers have no “taste.” While they can be trained to create things we normally think of as creative, like pictures and writing, they still need feedback, just like a novice creator. Thus, if your problem involves a component of “creativity,” you want to shoot for one of the autonomy levels that keeps a person in the feedback loop.
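In code, that means the “cost function” lives in the person’s head. A minimal sketch, assuming hypothetical generate and human_rating callables: generate candidates, let the person’s ratings stand in for taste, and seed the next round with what they liked best.

```python
def creative_loop(generate, human_rating, rounds=3, n_candidates=5):
    """Use a person's ratings as the missing 'taste': generate candidates,
    keep the one the person likes best, and build on it next round."""
    best = None
    for _ in range(rounds):
        # generate(None) produces from scratch; generate(best) riffs on it.
        candidates = [generate(best) for _ in range(n_candidates)]
        best = max(candidates, key=human_rating)  # the human is the judge
    return best
```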
How complex is the system?
Complex systems have many inputs, nonlinearities, and unforeseen interactions, which makes it hard to capture their full behavior with current machine learning algorithms. In these cases, performance often goes way up if you put a feedback loop between a person and the algorithm.
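One common shape for that feedback loop (our sketch, not any specific system): let the algorithm handle the cases it’s confident about, route the rest to a person, and fold the person’s answers back in as training data.

```python
def predict_with_human_fallback(model, example, person, training_data,
                                threshold=0.8):
    """Answer confident cases automatically; escalate uncertain ones to a
    person and record their answer as new training data."""
    label, confidence = model(example)
    if confidence >= threshold:
        return label
    label = person(example)                 # human handles the messy cases
    training_data.append((example, label))  # feedback loop: retrain on these
    return label
```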
An Exercise
The design exercise is to take an AI task and sketch out what each level of automation looks like. Here’s the worksheet! https://www.dropbox.com/s/8dj6gje4o9m6ty9/AI%20Summit%20Worksheet.pdf?dl=0
Takeaways
- Collaborative AI systems can be more powerful than their components
- Collaborative AI systems are tricky to design
- Feedback loops are important!
- Break the problem down into levels of automation
- Full automation is not always desirable