The Tay fiasco offers one clear lesson: artificial intelligence needs to be designed. Just a couple of weeks ago, Microsoft let a “blank mind” loose into the wilds of Twitter, intending the bot’s behavior to be built from its conversations with real people. Humans take years to become angry, racist, sexist, and pretty much everything we do not want our society to encourage. For Tay, it happened overnight. But is this surprising? Judging by how the bot was developed, its programmers put very little effort into deciding how they wanted Tay to conduct itself. The problem only becomes more pressing as Facebook launches its own platform for chatbots tomorrow. Soon everyone will be able to easily (or unwittingly) talk to a bot, encouraging the development of even more chatbots.
The process of developing any product and experience needs to be rooted in its users and in the team making it. As Ben Brown said, “It Tays a Village.” Imagine if Tay had actually been designed for the #gamergate audience that engaged with it most. Could we have avoided revealing our worst attitudes? Could we have discouraged our worst behaviors?
Artificial intelligence is simply a mechanism for achieving human goals. Determining what those goals are, and how we achieve them, is an act of design. (Listen to Mike Monteiro’s talk on how designers destroy the world.) For all the wonders and ability of machine learning and AI advances, we forget to consider how design decisions are being made.
We choose for our bots to almost never give up, even when a human would have given up long before. We choose for our bots to slow down, to respect human rules that might not make sense. We choose for our bots to look like us, so we feel comfortable around them.
But when we discuss the design of artificial intelligence, many are stuck asking how “human” we should make these bots. Do we use realistic language? Should they mimic human personalities, mimic non-human personalities, or have none? Lots of personality or very little? Servants or peers? Stay safe or say something? Yes, these are critically important questions, but they are usually addressed through the human process of designing and prototyping. Curious, no? So often we become fixated on the byproduct that we forget humans are the ones making these bots. And too often, bot makers forget that it is humans who will be using and working with these bots.
One notable example copied historical records to recreate a historical figure’s personality: that of Adolf Hitler. The Hitler chatbot, however, was never tested for how well it worked with users, or whether it ever accomplished its goal of helping people understand Hitler from Hitler’s own ‘mouth’. How does this bot bring humans any closer to a future that we want?
The most ambitious chatbot makers used machine-learning techniques to process the British National Corpus, a collection of text samples amounting to over 100 million words, extracted from 4,124 modern British English texts of all kinds, both spoken and written. When users found “some responses not just rude but incoherent”, the authors insisted that their creation must “be seen to be ‘useful’” before people talk to it, in order for it “to be appreciated”.
Some bot makers are starting to change this. They are hiring designers, or more precisely, they are looking for human communicators. See this recent job posting for X.Ai:
Actors, writers, storytellers, customer service experts, concierges, poets. The people who are really, really good at empathizing with humans. Tell me this doesn’t sound a little ironic!
After attending Hyper Island’s Experience Design master’s program, I have learned that outsourcing design to designers is bullshit. A designer’s job is to facilitate the design process for teams. We are all creative, intelligent, and capable. Some teams just need help discovering their collective power to create.
This was the subject of my master’s thesis at Hyper Island: what human-centered design methods can help people collaborate to create better bots? By using more collaborative and human-centric methods in our design process, we can create more empathetic experiences for our users (in other words, humans). Below are three ideas. Over the next couple of weeks, I’ll discuss each at greater length. But feel free to read my thesis now if you want to jump ahead! http://botdesign.ai/lets-chat-beer-masters-thesis
1. Bot Personas
An exercise where teams fill out what they think the bot is broadly doing, thinking, and feeling. Additionally, teams clarify the bot’s goal and the user’s goal. The persona can be filled out to represent the bot in its entirety, or for special situations and extreme use cases.
Basically, it’s a user persona, flipped to describe a bot. Not quite earth-shattering. However, the process of a team creating the persona is where the magic happens. Rather than a single person dictating their thoughts, teams come together to contribute their collective knowledge of what users need and want. What’s particularly magical about doing this for a bot is that a team can come together and collaboratively start illustrating the personality of an artificial intelligence! It’s that easy.
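As a sketch of what a team's output might look like in code (all field names and the example bot here are my own illustrations, not taken from the thesis), a bot persona boils down to a simple record of goals plus the doing/thinking/feeling columns:

```python
from dataclasses import dataclass, field

@dataclass
class BotPersona:
    """A team's shared picture of the bot. Fields mirror the persona
    exercise: the bot's goal, the user's goal, and what the bot is
    broadly doing, thinking, and feeling. Names are illustrative."""
    name: str
    bot_goal: str
    user_goal: str
    doing: list = field(default_factory=list)     # observable behaviors
    thinking: list = field(default_factory=list)  # internal assumptions
    feeling: list = field(default_factory=list)   # tone and personality cues

    def summary(self) -> str:
        return f"{self.name}: bot wants '{self.bot_goal}'; user wants '{self.user_goal}'"

# A hypothetical persona a team might fill out for a weather bot
weather_bot = BotPersona(
    name="Sunny",
    bot_goal="help the user plan around the weather",
    user_goal="decide what to wear and whether to carry an umbrella",
    doing=["answers forecast questions", "suggests clothing"],
    thinking=["the user cares about today, not climatology"],
    feeling=["cheerful", "brief"],
)
print(weather_bot.summary())
```

The point of the exercise is the conversation the team has while filling in those fields, not the artifact itself, but writing it down this concretely makes gaps (an empty "feeling" column, conflicting goals) easy to spot.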
2. Improv Conversations
Teams act out conversations a chatbot might have, improv style. One person takes the role of the user, another the role of the chatbot, and a third takes notes. The group then experiments with conversations that humans and bots might have on a given topic. Pretty simple, no? The challenge in conversation design is that it’s easy to get locked into one common-sense conversation rather than really exploring the divergent options. A discussion about the weather can easily be “How’s the weather?” But there are many others: “Do I need an umbrella today?” “How should I dress?” “Is there anything that might delay my plane?” “What’s the surf report?” So a method for divergence should encourage teams to experiment with different perspectives and rationales. Teams also need a way to quickly test and validate each possibility in as realistic a situation as possible. Lastly, having real-life conversations, rather than keeping our heads in our computers, helps us better empathize with our users.
3. Conversation Mapping
This method uses Post-its to map out the conversation, mixing User Story Mapping with BotKit. Teams collaboratively build decision trees of their bot’s conversations, mapping out everything they want the bot to achieve. Each part of the conversational flow is illustrated with Post-it notes showing what the bot “Hears,” “Says,” and “Asks.”
Now, if you’ve seen some of the visual chatbot-programming tools out there (à la Pullstring, Twinery, Chatmapper, or JointJS), or if you’re well versed in decision trees in general, this will feel familiar. The important change is that we are using Post-it notes, the most democratic software available. Anyone on your team, at any level of experience, can participate and collaborate in the concrete creation of a chatbot.
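Once the Post-its come off the wall, the map translates almost directly into a decision tree. Here is a minimal sketch of that translation, using the same Hears/Says/Asks categories; the node structure and the example weather exchange are my own illustration, not a fragment of any of the tools named above:

```python
# Each node holds what the bot Says or Asks; edges are keyed by
# what the bot Hears from the user, just like arrows between Post-its.

class Node:
    def __init__(self, says=None, asks=None):
        self.says = says   # a statement the bot makes
        self.asks = asks   # a question the bot poses
        self.hears = {}    # user reply -> next Node

    def on_hears(self, reply, next_node):
        self.hears[reply] = next_node
        return next_node

# Transcribe a tiny weather conversation map
root = Node(asks="How can I help with the weather?")
root.on_hears("umbrella", Node(says="Yes, bring an umbrella: rain at 3pm."))
root.on_hears("outfit", Node(says="It's 15C, so a light jacket should do."))

def walk(node, replies):
    """Follow one path through the map, printing each bot turn."""
    while node:
        if node.says:
            print("Bot says:", node.says)
        if node.asks:
            print("Bot asks:", node.asks)
        node = node.hears.get(replies.pop(0)) if replies else None

walk(root, ["umbrella"])
```

The Post-it wall and this structure carry the same information; the wall is just the version everyone on the team can edit at once.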
In developing and testing these fairly simple methods in workshops, I was exposed to a ton of new areas to explore, including personality design, empathizing with services and users simultaneously, techniques for visualizing conversation, novel techniques for structuring conversation, differences between desired and practical conversations, challenges unique to designing complex conversations, and ethics for chatbot behavior. I don’t think I would ever have been exposed to these topics had I not tried to address the problems of making chatbots through design and collaboration.
My dream is that bot designers can share and learn methods for figuring out which kinds of bots are right to make, and how to make those bots well. And make weird shit, too. Let me know what your problems are, what you have learned, and how I can help.