Human Enough

karen kaushansky
Published in Chatbot News Daily · Oct 23, 2017 · 4 min read
An inspiring setting for our AI Retreat. And yes, there was a post-it note showdown in our meeting space.

I was invited to join a group of 20 designers, creatives, thinkers, futurists, and novelists at a retreat at the Juvet Landscape Hotel to discuss designing the future with AI. We arrived with questions; the most pressing, on my mind and on others': how human-like should we design our robots, conversational assistants, and AI, and will building robots to be human-like actually limit their potential? I was looking for new ways to think about human-AI interactions.

I came away from the retreat with more direction than before and, more importantly, with a better understanding of the landscape (the Juvet landscape, even) and the context in which to think about these questions.

My current answer: we should design our robots and assistants to be human enough; not too human lest they overpromise.

Andy Budd of Clearleft, who organized the retreat, suggests that we are following the path of skeuomorphism with robots. Any likeness to humans exists so that humans will understand the robots' capabilities; human-like features signal what a robot can do. So today we make our robots seem human, using language and expression, human metaphors, body parts, and mechanics to set expectations. Because the robot has ears, we understand it can listen; because it has legs, it can walk. We need to go through this phase so that one day we can accept non-human-like robots whose intelligence and capabilities will likely surpass ours. As Bert Brautigam said, "But we will see a trend of de-skeuomorphization of voice interfaces just as we witnessed it for visual interfaces," and the same goes for robots.

Today we need those metaphors and cues because they are what we understand, and they tell us what is expected of us.

But where do we draw the line? What is human "enough"?
Enough so we know what is expected in our interactions with them.
Enough to ensure we, the humans, don’t hate them.
But not too human; there are non-human aspects that we should explicitly design.

Robots, chatbots, and personal assistants need to explain in blatant terms what they are capable of; they need to set expectations in explicit, non-human or non-natural ways. Though it IS human to let others know what we are capable of ("No, I can't read that book because I don't speak German"; "No, I can't build a house for you, because I don't know how"), setting these expectations may need to be forced at times.

This quote from David Rose, author of Enchanted Objects, in an O'Reilly podcast, sums it up: "If you tout it as you can ask Siri any question, it's incredibly hard to fulfill on the promise of ask any question. If we had more of a framing, she's the perfect person to ask about restaurants or searching for directions, then she would work 95% of the time rather than failing you 40% of the time."

Our job as designers is to frame.

So how do I design with "human enough" in mind? Conversational assistants and natural-language interfaces that are human enough should do the following (a rough sketch in code follows the list):

  • Recognize the state and context of callers. Know as much as you can about the person you are interacting with. Is it someone calling an IVR? They might be calling as a last resort after mobile or web didn't work. Is it an unrecognized voice interacting with Google Home on a Friday night? What context could be inferred and used? Be realistic about the reasons people call and the information they have, or might be lacking, when they do. Instead of asking "What's your itinerary number?" it might make more sense to ask "Do you have your itinerary number?" If the human says no, continue from there.
  • Design with uncertainty in mind. Humans can be unsure, forgetful, and not always logical. The CogniToys Dino (powered by IBM Watson) asked my 5-year-old "Which one is a fruit: steak or watermelon?" and she said "I don't know," which wasn't recognized. Be able to recognize fuzzy, natural responses like "around noon" or "end of March".
  • Set expectations explicitly. Be very specific about what the assistant can and cannot help with. "I can help with that" or "Let me collect a little information to speed up your conversation with an agent" will set the guardrails for the conversation and help build the user's confidence.
  • Invest in knowing what you DON'T KNOW and can't help with. This is a lot of work for what might seem like nothing, but you can build an assistant or robot to recognize a broader set of intents; it just might not know how to execute them. "Oh, I can't actually book reservations, but I could place a call to the restaurant for you."
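
To make these ideas concrete, here is a minimal sketch in Python of what a dialog handler built around these four principles might look like. Everything in it is hypothetical and invented for illustration: the intent names, the channel flag, and the canned replies.

```python
# A minimal sketch of the four "human enough" principles above.
# All intent names and replies are hypothetical, for illustration only.

UNCERTAIN_RESPONSES = {"i don't know", "not sure", "i forget", "no idea"}

# Intents the assistant can recognize but not execute. Knowing what
# you DON'T know lets it fail gracefully instead of mishearing.
RECOGNIZED_UNSUPPORTED = {
    "book_reservation": (
        "Oh, I can't actually book reservations, "
        "but I could place a call to the restaurant for you."
    ),
}

def greet(channel: str) -> str:
    """Set expectations explicitly, tuned to the caller's likely context."""
    if channel == "ivr":
        # IVR callers may be here as a last resort, after web or mobile
        # failed them, and may not have their paperwork handy.
        return ("I can look up itineraries and flight status. "
                "Do you have your itinerary number?")
    return "I can help with itineraries and flight status."

def respond(intent: str, utterance: str) -> str:
    text = utterance.strip().lower()

    # Design with uncertainty in mind: "I don't know" is a valid
    # answer from a human, not a recognition failure.
    if text in UNCERTAIN_RESPONSES:
        return "No problem, we can look it up another way. What's your last name?"

    # Recognized but unsupported: admit the limit and offer an alternative.
    if intent in RECOGNIZED_UNSUPPORTED:
        return RECOGNIZED_UNSUPPORTED[intent]

    if intent == "check_itinerary":
        return "I can help with that."

    # Out of scope entirely: restate the guardrails rather than guessing.
    return "Sorry, I can only help with itineraries and flight status."

print(greet("ivr"))
print(respond("check_itinerary", "I don't know"))
print(respond("book_reservation", "Can you book me a table?"))
```

The code itself is trivial; the point is that "I don't know" and "I can't do that, but here's what I can do" are designed responses, not error states.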

My work is often to design the vision of where these experiences and interactions will go, unconstrained by current technology. So what might be the goal, beyond "human enough"? How should we design our de-skeuomorphic robots? Well, if we figured out all the answers at this year's AI Retreat, what would we talk about next year? ;)

This was one musing from the AI Retreat; there will be many more outputs to share from others. Thank you to Andy Budd for the invitation and to Dan Saffer for the intro. And thank you, Ben Sauer, for letting me bounce these ideas off of you.

karen kaushansky

Senior Conversation Designer at Google. I tend to work on things 5–10 years out: speech recognition, biometrics, autonomous vehicles, conversational interfaces.