How to make your robot appear smarter than it is.

At least have the decency to feel bad about failing.

The following is my takeaway from a SXSW Interactive panel titled “One Robot Doesn’t Fit All.”

The panelists, who are all much smarter and better-looking than I am, were Leila Takayama, Wendy Ju, Thavidu Ranatunga (@FellowRobots), and Nuri Kim.

My interpretation of their panel is somewhat colored by my own interests and is, in all likelihood, not the truest or most perfect recollection of what they had to say. Should any of those poor souls come along and read my pale reflection, I can only offer my sincerest apologies.


There is a price to social incompetence. For humans, that price can come in the form of a lack of critical relationships, a lack of access to jobs or housing, a lack of freedom — I haven’t run any numbers but I’m willing to bet high social incompetence is correlated strongly with higher incarceration rates.

Basically, if you can’t play nice with others, nobody wants to play with you.

The same, interestingly, is true of robots. And that’s the basic takeaway I got from the panel.

The name was a bit off, because while the panelists did address the concept of general-purpose vs. unitasker robots (robots that attempt to do everything vs. robots that perform only a couple of functions very well), the majority of the time was spent discussing how to make robots acceptable to people. And the key, it turns out, isn’t so much in how they look (having a face, being bipedal, having eyes, basically appearing in some way to be human) but in how they act.

They started with this: the Amazon Echo.

Eventually I will buy things for you that you’ll want with an 80% predictive success rate. Enjoy your Malcolm Gladwell books, your Firefly DVDs, your video games, and your fake plastic vomit, human.

And the question that came up was: is this a robot? In many ways it is. It performs a function that a human needs, and it can act autonomously. But the panelists felt there were issues with two key aspects: performance and anticipation. You don’t know what this thing is doing; it just sits there. It doesn’t look to people for cues or feedback, and it doesn’t give you any cue that it is about to act. These are extremely problematic things.
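To make that concrete (this is my own sketch, not anything the panel showed), “anticipation” in software could be as simple as announcing intent and leaving a short veto window before acting. Every name below is hypothetical.

```python
# Hypothetical sketch: a device that cues what it is about to do and gives
# the human a short window to object before it acts. Not a real product API.

import time


class AnticipatoryAssistant:
    def __init__(self, announce, listen_for_objection, veto_window_s=3.0):
        self.announce = announce                            # e.g. speech, a chime, a light pattern
        self.listen_for_objection = listen_for_objection    # returns True if the human objects
        self.veto_window_s = veto_window_s

    def act(self, description, action):
        """Announce intent, wait for a veto, then act (or visibly back off)."""
        self.announce(f"About to {description}.")
        deadline = time.monotonic() + self.veto_window_s
        while time.monotonic() < deadline:
            if self.listen_for_objection():
                self.announce(f"Okay, not doing that: {description}.")
                return None
            time.sleep(0.1)
        return action()


# Usage (stand-ins): the assistant cues the purchase instead of silently placing it.
# assistant = AnticipatoryAssistant(print, lambda: False)
# assistant.act("order more fake plastic vomit", place_order)
```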

The panel was somewhat divided on the concept, although I don’t remember who came down where, beyond Ju, who really, really, really didn’t like the machine.

But if there WAS a consensus I think it would be this: a very powerful AI is not sufficient to create a good robot that humans will trust and develop a rapport with.


So instead they showed us a garbage can on wheels.

This is how you get Daleks.

This trash can wasn’t actually a robot: a human controller stood out of the field of view and drove it around to people to see how they would react. Here, in a nutshell, is what the researchers found:

  • People expected the robot to be pleased with the garbage they gave it, and were happiest when it performed in a way that showed it was happy.
  • People anthropomorphized the robot, and thought it wanted to “eat” trash.
  • People that wanted the robot to come near them would whistle at it, or hold out garbage like you would a bone to a dog.
  • People that did not want the robot to come near would avoid looking at it, or they would shrink inward and turn their body away.
  • Nobody ever said “come here, robot, and take my garbage.”
  • People were surprised and annoyed when the robot came near them when they clearly didn’t have any garbage.
  • In almost every case, people imbued the robot with far greater intelligence than a trash can deserves.

So what did I learn from this? People like robots that have clear purposes. I would take that a step further and say people like THINGS that have a clear purpose. I went into this panel trying to think about how to relate what I learned to the work we do, and that, I think, is the clearest lesson. Humans like to know what a thing is for, and they like to know how to operate it. Robots are no different: their affordances must be highly discoverable. A mobile trash can has pretty specific affordances, and if you wave trash at it, you expect that to work. If it doesn’t, you blame the machine. And you should blame the machine, or, really, the designer who didn’t realize people would expect a trash can to DESIRE trash.

It should also be noted that there were no “human” elements to this trash can. It didn’t have a face or a voice or a head, it only had four wheels and a cylinder for a body. But it could perform. It could spin or dance, it could come or go, it could hesitate or move boldly. It had an attitude that could be understood by a person.


Another robot they looked at was Shimi, a robot that listens to music and likes the same kind of music you like:

If the music Shimi likes is bad, you have nobody to blame but yourself. He doesn’t know better.

Shimi is basically DJ Roomba.

DJ Roomba, on the other hand, has impeccable taste.

But Shimi doesn’t DO anything else. It listens to music, taps its foot, dances with you, nods its head, and picks some tunes. Still, a person can become simpatico with it in a way they probably can’t with Amazon’s Echo, even if Echo has an extremely advanced algorithm driving the choice of music it submits for your approval. Echo can’t dance.

Then they looked at robots that don’t just play music, they make it.

This robot has better rhythm than I do.

This is Shimon. Shimon can play the marimba with a person. Shimon can nod her head to the rhythm. Shimon can look to a person for cues (seriously, watch the video, it’s uncanny). But she also imparts information in the same ways people do. When Shimon nods her head, she’s communicating both enjoyment and rhythm: a beat. Shimon relies on a person to provide that information as well as the key to be played in, but she never explicitly asks for it. She only requires that you start playing; then she performs and plays her part.

Unfortunately for the next piece, I don’t have access to any of the videos or images they used in their presentation. So I’ll just have to draw it.

I can feel you judging me and it hurts.

Look at these two crappily-drawn robots. What is the one on the top trying to do? The same thing as the one on the bottom. But the one on the bottom is “performing” the act of trying to figure out how the door works (it turns out doors are really, really, really hard for robots, knobs especially).

The story one of the panelists (Takayama?) told was that the robotics engineers in her lab would often get very angry with her for crossing through a robot’s visual threshold while it was trying to figure out how a door worked. This, despite the fact that the robot was just sitting there, not apparently thinking or straining or working. But it was: it was crunching an insane amount of spatial data. It just didn’t have the decency to look like it.

Furthermore, whether it succeeded or failed to open the door, it apparently felt the same about both outcomes. A robot that succeeds should perform success. A robot that fails should, and I’m quoting directly here because I thought it was so charming, “have the decency to feel bad about failing.”

Stuff like that matters. Humans don’t like failure. Why should robots?

So the key for robots isn’t necessarily functional competence. Humans don’t think much of a robot just because it can perform a task very well. A slick-looking robot that can open a door with a high degree of success isn’t credited with as much intelligence as a WALL-E lookalike that fails at doors, so long as the WALL-E lookalike at least looks like it was thinking about how to do it and felt “sad” when it failed.

Social Competence is perceived as Functional Competence. And when you are in the business of creating things that respond to human action, as we are, I think that this is a critical piece of design knowledge.

The key to discovering a robot’s affordances is performance. The robot must perform, using body language, in a way that is recognizable to humans. We rely on social cues to read one another and understand intention. Without knowing a machine’s intent, we find it something of a mystery; it makes us wary, or we decide it is broken or stupid.
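Here is how I imagine that translating into code, purely as a sketch and not something the panelists presented: wrap every task in a little performance so the robot visibly “thinks,” celebrates success, and has the decency to feel bad about failing. The gesture names and functions are all invented.

```python
# A minimal sketch, assuming nothing about any real robot platform: run a task
# while making its internal state socially legible.

def perform_task(task, express):
    """Run `task` and perform its progress and outcome.

    `task` is a zero-argument callable that returns True on success.
    `express` maps a named gesture ("thinking", "celebrate", "droop") to
    whatever body language the platform has: LEDs, a head tilt, a sound.
    """
    express("thinking")          # visibly strain: tilt, blink, hum
    try:
        succeeded = task()
    except Exception:
        succeeded = False
    if succeeded:
        express("celebrate")     # perform success: a little spin, a chirp
    else:
        express("droop")         # perform failure: slump, dim lights, sad tone
    return succeeded


# Usage with stand-in callables:
# perform_task(lambda: open_door(), express=lambda g: print(f"[robot does: {g}]"))
```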

Two other quick lessons: “robot” can also be defined as something that is not autonomous, but is an extension of a human body or will.

We have one of those at my office:

The interesting thing about these is that studies have shown people expect the same kind of bodily autonomy and control when running around in one of these bodies as they do in their own body. They don’t want people to come up and mess with their screen or redirect their robot without permission. They expect the same social privileges as if they were there in person.

Final note (potentially in conflict with the previous): humans perceive robots as permanently low status. In fact, little better than slaves. We expect them to eat our garbage and be thankful. We, I’m certain, expect the same of assistant systems and artificial intelligences. As designers, we should remember this when making demands of users. Users’ unwillingness to have demands made of them is not a question of laziness; it’s a question of a system knowing its place in the social hierarchy: dead last. We might be friendly with our servants, even tolerant of their quirks, but we’re not going to accept a servant that makes too many demands of us or is inscrutable or aggressive. Such a thing is going in the trash heap. That’s the price it pays for its social incompetence.