Robot for “I need help”

David Glivar
Published in Frontiers · 3 min read · May 24, 2016


As humans, we are masters of audibly communicating our current emotional and physical states without uttering a single word. A loud cough, a bouncy tune whistled in the hallway, a quiet sob sneaking from the stairwell. These are all examples of audible communication without the need for language, and we understand them as clearly as if someone had said “I don’t feel well” or “What a fantastic day” or “I’m so, so sad.”

We programmed Needybot to communicate its state through language and changes in its eye. But are verbal and visual expression alone enough for a robot to be understood by humans?

During development, our highest priority — with regard to communication — was creating clear and intelligible dialog. Intuitively, this makes sense — what good is a needy robot that can’t speak its needs? Our early survey results, from those who have met Needy, reinforce that we nailed it. However, we missed developing one big chunk of communication in our rush to the finish line: nonverbal, audible cues — the type of cues that make humans so expressive.

This realization came after a certain experience between Needybot, two of my coworkers, and Needy’s archnemesis, the Ping-Pong table. Needybot was wandering the atrium as usual when its mortal foe decided to cause trouble again. Needy tried to avoid its demise, but, like a moth to the flame, it attempted to crawl up the leg of the table and got itself trapped with one wheel off the ground.

Now, I’ve seen this happen numerous times, and right as Needy is about to have another go at the Ping-Pong table is when I usually get up from my observation post to help it out. In this case, however, two of my coworkers were taking a break, chatting no more than two meters away from the tipped robot. In the name of science, I sat and observed.

Needybot’s programming kicked in to announce its predicament, “Needy doesn’t know where it is. Help a robot out?” With Needybot’s face displaying the proper “Help, I’m Lost” screen, I settled in, prepared to witness an interaction between my human coworkers and our precious Needy.

What I observed was much better than any interaction I could have asked for: the lack of one.

I sat watching in bewilderment for the better part of fifteen minutes as my coworkers completely ignored Needy in its moment of, well, need. It wasn’t until after one of our teammates saw Needy, still stuck under the Ping-Pong table, that it received salvation.

As I reflected on what I had just witnessed, it occurred to me that if Needy had been a mewing kitten, or a whimpering puppy, or some other sad creature whining in peril, my coworkers would have been unable to ignore it. But, as things stood, Needy was a stationary robot hugging the leg of a Ping-Pong table, potentially no different than a dog resting in the shade of a tree or a human napping silently on a park bench.

Needybot did everything we programmed it to do, but it became strikingly apparent that its programming was not enough. The noises that humans make without thought — coughing, whistling, soft sobbing — were missing from Needybot’s vocabulary. Our new hypothesis is that only once Needy is programmed to effectively communicate its needs — both with language and with the noises we make when broadcasting our state to others — will it reliably find the help it requires.
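To make that hypothesis a little more concrete, here is a minimal sketch of the idea, not Needybot’s actual code: the speak() and play_sound() functions are hypothetical stand-ins for whatever text-to-speech and audio playback the real robot uses, and the whimper clips are invented filenames. The point is simply that the robot keeps filling the silence with nonverbal distress sounds between its spoken requests until someone helps.

```python
import random
import time


def speak(phrase: str) -> None:
    # Stand-in for the robot's text-to-speech output.
    print(f"[speech] {phrase}")


def play_sound(clip: str) -> None:
    # Stand-in for playing a short, nonverbal audio clip.
    print(f"[sound]  {clip}")


# Hypothetical library of "sad creature" noises.
WHIMPERS = ["soft_whimper.wav", "sad_beep.wav", "quiet_sob.wav"]


def ask_for_help(timeout_s: float = 30.0, helped=lambda: False) -> None:
    """Announce the predicament in words, then escalate with nonverbal
    cues until someone helps or the robot gives up."""
    speak("Needy doesn't know where it is. Help a robot out?")
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if helped():
            speak("Thank you.")
            return
        # The new idea: broadcast distress the way a puppy would,
        # with sound alone, in between the spoken requests.
        play_sound(random.choice(WHIMPERS))
        time.sleep(5.0)
    speak("Needy will just wait here, then.")
```

Running ask_for_help() with the defaults just prints the escalation to the console, which is enough to show the shape of the behavior: words first, then whimpers, and a thank-you once help arrives.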

The next time I see Needybot make a go for the Ping-Pong table, it will be crying hysterically with tears in its eye, and my coworkers will be rushing to its aid.

I’ll be sure to program in a sincere “Thank you.”
