Computers Cry Too

Daniel Eckler
Published in Mission.org
7 min read · Apr 11, 2016

Until this point we’ve focused on how machines relate to us, but what about our relation to them?

We’ve seen that as machines become increasingly adept at communication, our relationship with them naturalizes and we begin to project human qualities onto them. Sure, humans have been interested in automatons for centuries, but the moment Turing proposed the Imitation Game, it seemed to plant a seed of desire within us: to create a machine indistinguishable from its human counterparts. Forever paired with this desire is an unshakeable question: at what point, if any, does artificial intelligence become a problem?

In a study conducted by Dutch scientists, participants were asked to switch off a robot cat as it begged for mercy. Even though people knew it was a robot, they hesitated. The study revealed something telling about our relationship to these objects: if a robot showed the slightest sign of intelligence, or possessed an agreeable nature, it took people three times longer to make a decision about its “fate.”

To elaborate, take this video of Atlas, the cutting-edge robot by Boston Dynamics:

Most of Atlas’s features resemble ours only superficially: it has arms, legs, and a head. Even so, you can’t help but feel your gut shift when the hockey-stick-wielding scientist pushes the robot to the ground. I even found myself developing a personality for Atlas when the box was repeatedly knocked from his hands.

I appreciated his blind devotion to short-distance parcel delivery, but wished the aperture of his focus would widen, so that he could stand up to his abusive creator instead of dutifully returning to his ant-like task. He resembled an infant who has mastered one skill but isn’t quite ready to tackle another, and I sympathized with his limitations.

You’re Reading Design for Humanity

Design for Humanity is an interactive essay exploring the past, present, and future of anthropomorphic design. You’re currently reading part 6 of 7.

Siri, Will You Ever Understand Me?

If we can sympathize with a robot this easily, what happens when we can’t tell them apart from the real thing? This issue warrants increased attention as robots and other technology take on larger roles in our day-to-day lives. As we mentioned earlier, significant advances have been made in AI, text-based communication, and emotional development. However, the most powerful and efficient interface for communication is the human voice.

It sounds obvious in this context, and speech has had a few million years of evolutionary development behind it, yet we take it for granted. It was only recently, by comparison, that we adopted a mechanical system (typing, clicking, pointing) to interact with computers.

Human speech is a far more refined tool, one that conveys densely packed instructions much more effectively.

The speech recognition software behind Siri, Amazon Echo, and Google Now allows us to issue voice commands that free up our hands. Our mechanical and cognitive load is also far lower when we can replace a 30-step goose chase with a phrase like “Siri, what does my commute look like?”


Because of these conveniences, voice is poised to become a central interface during those occasions when our hands are busy or voice is simply the most convenient way to interact with an object.

Despite present efforts to make our technological interactions more natural through voice, the results are often disappointing. The expectation is that we’ll be able to speak to these programs like a person, but the experience is much closer to leaving a message with an automated phone system.

Since we’re not quite at the point of creating machines as smart as people, we should perhaps temper our expectations. Instead of expecting to speak to Siri or Alexa as though they were people, we ought to adjust our language and tone to reflect how much they actually understand.

In the clip below, Captain Picard interacts with a conversational interface aboard the Enterprise, and his speech is clipped: “Tea. Earl Grey. Hot.” The language he uses is tailored to the machine’s limited grasp of conversational nuance. It won’t be long before we can say, “I sure could use some tea,” and the computer, drawing on its history with us, will know that our preference is Earl Grey, hot.
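
To make that idea concrete, here’s a minimal sketch of what such preference-aware resolution could look like; the function, data, and rules are hypothetical, not drawn from any real assistant’s API:

```python
# Hypothetical sketch: resolving a vague voice request against a user's history.
# The names and data below are illustrative; no real assistant exposes this API.

from collections import Counter

# A log of this user's past, fully specified orders.
order_history = [
    {"drink": "tea", "variety": "earl grey", "temperature": "hot"},
    {"drink": "tea", "variety": "earl grey", "temperature": "hot"},
    {"drink": "tea", "variety": "green", "temperature": "hot"},
]

def resolve_request(utterance, history):
    """Turn a loosely phrased request into a concrete order using past behavior."""
    if "tea" not in utterance.lower():
        return "Sorry, I only know how to make tea."

    past_teas = [order for order in history if order["drink"] == "tea"]
    if not past_teas:
        return "What kind of tea would you like?"

    # Default to the variety and temperature this user asks for most often.
    variety = Counter(order["variety"] for order in past_teas).most_common(1)[0][0]
    temperature = Counter(order["temperature"] for order in past_teas).most_common(1)[0][0]
    return {"drink": "tea", "variety": variety, "temperature": temperature}

print(resolve_request("I sure could use some tea", order_history))
# -> {'drink': 'tea', 'variety': 'earl grey', 'temperature': 'hot'}
```

The point is simply that a short, vague utterance becomes useful once the machine can fall back on what it already knows about us.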

Robosapiens

Once AI, voice recognition, and emotion recognition become advanced enough, fully interactive robots are sure to follow.

Baymax from the movie Big Hero 6 has become a recent pop culture favorite and exemplifies a robot that combines human and synthetic features. Physically, he shares many of the same qualities as a person: four limbs, a midsection, a head, two eyes.

At the same time, the synthetic quality of his features makes him unmistakably robotic. His voice is the same: human-like, but not too human.

It may surprise you, but we’re well on our way to developing a real-life Baymax. Pepper is a companion robot whose main function is to perceive and respond to your emotions: it can interpret your feelings based on your voice, facial expressions, body language, and words. Through a shift in the color of its eyes and the tablet on its chest, Pepper “expresses” its feelings. Although its voice is soft, it retains a robotic quality to remind people they’re interacting with a machine.

Aldebaran, the company that created Pepper, had to incorporate a few human traits after the public interacted with a test version. Much to testers’ disappointment, the initial design featured only four fingers per hand, so Aldebaran quickly added a fifth digit in spite of its functional redundancy.

When Pepper was initially developed, the team at Aldebaran wanted it to be a real social presence, not just an appliance that hums or lights up to indicate that it’s on. While they did not want it to roam around freely, it was crucial that it establish a role of its own. Even in a state of rest, Pepper makes minor movements to signal its presence. Four-directional microphones in its head and 3D cameras allow it to detect people nearby. Pepper signals its awareness by illuminating its ears and eyes, a gesture that says “I see you” or “I hear you.”
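
Purely as a hypothetical sketch of that “resting but aware” behavior (the sensor and LED functions below are invented stand-ins, not Aldebaran’s real Pepper SDK), the loop might be structured like this:

```python
# Hypothetical "resting but aware" loop for a social robot.

import random
import time

def person_detected():
    """Stand-in for the 3D cameras and four-directional microphones."""
    return random.random() < 0.3

def set_leds(region, on):
    """Stand-in for lighting the eye or ear LEDs."""
    print(f"{region} LEDs {'on' if on else 'off'}")

def idle_motion():
    """Small resting movements that signal the robot is 'alive'."""
    print("subtle idle motion")

for _ in range(5):  # short demo loop instead of running forever
    if person_detected():
        # Signal awareness: "I see you" / "I hear you".
        set_leds("eye", True)
        set_leds("ear", True)
    else:
        set_leds("eye", False)
        set_leds("ear", False)
        idle_motion()
    time.sleep(1)
```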

The team behind another home robot, Jibo, has taken a similar approach to its design. VP of Design Blade Kotelly explains how his team focused on creating core expressive traits:

“When Jibo wants to indicate that something amuses him, he can smile — when you smile your cheek pushes up below your eye — and the body can animate accordingly. If we wanted to express thinking, we can slide the eye over to a corner of the screen and move the body — just like a human would do. This way, the user can quickly, and naturally, understand what the robot is trying to convey emotionally.”

Several other home robots that incorporate facial recognition have come into circulation. Each model has human elements incorporated into its design, but there is no question they are robots, and for good reason: robots that look too human can be incredibly off-putting.

Take the robot above, for example: it looks almost identical to a human, but the minor differences create a sense of unease and revulsion.

Perhaps a more palatable attempt would be something like Ava from Ex Machina. The stark contrast between her mechanical and human features explicitly reminds us that she is artificial. Just below her highly expressive face is a carefully constructed torso that houses a network of electrical wires and cooling fans whose quiet whir can be heard throughout the film.

My Virtual Girlfriend

What if Ava actually existed? Despite her unmistakably mechanical features, there is no denying that certain design choices were made to appeal to the most primitive parts of our brains. It’s safe to say that if we met her, many of us would consider developing a romantic relationship.

In fact, a rudimentary form of Ava already exists: My Virtual Girlfriend is a dating app that has been downloaded over 2 million times in the past two years. The app isn’t well-designed: it’s sorely lacking in usability, and despite the subject matter, it’s not much to look at. However, for a subset of men in Japan, and a growing number of people around the world, it’s precisely the emotional bond they need to fend off loneliness.

For some, it’s a vehicle for acting out their fantasies: the relationship they always wished they had. Research also shows that thousands of married men have virtual girlfriends and keep them a secret from their wives.

Given the attention these virtual companions attract even in their rudimentary state, the prospect of falling in love with an artificial companion in the future does not seem so far-fetched.

Design for Humanity

An interactive essay exploring the past, present, and future of anthropomorphic design. Also available as a talk.


1: Design for Humanity

2: Apple, The Original Human

3: Conversational User Interfaces

4: A Smarter Future

5: Emotional Machines

6: You’re here!

7: The Day You Become a Cyborg


Thanks for Reading

This is an interactive + evolving essay. Please get in touch if you have thoughts regarding new content, modifications to current content, or anything else!

If you enjoyed reading this article, please hit the ♥ button in the footer so that more people can appreciate great design!

Hi, I’m Daniel. I’ve founded a few companies including Piccsy (acq. 2014) and EveryGuyed (acq. 2011). I am currently open to new career and consulting opportunities. Get in touch via email.

This article was co-authored by Shaun Roncken.
