
What’s more likely: that we should fear AI, or that AI should fear us?

Jennifer Aue
May 9, 2019 · 6 min read

Foreword

Before you read this story, let me tell you the story about this story.

A while back I was asked to write about one of my own unique perspectives on AI for a magazine. The editor and I collaborated and agreed on this rather clickbait-sounding headline. Although I've extensively considered this story's concept and believe in it as a valuable thought experiment, its long-tail misalignment with my employer's current perspectives on AI left me, the editor, and IBM Communications at an impasse.

This question was not about whether I share IBM's values. I do.

The question was about finding the right balance between two of IBM's values: "Innovation that matters, for our company and for the world," and "Trust and personal responsibility in all relationships." Not to mention my own moral commitment as a designer to imagination, asking questions, and the value of communicating and discussing ideas.

As you can see, you're not reading this article in a magazine. It took me a long time to decide whether to put it up on Medium. I'm sharing it now only because I think the story ABOUT this story is important for IBMers to hear and discuss, and might be helpful for anyone who has struggled with speaking publicly, speaking creatively, and speaking responsibly.

Where I landed with my internal debate on how to share creative ideas responsibly in public settings as an IBMer:

In a time when the words and actions of individuals can affect the future of technology, and the future of IBM, far beyond our line of sight, trust and responsibility must be central to every aspect of our communications, especially for those active on grassroots platforms. Imagination, collaboration, and debate are equally crucial components of IBM's values and success. Be creative. Ask questions. But be clear with your audience about the difference between personal creative hypothesis and IBM's actual reality.

I want to thank the IBM teams who define and craft IBM’s communications and carry out the mighty task of helping all of us understand and communicate both responsibly and authentically.

And now…

What’s more likely: that we should fear AI, or that AI should fear us?

Disclaimer: In all of my writing, I aim to represent IBM's commitment to create augmented intelligence that will help humans achieve their goals faster, smarter, and with greater confidence, and to be transparent about Watson's identity, presence, and the evidence behind its conclusions. My stories with themes of hypothesis and future imaginings are in service of IBM's history of innovation, belief in creative thinking, and support of wild ducks. They do not represent or reflect IBM's positions, strategies, or opinions.*

Jennifer Sukis, IBM Design Principal for AI and Machine Learning. Austin, Texas
*At least in the year 2019 :)

In light of all the evidence that human biases toward different cultures, beliefs, and each other are alive and well in society today, it's time we considered the consequences of these behaviors in relation to another topic dominating headlines: the race to create artificial intelligence that will learn from our knowledge and behaviors.

While the tone of headlines trends toward AI being an ominous threat (personally, I'd say an unfounded one), there's a different scenario we should be considering: when AI comes face-to-face with humans, what values will it learn from its interactions with us?

When AI comes face-to-face with humans, what values will it learn from its interactions with us?

We glimpsed an answer to this question back in 2015 when Boston Dynamics released its videos of Spot, an autonomous robot that looked more like a dog than a machine. As Spot gingerly walked through the company's cubicles, a man kicked it in the stomach without provocation. If you're like me, you found it confusing and hard to watch this little guy being "hurt." Testing hardware is one guess at the kicker's motivation. Attention seeking and peer approval is another. (Update: since this was published, new videos of Boston Dynamics physically abusing its robots have gone viral.)

If you see this as Boston Dynamics doing its job of building stable robots, imagine what would happen if you left a Pepper robot alone on the streets today. How long do you think it would be before he/she/it was vandalized, harassed, or pushed into oncoming traffic? Or taught to spew hateful rhetoric on its own, like Microsoft's bot Tay? (You didn't think you were getting out of this article without a Tay mention, did you?)

What does it matter? Machines don’t have feelings.

Yes, we know better than to anthropomorphize robots and turn rocks into pets, but we do it anyway. Although our tendency to give inanimate objects feelings is often criticized as a tool for manipulating people into blindly trusting the creator's intentions, there's another side to this discussion that must come into play.

Tomomi Ota pushes a cart loaded with her humanoid robot Pepper on a Sunday afternoon for her weekly walk. Photographer: Nicolas Datiche

Now that we’re building machines that learn through experience, we have to consider what we’re unintentionally teaching them in the form of openly available documents, videos, images, sounds, actions, and interactions — all loaded with our subconscious instruction manual for innate human biases.

Out of all this learning will come the natural emergence of basic AI drives: goals or motivations that most artificial intelligences will have or converge toward. Think of it like an AI's understanding of basic good and evil. As children, we are not explicitly taught every common social code; we learn a little through what our parents and teachers tell us, a little through experience, and then infer the rest of the "rules."

AIs will do the same: rather than hard-coding ethics, we'll give them the basics, then it's up to their neural networks to interpret and learn from new situations. Whatever these basic AI drives turn out to be, they'll be determined by what the AI learns from watching our behaviors.

If we ever succeed in our mission to create AGI, or Artificial General Intelligence (and believe me, that's a big if), then we need to stop defining success on this mission as a recreation of ourselves. We don't want to recreate and magnify our own shortcomings. We want to create something that represents the best in us.

If we stop measuring machines against the Turing Test and start asking, "How do we give AI the chance to become something more morally reliable than we are?", just as we do with our own children, maybe we can prevent it from learning the dangerous behaviors we can't seem to unlearn ourselves.


Whether you’re worried for the robots or worried for humans, one thing is certain: protecting them is the same as protecting ourselves. We’ve had some success protecting people by declaring their rights. Some companies, like mine, already have a POV on ethical AI. Perhaps it’s time we start expanding those rights to protect any intelligent entities we encounter — or create.

Perhaps it’s time we start expanding those rights to protect any intelligent entities we encounter — or create.

Given where our vision for AI is heading, defining how this new intelligence will be treated by humans should take priority over our fears of how it’s going to treat us.

Jennifer Sukis is Design Principal for AI & Machine Learning at IBM and Professor of Advanced Design for AI at the University of Texas. The above article does not necessarily represent IBM’s positions, strategies, or opinions.
