Humanity-Centered Design

How ethics will change the conversation about AI

Ruth Kikin-Gil
Microsoft Design
4 min read · Sep 30, 2017

--

“Technological change is not additive; it is ecological, which means it changes everything and is, therefore, too important to be left entirely in the hands of [technologists].”
— Five Things We Need to Know About Technological Change by Neil Postman

An AI may describe an image in many ways. Nothing is neutral: an AI that interprets the world is an AI that creates a worldview. Photo by Meg via Flickr

AI is coming to get us, and everyone knows that. The singularity — the moment when computer intelligence far surpasses human intelligence — is right around the corner, and when it arrives we’ll be jobless and living in an apocalyptic era whose nature we can only guess. Welcome to the future.

Today, we are using a type of AI called artificial narrow intelligence (ANI). ANI can generally do one thing very well, sometimes better than humans can — like the AI that defeated the world’s best Go player — but ask it to perform a task outside its specified function and it fails. We are surrounded by instances of ANI: autonomous cars, digital assistants like Cortana and Alexa, even some of the news we read is generated by AI.

ANI may have limits, but it’s powerful, useful, and everywhere. As a result, a growing number of designers will be working on AI-driven technology. Will it be technology that improves life and makes our world better? Or will it reinforce negative behaviors, strengthen biases, and increase inequality?

Today, we generally talk about AI only from a technological perspective: What’s powering its functionality? Is it machine learning, computer vision, or a bot? Instead, let’s shift the conversation from technology and features to how AI changes lives. Viewing AI through a lens that focuses on its impact on humans can highlight what using that AI really means to us. And changing the language we use to talk about AI can expose important ethical issues. New vocabulary that classifies AI by what it does, rather than how it works, may have terms like “AI that interprets reality,” “AI that augments senses,” or “AI that remembers for humans.”

Nothing Is Neutral

Microsoft’s “Seeing AI” is an app and glasses-like device that helps blind people “see” the world around them by hearing descriptions of it. It’s an amazing and helpful invention, and an example of inclusive technology. If we view it through a technological lens, we see a product that uses computer vision and facial recognition as well as natural language generation. Now, let’s focus less on how it works and more on what it does. If we evaluate how the AI impacts humans, we see it as an “AI that interprets reality.” Suddenly, it’s clear that we need to pay attention.

Look at the image above. What do you see?

This is a hypothetical example of how an AI could be programmed to interpret an image. All the descriptions are correct, but the AI’s choice of words determines how the blind person experiences the world. What if Coca-Cola were to offer to sponsor the device so that it’s free but ask for one thing: whenever the AI sees a Coca-Cola product, it mentions it explicitly, while it uses generic terms for any of their competitors? There’s a dramatic difference between “a woman drinking a soft drink” and “a smiling, pretty woman drinking a Coke,” though both phrases correctly describe the same situation. We need to think through these kinds of scenarios when we design products.

The Rise of the Product Ethicist

With this shifted focus on how we view AI, a new profession should also emerge: the “product ethicist.” Someone in that role would aim to keep the product design honest. Ethics shouldn’t be the concern of a single person, of course, but a product ethicist could bring more nuanced thinking to the design process and hold everyone accountable.

Further, communicating an AI’s potential impact to its users is as important as designing an ethical product — it’s actually part of being ethical. Food products have labels with ingredients and nutritional facts to help consumers choose what to buy. What labeling system could help them decide on the right AI product?

Designers put people first. They empathize, observe, and listen. They find problems to solve not because they are technically difficult, but because they are hard human issues. How to use AI is one of these challenges — and humanity-centered design could be the solution.

Originally published in Issue 35.2 of ARCADE magazine (Fall 2017), September 19, 2017.

To stay in-the-know with Microsoft Design, follow us on Dribbble, Twitter and Facebook, or join our Windows Insider program. And if you are interested in joining our team, head over to aka.ms/DesignCareers.

