Data, Ethics, and AI

Practical activities for data scientists and other designers

IDEO Stories · Oct 12, 2018

by Michael Chapman, Ovetta Sampson, Jess Freaner, Mike Stringer, Justin Massa, and Jane Fulton Suri

A design researcher, a business designer, and a data scientist were sitting at a bar. It sounds like the setup of a joke, but the conversation was serious. We had just begun work on a project exploring how we might help a medical-liability insurance company decrease the incidence of adverse medical events. As we talked, our conversation shifted away from excitedly imagining ideas for data-driven tools toward ways our work might do harm.

What if our designs ended up raising premiums for doctors and thereby healthcare costs for patients? What if they kicked doctors off their insurance? What if doctors stopped reporting adverse events to keep their premiums from rising? There was real potential that our work could result in reduced patient safety or increased cost of care. We could inadvertently build a tool that could be used against the very people we were trying to help.

Today, data systems and algorithms can be deployed at unprecedented scale and speed. Unintended consequences will affect people with that same scale and speed.

Our design team and our client had the same intent to help people, but we realized we would need more than our own values to get there. We would need concrete activities to make sure we, and our client, were doing our best to ground our work in good human-centered practices and mitigate potentially harmful consequences.

We are far from the first people to ponder this. We’ve been inspired by organizations like AI Now and Data & Society, books like Weapons of Math Destruction and Technically Wrong, academic communities like FATML and CXI, and tech companies like Google and Microsoft weighing in with their points of view. In particular, we’ve been eagerly following O’Reilly’s series on data ethics and encourage you to read their free eBook Ethics and Data Science.

We are excited to be a part of this conversation. We hope that our voice contributes a new perspective, rooted in IDEO’s culture of pragmatic optimism and building to learn.

How might we always put people first while designing large-scale systems that will change over time and evolve without direct human supervision?

The seed planted during that bar conversation has grown into a set of principles and activities that teams across IDEO (data scientists and designers from every other discipline) use today to ensure we’re intentionally designing intelligent systems in service of people.

We developed these principles and practices using a design thinking approach. We interviewed teams about where they found challenges. We spoke to clients about where they saw intelligent systems go awry. We spoke to the public about where smart designs seemed to cross lines. We observed and read about AI systems that had gone off the rails and worked to understand how this might have been avoided. We iterated on our initial designs, and we are continuing to do so.

1. Data is not truth

Data is created, generated, collected, captured, and extended by humans. Even “raw” data is never truly untainted. The fact that any data set exists means that a human has already decided what signal to capture and how to go about capturing it. That’s why data is always incomplete and messy, and needs to be interpreted and analyzed. It can be biased through what is included or excluded, how it is framed, and how it is presented.

2. Just because AI can do something doesn’t mean it should

Projects should be rooted in a deep understanding of and respect for human needs. While AI may be a viable component, it may not always be the most (or only) desirable solution. Remember that machines can be great servants but are terrible masters.

3. Respect privacy and the collective good

While there are policies and laws that shape the governance, collection, and use of data, reach for a higher standard than “Will we get sued?” Learn deeply about both individual and societal values and priorities in the given context to reveal where individual sensitivities about privacy and data protection overlap or conflict with community benefits.

4. Unintended consequences of AI are opportunities for design

As with all design challenges, it won’t be right the first time. Use unexpected outcomes and newly discovered consequences as opportunities.

Guided by these principles, we have been focusing on several specific activities as we explore how to ethically design intelligent systems.

These are a beginning, and we’re sharing them now because we know they will benefit from a conversation within the community. We’ll continue evolving these activities, expanding and refining them as we learn more about what works in practice. We hope that these activities provoke dialogue and provide concrete ways to help our community ethically design intelligent systems.

We are looking for your complications, confirmations, and thoughts to help evolve these into tools that lead to making better things that help more people. Please highlight and comment on each of the activities and exercises on Medium; we’ll be actively participating in the conversation. We hope you’ll join in.

Thank you,

Michael Chapman (Design Director, Design Research)
Ovetta Sampson (Senior Design Research Lead)
Jess Freaner (Data Science Lead)
Mike Stringer (Executive Design Director, Data Science)
Justin Massa (Executive Portfolio Director)
Jane Fulton Suri (Partner Emeritus)

