In Pursuit of Inclusive AI

Five lessons for humanizing technology

Published in Microsoft Design · Oct 1, 2018

By: Joyce Chou and Roger Ibars

Last year we published a conceptual framing tool for identifying bias in AI, How to Recognize Exclusion in AI. It was a way to address an urgent concern in the tech industry: the emergence of gender, racial, and socioeconomic bias in seemingly harmless technological experiences.

A year later, that framework still speaks to urgent issues. It calls on the industry to analyze the state of AI development and slow the astronomical pace driving AI into unpredictable corners of technology. Recently, Microsoft took an ethical leap forward, advocating for public regulation of ongoing work in the field. In the same way, designing for AI requires an honest examination of the moment we’re living in and our responsibility to curb exclusive behaviors in technology.

Today, machines often learn from signals and examples, a sophisticated process similar to the way we learn as humans. We recognize a familiar face at a party; we ride a bike after years of absence. Machines can acquire a kind of tacit knowledge, yet they still lack human nuance.

It’s that trick of nuance that sparked conversations and research around inclusive AI at Microsoft. The field of AI is growing rapidly, and we’re all learning in real time as we experiment with solutions and grapple with unexpected outcomes. As part of that shift, the Inclusive Design team has partnered with research, engineering, and legal groups across Microsoft to bridge the gap between high-level principles and everyday practice.

In the spirit of shared knowledge, we’ve summarized five insights to identify exclusion and design more inclusive AI.

1. Redefine bias as a spectrum

Conversations about AI are often polarized between “good vs. evil,” but teams building AI products and services have a difficult time relating to the most offensive examples of bias that grab media attention. Rather than focusing on the most extreme cases, we learned that teams engage faster with AI bias issues when they consider a spectrum of bias, where bias can show up in small ways in our everyday experiences.

We’ve learned that AI needs to be overseen to avoid major pitfalls and to address subtle, seemingly mundane microaggressions. Think about a time when you felt like a product was not made for you, but you couldn’t explain why. These tiny uncomfortable moments build up over time, cause feelings of exclusion, or simply make the product feel off. Products with small moments of bias are not good products. AI bias isn’t the end of the world, but rather the early stages of good intent gone sideways. It’s our responsibility to recognize the understated risks and design accordingly.

2. Enlist customers to correct bias

Training is everything when it comes to building more inclusive AI. Unfortunately, development for AI often happens behind closed doors, restricted to input from teams that may not be representative of the diverse customers they design for. It’s a humbling exercise for teams to reflect not only on the applications of the AI they build, but also on the implications of the technology in the wild. We’ve learned that empowering people to continually train AI will develop more inclusive intelligence and ultimately build trust.

Just in the past year, we’ve seen an uptick in conversations surrounding ethics in AI design, including countless articles exploring issues of transparency, accountability, agency, and more. These conversations have led to large-scale open-source projects, from synthetic simulators that improve self-driving cars to crowdsourced initiatives that train speech models in more natural voices. Gaining these customer insights earlier, in a safe training environment, can curb the potential for unintentional or alarming outcomes.

3. Cultivate diversity with privacy and consent

The common conceit of AI is that it grows smarter over time. Yes, a machine will improve its understanding as it learns from the data it’s fed. But inclusive AI also depends on those datasets being more diverse, correctly labeled, and used in a way that’s representative of every customer. If there’s any bias in the data (and there’s always bias in the data), the system only amplifies it. For underrepresented people, there’s little incentive to participate in something that’s broken for them, especially if they think the information they provide could be used against them. And without their data, the cycle of learned bias in AI continues.
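To make that amplification concrete, here’s a deliberately simplified sketch in Python (a toy example of our own, not any real production system): a trivial “model” that only learns the most frequent label in its training data fails completely for the underrepresented group.

```python
# Toy illustration: a "model" that predicts whichever label it saw most
# often in training. When one group is underrepresented in the data,
# the model's mistakes land almost entirely on that group.
from collections import Counter

def train_majority_model(labels):
    """Learn nothing but the most frequent label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical dataset: 90 examples from group A, only 10 from group B.
training_labels = ["A"] * 90 + ["B"] * 10
model = train_majority_model(training_labels)

# On a test set with the same skew, every group-B example is
# misclassified: a 9:1 imbalance in the data becomes total exclusion
# in the output.
test_labels = ["A"] * 90 + ["B"] * 10
group_b_errors = sum(1 for true in test_labels if true == "B" and model != true)
print(f"Model always predicts: {model}")  # A
print(f"Group B error rate: {group_b_errors / test_labels.count('B'):.0%}")  # 100%
```

Real systems are far more sophisticated, but the dynamic is the same: skewed inputs quietly become skewed outcomes unless someone measures for it.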

There’s a basic understanding that we give up some measure of privacy for the conveniences of the modern world, but privacy controls are often poorly designed and convoluted for everyday customers. It’s difficult to feel in control. The adoption of the General Data Protection Regulation (GDPR) has bolstered improvements across the industry, but privacy-by-design needs to be foundational, not reactive. Rather than user agreements full of inaccessible legalese, we need touchpoints for consent all along the customer journey, and design that values autonomy foremost.

4. Balance intelligence with discovery

AI makes a lot of assumptions based on our past behaviors, and often there’s little flexibility to understand our present intentions. And we’re uncomfortable when those assumptions create echo chambers, digital bubbles, broken-record suggestions, and irrelevant content. There is an inherent tension between machine intelligence and the human desire to create and explore in new ways. There need to be strategic moments that build a more natural relationship between humans and technology, valuing patience and creative exploration.
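As one illustration of what such a strategic moment might look like in code, here’s a minimal epsilon-greedy sketch; the catalog, listening history, and exploration rate below are all hypothetical, and real recommenders are much richer.

```python
# A recommender that usually exploits what it knows about past behavior,
# but deliberately reserves some moments for discovery outside the bubble.
import random

def recommend(past_favorites, full_catalog, exploration_rate=0.2):
    """Mostly suggest familiar items; occasionally surface something new."""
    unseen = [item for item in full_catalog if item not in past_favorites]
    if unseen and random.random() < exploration_rate:
        return random.choice(unseen)      # a strategic moment of discovery
    return random.choice(past_favorites)  # the usual intelligent guess

catalog = ["jazz", "podcasts", "ambient", "news", "comedy", "classical"]
history = ["jazz", "podcasts"]
picks = [recommend(history, catalog) for _ in range(10)]
print(picks)  # mostly familiar picks, with a few chances to change course
```

Even a small, explicit exploration rate keeps the experience from collapsing into the customer’s own history.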

Customers should always feel like they have the option to change course and shape the goal of their experiences. We’ve learned that this doesn’t always feel like the case, because we don’t have a clear understanding of what we can expect from AI. We’re led to believe that AI is intelligent out of the box, but this narrative needs to change. If customers know AI services are limited in the beginning and still need help to learn, maybe they’ll be more willing to help train AI with their unique idiosyncrasies.

5. Build inclusive AI teams

AI reflects the people who build it, as much as we might want to believe in its neutrality. Hiring diverse backgrounds, disciplines, genders, races, and cultures into the teams designing and engineering these experiences is critical. “Artificial intelligence will reflect the value of its creators,” says Kate Crawford of the AI Now Institute. “So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

We’ve learned that teams with diverse outlooks can identify biases more easily. By building inclusive teams, we can foster empathy and begin training AI to do the same. Teams must be open-minded, take accountability for unintended mistakes, and approach public dialogue with humility. They need to be thoughtful and deliberate, ever mindful of the bias inherent in their designs.

Humanity-centered design

There’s no magic formula for all scenarios, nor should there be. If we’re attempting to build AI that really helps and understands us, we need to come at it from a human perspective. We can’t place implicit trust in future machines just because we’re involved in building them — humans are complex and full of doubt. It’s okay to recognize that, and to fail gracefully. In those moments, we can slow down, consider why we’re moving in a certain direction, invite more people into the creation process, and keep improving.

Human nature is flawed. But it’s also wonderful — unparalleled in its complexity. We feel compelled to connect, engage, fix problems, challenge our perspectives, and always move forward. Let’s mirror our best intentions and work together for better AI outcomes by design.

Learn more about how to design with inclusion in mind by downloading our Inclusive Design toolkit today.

Contributors to this article included Danielle McClune and Izzy Beltran (illustrations). The Inclusive Design team is grateful to all of our partners at Microsoft Research and Insight for thought-provoking discussions and invaluable support in developing inclusive technology.

To stay in the know with what’s new at Microsoft Design, check out our new website, or follow us on Twitter and Facebook. And if you are interested in joining our team at Microsoft, head over to aka.ms/DesignCareers.
