When Bias in Product Design Means Life or Death

Carol E. Reiley
Nov 16, 2016


During my PhD studies, I built a demo using Microsoft’s speech recognition API: a human-robot interface to showcase our autonomous surgical robotic system. But because the API had been built mainly by 20–30-year-old men, it did not recognize my voice, and I had to forcibly lower my pitch just to get it to respond. As a result, I was not able to present my own work. Although I had built the demo, whenever we showed the system to distinguished visitors at the university, we asked a male graduate student to lead the demo, because the speech system recognized his voice but not mine.

AI speech recognition systems have improved dramatically since the early 2000s, but many AI products still struggle to serve everyone. This is especially problematic when your target customers fall into one of the groups the system fails on, as when Hello Barbie, Mattel’s artificially intelligent Barbie, struggled to recognize the voices of the very children intended to play with it (this on top of the highly controversial and even sexist content Hello Barbie was programmed with).

While Hello Barbie and my own experience during my PhD studies were both disappointing, they point to far graver consequences in the larger picture. The lack of diversity and inclusion in AI, and in product development overall, is not merely a social or cultural concern. There is a blind spot in the development process that affects the general public, and when it is carried into products where safety is a factor, it becomes a question of life and death.

My intention is not simply to complain about the lack of diversity in engineering or the lack of constant product testing (both end-user testing and testing against diverse cases). But because the consequences are so dire, I cannot stand quietly by; I feel it is my duty to push the industry toward change, especially as AI becomes an increasingly large part of our lives.

Bias in Automotive Design and Safety

Every time you step into a vehicle, you’re putting your life into the hands of the people who made the design and engineering decisions behind every feature. When the people making those decisions don’t understand or account for your needs, your life is at risk.

Historically, automotive product design and development has been largely defined by men. In the 1960s, vehicle crash test protocols called for dummies modeled after the average male, with height, weight, and stature falling in the 50th percentile. This meant seatbelts were designed to be safe for men, and for years we sold cars that were largely unsafe for women, especially pregnant women. Consequently, female drivers are 47% more likely to be seriously injured in a car crash. Thankfully, this is starting to shift: in 2011, the first female crash test dummies were required in safety testing. But we are still building on 50-plus years of dangerous design practices for automobiles.

Gender is only one area where there is a serious lack of diversity in design and engineering. This is equally problematic when it comes to race, ethnicity, socioeconomic class, sexual orientation, and more. For example, Google’s computer vision system labeled African Americans as gorillas, while Microsoft’s vision system was reported to fail to recognize darker-skinned people. Today, one of the most prominent applications of computer vision is self-driving cars, which rely on these systems to recognize and make sense of the world around them. If these systems don’t recognize people of every race as human, there will be serious safety implications.

I hope we never live in a future where self-driving cars are more likely to hit one racial group, or prioritize the lives of some races over others. Unfortunately, when a single homogeneous group designs and engineers the vast majority of technology, its members will consciously and unconsciously pass on their own biases.

A Wider Inclusion Problem

Beyond transportation, we are relying on technology for basic needs like food, communication, education, and much more. For us to live in an equal society, our technology must serve and treat all segments of society equally.

Let’s acknowledge that there is a widespread diversity and inclusion problem with AI today. It doesn’t take more than a quick Google search to uncover a handful of disappointing occasions where AI has blatantly discriminated against groups (even those with buying power), including:

  • Amazon’s Same Day Delivery service was biased in favor of white customers in some regions; a report found that black residents in Atlanta, Chicago, Dallas, and Washington were “about half as likely” to be eligible for same-day delivery as white residents.
  • Microsoft’s Tay, an AI “teenage” chatbot that learned from its interactions, took on a racist, sexist, homophobic personality within 24 hours of being exposed to the public and had to be taken offline for an indefinite time-out.
  • Apple HealthKit enabled specialized tracking, such as selenium and copper intake, but neglected to include a period tracker for women until iOS 9.
  • A Lancaster University study concluded that Google’s search engine creates an echo chamber for negative stereotypes about race, ethnicity, and gender. Type “Are women…” into Google and its autocomplete suggestions include “…a minority,” “…evil,” “…allowed in combat,” and, last but not least, “…attracted to money.”

Despite these recent blunders, the tech community has simply moved past denial to acceptance. That’s not good enough. We need to take direct, actionable steps toward changing AI culture and creating products in which all segments of society are considered, especially those that do not have a voice. So how do we do that?

Safer Products Begin with Diverse Teams

While one segment can have a certain level of empathy when developing products for others, we have seen time and again how a homogeneous team produces designs biased toward that particular group. It’s tempting to think that the sheer size of the data available can get around the issue. Unfortunately, that’s not the case. These algorithms learn by comparing new data against patterns drawn from past data, and it takes a healthy dose of human judgment to decide which data is collected and what the right context for it is. Engineering teams have a significant hand in defining these models and, thereby, the resulting technology.
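To see why more data alone doesn’t fix this, consider a minimal, hypothetical sketch (mine, not from the original article): a classifier trained on data in which one group is heavily underrepresented. The group labels, feature distributions, and sample sizes below are invented purely for illustration.

    # Hypothetical illustration: a model trained on skewed data can look
    # accurate overall while quietly failing the underrepresented group.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        """Simulate one group; its two classes cluster around different
        points in feature space than the other group's classes do."""
        pos = rng.normal(loc=1.0 + shift, scale=1.0, size=(n // 2, 2))
        neg = rng.normal(loc=-1.0 + 0.5 * shift, scale=1.0, size=(n // 2, 2))
        X = np.vstack([pos, neg])
        y = np.array([1] * (n // 2) + [0] * (n // 2))
        return X, y

    # Training set: group A dominates (an assumed 95/5 split).
    Xa, ya = make_group(1900, shift=0.0)   # group A
    Xb, yb = make_group(100, shift=2.5)    # group B, underrepresented
    model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                     np.concatenate([ya, yb]))

    # Evaluate on equal-sized held-out samples from each group.
    Xa_test, ya_test = make_group(1000, shift=0.0)
    Xb_test, yb_test = make_group(1000, shift=2.5)
    print("accuracy on group A:", model.score(Xa_test, ya_test))
    print("accuracy on group B:", model.score(Xb_test, yb_test))
    # Group B typically scores far lower: the decision boundary was fit
    # almost entirely to group A's distribution.

Collecting ten times more data with the same 95/5 skew would not change the outcome. Someone on the team has to notice that the data underrepresents group B in the first place, and that is exactly the judgment a homogeneous team is most likely to miss.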

Even the most well-intentioned developers can be simply unaware of their own narrow perspective and how it may unconsciously affect the products they create. For example, white men often overestimate the presence and impact of minorities: the Geena Davis Institute on Gender in Media found that white men viewing a crowd that was 17% women perceived it to be 50–50, and when it was 33% women, they perceived it to be majority women. A simple overestimation like this illustrates how difficult it can be to see the world from another’s perspective.

Because software engineering, especially in the nascent self-driving car space, is a high-paying career path, many have advanced the cause of opening these opportunities to more minority groups. But there is an even more important reason for minority groups to work in the autonomous driving field: the opportunity to design products that will impact and improve millions of diverse lives equally.

As a start, I encourage all tech companies to:
- First, be aware and honest about the current state of diversity on their engineering, product, and leadership teams. Awareness alone doesn’t solve the problem, but nothing changes without it.
- Set trackable metrics and goals, quarter by quarter. It’s important to bake inclusion requirements in from day one and to keep them a key consideration, if not a priority, as the company and product develop.
- Have product teams do early and iterative end-user testing. Constantly, and from day one.
- Make sure every employee can make their voice heard, even those who aren’t designing or coding the product, and build an ecosystem that encourages, listens to, and acts on those voices.

The lack of diversity in AI is not merely a social or cultural concern. It is a life-or-death safety issue. To build safer products that recognize the equal value and humanity of all people, we must first have diverse perspectives and voices on the teams that build them. I refuse to believe that it’s harder to hire minority candidates than it is to build self-driving cars.

Originally published on TechCrunch
