The Racist(?) Autonomous Driving Car and the Dangers of Bias in Artificial Intelligence

I was riding in the back of an autonomous driving car of a well-known startup. As the car navigated its way through its suburban testing field, I watched the car’s mental processes play out on a tablet screen in front of me. Every time a pedestrian came into our vicinity, an alert would pop up on the tablet screen, indicating that the car had correctly recognized the humanoid presence. We passed dozens of pedestrians on the trip; upon encountering each one, the alert would go off on the screen, and our nerves would calm with the relief that this mechanical, thinking machine had spared another life.

There were two pedestrians that the car failed to detect. Two men. Both walking on the sidewalk like any of the other pedestrians we had encountered. The only difference this time was their race. All of the other pedestrians we had passed had been white. These two men the car failed to detect were black.

Yes, ladies and gentlemen, I asked myself the question: was this car racist? It wasn’t the blaring, blood-boiling type of racism that you’d associate with the ethnic-slur-rantings of your typical madman, but something that seemed too consistent to dismiss as mere technical coincidence. I later spoke with a close friend who was an engineer working on autonomous driving. He explained that in the realm of autonomous car development, the ability of the car to detect persons with dark skin (in other words, those of certain minority backgrounds) was a problem that the industry was still struggling to solve. The reason was a matter of technical oversight: when teaching any artificial intelligence (AI) system to perform a task, you need to train it with experiences (i.e., data), so that it can learn from those experiences and understand how to act when it encounters a new situation. Accordingly, if you train your autonomous driving car with data that includes only light-skinned persons as examples of what constitutes a “human,” then the car will have trouble recognizing dark-skinned persons as also “human” when operating in real-life scenarios.
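
To make that mechanic concrete, here’s a deliberately tiny sketch of the idea in Python, using scikit-learn. Everything in it is invented for illustration: the two “appearance” features, the group sizes, and the notion that one group simply has lower contrast against the background. No real perception system is anywhere near this simple, but the arithmetic of skewed training data plays out the same way.

```python
# Toy illustration only: a "pedestrian detector" trained on data in which one
# group of pedestrians is heavily underrepresented, then evaluated per group.
# All features and numbers are made up for the sake of the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def pedestrians(n, contrast):
    # Two invented appearance features; `contrast` stands in for how strongly
    # a pedestrian stands out from the background in the sensor data.
    return rng.normal(loc=[contrast, contrast], scale=0.6, size=(n, 2))

def background(n):
    return rng.normal(loc=[0.0, 0.0], scale=0.6, size=(n, 2))

# Training data: 1,000 pedestrians from one group, only 20 from the other.
X_train = np.vstack([pedestrians(1000, 2.5),   # well-represented group
                     pedestrians(20, 1.0),     # underrepresented group
                     background(1020)])
y_train = np.array([1] * 1020 + [0] * 1020)

detector = LogisticRegression().fit(X_train, y_train)

# Measure the detection rate (recall) separately for each group.
for label, contrast in [("well-represented group", 2.5),
                        ("underrepresented group", 1.0)]:
    rate = detector.predict(pedestrians(500, contrast)).mean()
    print(f"{label}: detected {rate:.0%} of pedestrians")
```

In this toy setup the underrepresented group comes out with a markedly lower detection rate, not because anyone programmed malice into the model, but because the training data never forced the model to learn what that group looks like.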

The utopian vision of artificial intelligence is that AI-enabled machines like autonomous driving cars can be logical systems untainted by the group prejudices and emotional sensitivities of their human overlords. Under this vision, it’d be possible to create a robot that acts completely objectively, unmarred by the divisive beliefs, such as racism and sexism, to which we humanoids are so susceptible. The precise danger of AI, however, is our modern tendency to view the scientific language of algorithms as completely unemotional and rational, and thereby also immune to the worst parts of human nature. We trust in the banality of the lines of code without understanding how we, in the act of constructing them, instill our own prejudices and belief systems in our AI offspring.

Indeed, as a growing movement of AI critics has pointed out, the problem of AI is that such systems tend to reproduce the very human biases that they seek to eradicate. The reason is that to train an AI system to perform a certain task, you need to train it on data curated by bias-susceptible humans. If this data incorporates our biases, which it often does, then the machine will also “learn” our biases and make decisions governed by them. This does not translate to pure racism, sexism, or malice per se but rather reflects the social phenomenon called implicit bias: the range of attitudes and stereotypes that color a person’s belief systems and actions. Implicit bias can be favorable, like the stereotype that Asians are naturally good at math, or unfavorable, like the stereotype that all Asians are submissive and quiet. The sinister danger of implicit bias is that it can penetrate our actions and decisions without our being conscious of it. That is, we can foster many biases without consciously harboring any malicious intent, even though such biases may lead us to take actions that have pernicious consequences for certain groups, mainly those already marginalized.

The automotive industry is not immune to these problems of bias that affect the broader AI community. Today autonomous vehicles can recognize light-skinned persons but have difficulty recognizing humanoids of darker complexions. Taking this problem to its hyperbolic extreme, we’ve developed a car that carries a certain violent bias: it is more likely to fail to detect, and thus accidentally run over, members of disadvantaged minority groups than members of socially advantaged racial groups. This is not a reality that any one of us intended, but it remains the present state of the technology nonetheless.

How did this reality come to be? When the engineers were tasked with compiling the data set of images of humans to train the autonomous car, they assumed that the millions of humanoid images they had collected adequately represented the whole spectrum of human races. Most of these images were of light-skinned persons, too few were of dark-skinned persons, and these engineers, being, statistically speaking, likely either Asian or white, did not question the breadth of the data set because they saw themselves in the images. What other reason would they have to question their carefully curated data set if their own world experiences had limited their interactions with those racially different from them? Had these engineers come from more racially diverse backgrounds, one would assume that they would have been less likely to curate a data set of only light-skinned humanoids and would instead have ended up with a data set that represented the diverse range of human complexions. A black engineer, for example, would have been more likely to look at such a data set and object to its incompleteness, thinking, hey, those persons in the pictures do not look like me, so they may not adequately represent the expanse of humanity. Instead, these engineers fell victim to their racial blindspots, which resulted in skewed data sets that overrepresented images of humanoids made in their light-skinned likeness and underrepresented those who did not look like them. It’s the quintessential problem of bias all over again.
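
If that story sounds plausible, one modest safeguard is to audit a data set’s composition before any model ever sees it. The sketch below is hypothetical: it assumes each image already carries a reviewed skin-tone annotation (something like the Fitzpatrick scale groupings I use as labels here), which is itself a non-trivial and sensitive thing to collect. But the check itself is almost embarrassingly simple.

```python
# Hypothetical data-set audit: count pedestrian images per annotated skin-tone
# group and flag any group whose share falls below a chosen threshold.
from collections import Counter

def audit_representation(group_labels, min_share=0.10):
    """group_labels: one annotation per image, e.g. Fitzpatrick groupings."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    for group, count in sorted(counts.items()):
        share = count / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{group}: {count} images ({share:.1%}){flag}")

# Invented labels purely to show the output; a real audit would read these
# from the data set's metadata.
labels = ["I-II"] * 7200 + ["III-IV"] * 2300 + ["V-VI"] * 500
audit_representation(labels)
```

A check like this would not have fixed the engineers’ blindspot on its own, but it forces the question of who is and is not in the data to be asked explicitly rather than assumed away.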

The overarching lesson is that we must proceed with caution in developing AI systems so as to minimize biases in how such systems operate. In closing, I offer three guiding principles as we forge on in the task of developing the next generation of AI machines:

  • Open Discourse: We cannot be shy about talking about the problem of bias, or about addressing the issues of racism and sexism that pervade scientific systems. Silence on these topics, as uncomfortable as they may make us feel, will only result in the perpetuation of such problems in our AI systems and the failure to take the necessary action to eradicate them.
  • Greater Representation and Diversity Across Industry: The underrepresentation of racial minorities in science and engineering roles is a well-known problem. Today the majority of persons in scientific and engineering occupations are white (66.6% as of 2015). Asians trail some distance behind in representation at 20.6%, while blacks and Latinos show very low numbers at 4.8% and 6.0% respectively. Holding our AI systems accountable will depend on greater representation of women and disadvantaged minorities in the high-tech ecosystem.¹ These groups are more likely to see potential biases and ensure that the appropriate measures are taken to address them.
  • Black Box Problem: The algorithms that power AI systems are well-known to be black boxes: sets of algorithms so complicated and convoluted that humans have little to no understanding of how they work. This is particularly true for deep learning methods, where the number of computations a system makes can run into the millions upon millions. To ensure that the AI systems we create do not replicate our biases, we need to develop ways to understand how AI systems work and how they arrive at their decisions. While a new wave of researchers has devoted itself to solving this problem, we are still a long way from being able to hold our AI systems accountable when they reproduce our biases and act in ways we do not intend. (A simple first step toward such accountability is sketched just below this list.)
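
The sketch promised above is not an answer to the black box problem; it is the much more basic step of reporting a system’s performance per group instead of as a single aggregate number, which is often where disparities first become visible. The record format, group labels, and numbers here are all invented for illustration.

```python
# A minimal accountability check (hypothetical data): compute the detection
# rate separately for each annotated group rather than one overall figure.
from collections import defaultdict

def detection_rate_by_group(records):
    """records: dicts like {"group": "V-VI", "pedestrian_present": True, "detected": False}"""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        if r["pedestrian_present"]:
            totals[r["group"]] += 1
            hits[r["group"]] += int(r["detected"])
    return {group: hits[group] / totals[group] for group in totals}

# Invented test log purely to show the report format.
test_log = (
    [{"group": "I-II", "pedestrian_present": True, "detected": True}] * 95
    + [{"group": "I-II", "pedestrian_present": True, "detected": False}] * 5
    + [{"group": "V-VI", "pedestrian_present": True, "detected": True}] * 70
    + [{"group": "V-VI", "pedestrian_present": True, "detected": False}] * 30
)
for group, rate in detection_rate_by_group(test_log).items():
    print(f"{group}: {rate:.0%} detection rate")
```

An aggregate detection rate on that same log would read 82.5% and look respectable; broken out by group, the gap is impossible to miss.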

I am under no delusions that this paltry piece of writing will do much to effect greater accountability in our AI systems. Nonetheless, I hope that, in the mere act of writing this, I leave you with a heightened feeling of comfort that there’s a growing movement among us industry folk who understand the dangers of our AI systems and that we’re working on it. That said, this effort will consist of continual tinkering and rethinking that will require our constant vigilance and critical feedback. Now more than ever, it’s time to resist.

[1] While I strongly believe in the need for greater diversity across the high-tech industry, I also strongly believe that these efforts must start earlier during young children’s educational development. However, I recognize the complexity of this topic and am under no delusions that I know how to fix it.

Feel free to add me and message me via LinkedIn. Always happy to exchange thoughts: https://www.linkedin.com/in/samantha-huang-10375b106/

Disclaimer: This blog represents solely my own opinions, not those of my employer.
