Deep Learning Racial Bias

The Avenue Q Theory of Ubiquitous Racism

Jacob
4 min read · May 8, 2016

Artificial Intelligence is kinda scary — Elon Musk, Stephen Hawking and every Hollywood movie ever agree. Artificial Intelligence will one day [soon] grow out of control and then… violently rise against humanity… or something to that effect.

Still, after all the movies and all the predictions, I’m left unconvinced. I’m not sold on the idea that AI will inevitably morph into a weirdly utilitarian, zealous robot dead set on the extinction of our species. That whole thing seems far-fetched.

Artificial intelligence may well be “evil”, but its evils will be a lot more nuanced.

Unseen Evils

Last month we watched as Microsoft’s cool teen Twitter bot, Tay, devolved into a hate-spewing, racist internet troll. The bot was quickly taken down, and Microsoft apologized, saying they would “learn from” the experience. Certainly there is a lesson here, but it’s not that the internet is a terrible, terrible place which should be avoided at all costs, or that we should expect all AI to be derailed by a few rogue Twitter users. The lesson is that when computers are given access to unfiltered pools of data, their learning outcomes may be wildly unexpected.

Y I K E S

AI is only as good as the data it is fed; unfortunately, the data is only as good as we are. Our data is a reflection of who we are, which is why we consider it so valuable, worth protecting, worth keeping private. This is true of individuals and of society as a whole. We are afraid of what our data might tell others about us because data has a tendency to reveal ugly truths, truths we may have preferred to remain ignorant of.

One of the ugliest, most pervasive, and most ignored truths about our society is its continual mistreatment of minority groups. At this point, significant racial disparities with disproportionately negative effects on minorities can be found in almost any pool of data. To save you a few Google searches, some examples have been provided below.

It’s pretty appalling.

What do the hidden trends and patterns in our data say about us as people and as a society? What do they mean for the future we’re creating?

(Spoilers ahead)

Society is racist. Surprise.

In case you weren’t aware, systemic and structural racism is ingrained in the very fabric of today’s society. Silicon Valley is more comfortable with the term “unconscious bias.” Put simply, this means that in the US, minorities like me can expect to face more hardships than we would if we were, say, 5'10", 150 lb white American males.

The fear I have isn’t that AI will kill us all.

The fear I have is that we will create a future full of computers so subtly racist that nobody will notice. Nobody but those being discriminated against, of course. Worse yet, my fear is that these computers will get so advanced that we won’t even know how to ‘fix’ them. It’s hard enough trying to get people to unlearn their racism, so how do we get a computer program whose decisions we barely understand to change its ways? Is there a racial sensitivity training for artificial intelligence?

When we talk about advanced AI, we often describe a computer that makes data-driven decisions in the world with which it interacts, but what happens when the data won’t hide the skeletons in our closets? If we’re not careful, we can expect AI to internalize some of our worst traits — our prejudices, biases and assorted ‘-isms’. We can expect to experience nuanced, but very real, manifestations of our own societal flaws.
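The mechanism doesn’t require anything exotic. Here is a minimal sketch, using made-up data and plain NumPy (none of it drawn from the studies above), of how a model fit to historically biased decisions reproduces that bias even though no one programs prejudice into it:

```python
# A minimal sketch with hypothetical data: a model trained on biased historical
# outcomes quietly learns to penalize the same group the humans penalized.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical feature: 1 = applicant belongs to a minority group, 0 = otherwise.
minority = rng.integers(0, 2, size=n)
# Applicants in both groups are equally qualified on average...
skill = rng.normal(size=n)
# ...but the historical "approved" labels encode a biased human decision process:
# at the same skill level, minority applicants were approved less often.
approved = (skill - 0.8 * minority + rng.normal(scale=0.5, size=n)) > 0

# Fit a simple logistic-regression-style model to that history by gradient descent.
X = np.column_stack([np.ones(n), skill, minority])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted approval probability
    w -= 0.1 * X.T @ (p - approved) / n       # logistic-loss gradient step

# The learned weight on the `minority` feature comes out negative: the model has
# faithfully internalized the historical bias, with no malice anywhere in the code.
print("learned weights [intercept, skill, minority]:", np.round(w, 2))
```

Nobody wrote “discriminate” into that program; the prejudice rides in on the labels. That is exactly the kind of quiet, hard-to-audit failure I’m worried about.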

At this point, artificial intelligence is still our brainchild, a board game prodigy, but a child all the same. As it grows, we need to be careful about which parts of our human legacy we want to pass on to it. We want artificial intelligence to have a positive impact on our world, but that is neither guaranteed nor easy. There is a lot of work to be done to ensure that future doesn’t become a digital, algorithmic extension of our ugliest selves.
