Is technology racist?

Katharina Buiten
Published in tech2impact
Jun 6, 2020 · 6 min read

Technology is normally seen as a tool to help solve problems. This is also one of the reasons why we do what we do, as we see the potential for the positive impact of technology. But of course, where there is light, there must also be shadow. Technology by itself is neither good nor bad; it is a tool designed and used by us, humans. However, if we are unaware of our perspectives and biases, there is a high chance that the technologies we develop are designed for a specific group of people and exclude others. As the people working in the tech industry are still quite a homogeneous group, it is no surprise that algorithms and technologies often lead to “racist bugs”.

What are “racist bugs”?

In the past years, there have been many cases where technologies didn’t work as they should, showing major algorithmic biases. Being human means being error-prone and biased, and so are the algorithms we produce. Therefore, when society defines and frames people of color as “the problem”, solutions like facial recognition technologies tend to target criminal suspects on the basis of skin color.[1] To give an example: a Dutch system attempts to infer how likely children under 12 are to become future criminals, and its algorithm puts overwhelmingly black and brown men and boys in those databases.[2] Unfortunately, those examples aren’t individual cases; there are many more:

  • HP’s webcams, which came with facial tracking software, could not detect dark-skinned faces.[3]
  • Google Photos algorithmically identified black people as gorillas.[4]
  • Microsoft’s Twitter bot Tay, designed to learn from conversations on Twitter, became racist and anti-feminist within 24 hours.[4]
  • Snapchat offered a filter that rendered users as an offensive Asian caricature.[4]
  • Caliskan and colleagues published a paper finding that as a computer teaches itself English, it becomes prejudiced against black Americans and women.[5]
  • Kodak’s color film was specifically developed to make white people look good — the “Shirley cards”.[6]
  • A Nikon Coolpix digital camera flashed a warning that someone had blinked in the photo whenever a Taiwanese-American customer smiled.[6]
  • FaceApp’s filter to make people “more attractive” in practice made them whiter.[6]
  • Automatic taps and soap dispensers are unable to detect darker skin tones.[7]
  • Speech recognition technologies developed by Amazon, Google, Apple, Microsoft, and IBM make almost twice as many errors when transcribing African American voices as they do with white American voices.[8]
  • Google image search is very white and non-inclusive. Try it yourself and google “grandma” [World White Web].[9]

The root of algorithmic biases

Machine learning-based systems are trained on data. Training involves exposing a computer to a large amount of data, of any kind, so that the computer learns to make judgments, or predictions, about the information it processes based on the patterns it notices. Sounds pretty straightforward? Yes, but the data on which many of those systems are trained or validated is often incomplete, unbalanced, or inappropriately selected, and that can be a major source of algorithmic bias.[10]
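To make this concrete, here is a minimal sketch of how a model trained on biased historical decisions reproduces that bias. Everything here is invented for illustration: the data is synthetic, the hiring scenario is hypothetical, and it assumes scikit-learn is installed.

```python
# A minimal, hypothetical sketch: a model trained on biased historical
# hiring decisions reproduces that bias on new, equally skilled candidates.
# All data is synthetic; this is an illustration, not a real system.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_historical_example():
    group = random.randint(0, 1)   # 0 = majority, 1 = minority (hypothetical labels)
    skill = random.gauss(0, 1)     # skill is distributed identically in both groups
    # Past decisions rewarded skill but penalized minority-group membership.
    hired = 1 if skill - 1.5 * group + random.gauss(0, 0.5) > 0 else 0
    return [group, skill], hired

data = [make_historical_example() for _ in range(5000)]
X = [features for features, label in data]
y = [label for features, label in data]

model = LogisticRegression().fit(X, y)

# Two new candidates with identical average skill, differing only in group:
print(model.predict_proba([[0, 0.0]])[0][1])  # majority group: higher hire probability
print(model.predict_proba([[1, 0.0]])[0][1])  # minority group: noticeably lower
```

The model never saw an instruction to discriminate; it simply learned the pattern baked into the historical labels.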

One fundamental problem of machine learning is that it learns only from old, already-collected training data; it cannot learn from data we haven’t collected yet. This means that the AI learns how the world has been and picks up on status quo trends. It doesn’t know how the world should be.[5] Another issue is that in some cases it might even be impossible to find training data free of bias. Take the historical data produced by the United States criminal justice system: it is hard to imagine that data produced by an institution suffering from systemic racism could be used to build an effective and fair tool.[10]

“You got the algorithms which are super powerful, but just as important is what kind of data you feed the algorithms to teach them to discriminate, if you feed it with crap, it will spit out crap in the end.”

Alexander Todorov, psychologist and facial perception expert

The question we have to confront is whether we will continue to design and use technologies that support racism in a direct or indirect way.

The checklist: how to design your tech responsibly

Be aware of your bias and background
As with all important changes, the first and biggest step is awareness. Therefore, reflect on your own background, privilege, and bias. It also makes a lot of sense to seek out perspectives from people who have a different background. Projects implemented with a biased worldview can end up reinforcing the asymmetric power dynamic that causes social inequality.[11]

Know your community/user and work with them
The first rule of product development in tech is to know your user or community. That should go without saying. What isn’t so obvious is the problem of working on a project on behalf of a community you don’t belong to. Before embarking on a new project, it’s crucial to obtain support, feedback, and consent from that community.[11]

Keep in mind where you are operating
Coming back to Microsoft’s chatbot disaster: Tay was a chatbot programmed to learn and relearn from humans. Bots get better and more accurate the more training they get, so if a chatbot receives more racist input, it will become more racist. But data scientists can blacklist specific words so that they never become part of the bot’s vocabulary. Microsoft did not consider the problematic Twittersphere and didn’t blacklist the racist language used there. They didn’t take into account where their technology would be operating, which is why Tay turned into a racist anti-feminist within 24 hours.[9]
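A word blacklist is conceptually simple. The sketch below is a hypothetical illustration of the idea, not Microsoft’s actual implementation: messages containing blocked terms are filtered out before they can enter the bot’s training corpus.

```python
# Hypothetical sketch of a vocabulary blacklist; not Microsoft's real code.
# Messages containing blocked terms never reach the bot's training data.
BLACKLIST = {"slur1", "slur2"}  # placeholder tokens standing in for real slurs

def is_safe(message: str) -> bool:
    """Return True only if the message contains no blacklisted word."""
    words = {word.strip(".,!?").lower() for word in message.split()}
    return words.isdisjoint(BLACKLIST)

def ingest_for_training(message: str, corpus: list) -> None:
    """Only safe messages become part of the bot's vocabulary."""
    if is_safe(message):
        corpus.append(message)

corpus = []
ingest_for_training("Hello there!", corpus)
ingest_for_training("a tweet containing slur1", corpus)
print(corpus)  # ['Hello there!']
```

In practice, exact word matching is easy to evade with misspellings, so real moderation combines blacklists with more robust filters; the point here is simply that a safeguard of this kind existed and wasn’t used.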

Create diverse teams
One reason for bias in technology is the lack of diversity within tech teams. In order to create inclusive technology, it is necessary to review recruiting processes and make diversity in your company a priority. This will not only benefit your algorithms but also improve your overall work culture.
Data insights on tech staff:[14]
  • Google (2016): 19% are women and just 1% are Black
  • Microsoft (2016): 17.5% are women and just 2.7% are Black
  • Facebook (2016): 17% are women and just 1% are Black

Check the output of the programs
Humans using machine learning programs shouldn’t blindly trust the technology; they should regularly question why they are getting those specific results and review whether the data they are looking at reflects historical prejudices. It is important that the people who use those programs do not assume that a computer can produce a less biased result than a human.[5]
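One practical way to check a program’s output is to break its error rate down by group, in the spirit of the speech recognition study cited above. The sketch below is hypothetical: the prediction records are invented, and a real audit would use far more data.

```python
# Hypothetical sketch of a per-group error audit; all records are invented.
from collections import defaultdict

# Each record: (group, model prediction, actual outcome)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in results:
    totals[group] += 1
    errors[group] += predicted != actual  # True counts as 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    # A large gap between groups is a red flag worth investigating;
    # a small gap on a toy sample like this proves nothing either way.
    print(f"{group}: error rate {rate:.0%}")
```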

Existing tech-solutions to fight racism

The impact of technology depends heavily on its design and use. Many technologies therefore have the potential, sometimes unrealized, to eliminate or reduce racism and bias.

An important part of reducing racism is educating ourselves. A good way to do so is to enroll in an online course and learn more about racism and bias. You can check out different platforms offering MOOCs on the topic; edX, for example, offers a course on “Bias and Discrimination in AI”. An alternative is the use of VR, where people can put themselves in the shoes of others and experience what it is like to be affected by racism and exclusion.[12]

While big data definitely contains the potential for unfairness and biased machine learning, it is also a critical tool for finding and fighting discrimination. By using big data’s sophisticated predictive tools to examine large, dynamic databases, organizations are increasingly able to expose and address patterns of inequality. When used responsibly, big data can continue to help us prevent discrimination, empower vulnerable groups, and promote equality.[13]
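As a toy illustration of what such analysis can look like, the sketch below scans a synthetic decision log for groups whose approval rate falls well below the best-off group’s, loosely inspired by the “four-fifths rule” used in US employment law. The data, field names, and threshold are all assumptions made for the example.

```python
# Hypothetical sketch: scan a decision log for disparate outcome rates.
# All records are synthetic; the 0.8 threshold follows the "four-fifths rule".
from collections import Counter

decisions = [  # (group, outcome) pairs, e.g. loan or job applications
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "denied"),
    ("group_b", "denied"), ("group_b", "denied"), ("group_b", "approved"),
]

approved = Counter(group for group, outcome in decisions if outcome == "approved")
totals = Counter(group for group, outcome in decisions)
rates = {group: approved[group] / totals[group] for group in totals}

best_rate = max(rates.values())
for group, rate in sorted(rates.items()):
    # Flag groups whose rate is below 80% of the best group's rate.
    flag = "  <-- possible adverse impact" if rate < 0.8 * best_rate else ""
    print(f"{group}: approval rate {rate:.0%}{flag}")
```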

So, how can technology be more inclusive?

“Any technology that we create is going to reflect both our aspirations and our limitations. If we are limited when it comes to being inclusive that’s going to be reflected in the robots we develop or the tech that’s incorporated within the robots.”

Joy Buolamwini

Suggested talks & books
Code4Rights, Code4All | Joy Buolamwini | TEDxBeaconStreet
Experiencing Racism in VR | Courtney D. Cogburn, PhD | TEDxRVA
Race After Technology by Ruha Benjamin

Experts on the topic
Joy Buolamwini [Algorithmic Justice League]
Timnit Gebru

Solutions specifically focused on reducing racism
App: Everyday Racism
Platform: Mapping Police Violence
AI-based software to analyze stereotypic content: Develop Diverse
