How Machines Inherit Their Creators’ Biases

A.I. doesn’t have to be conscious to be harmful

Jayne Williamson-Lee
Published in Coinmonks · Jul 9, 2018


When machines learn to process language, they inherit gender and racial biases from human writing samples.

In Turkish, a single pronoun, “o,” covers “he,” “she,” and “it.” When a Turkish sentence using “o” is translated into English with Google Translate, the algorithm has to guess which English pronoun to use, usually defaulting to “he” when the gender is unknown and otherwise following stereotype: it produces “he is a doctor” but “she is a nurse,” “he is hard-working” but “she is lazy.” Many algorithms learn to process language from samples of human writing such as news articles and Wikipedia pages. From these samples, they build associations between words, some of them problematic, like “‘he’ is to ‘she’ as ‘brilliant’ is to ‘lovely.’” With people’s implicit biases modeled through language, machines become trained in the sexism and racism predominant in our culture.
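Modern translation systems are far more sophisticated than anything shown here, but the underlying dynamic can be caricatured in a few lines of Python: a system that simply follows corpus frequencies will reproduce whatever stereotypes the corpus contains. The tiny corpus and counts below are invented purely for illustration.

```python
from collections import Counter

# A tiny invented "corpus" standing in for the news articles and Wikipedia
# pages a real translation model is trained on.
corpus = [
    "he is a doctor", "he is a doctor", "she is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse", "he is a nurse",
]

# Count how often each English pronoun appears in a sentence with each noun.
pronoun_counts = {"doctor": Counter(), "nurse": Counter()}
for sentence in corpus:
    words = sentence.split()
    for noun in pronoun_counts:
        if noun in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    pronoun_counts[noun][pronoun] += 1

def guess_pronoun(noun):
    """Translate the genderless Turkish 'o' using whichever English pronoun
    co-occurred with the noun most often in the training text."""
    return pronoun_counts[noun].most_common(1)[0][0]

print(guess_pronoun("doctor"))  # -> "he"  (2 "he" sentences vs. 1 "she")
print(guess_pronoun("nurse"))   # -> "she" (3 "she" sentences vs. 1 "he")
```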

John Searle’s Chinese Room Argument proposes that a machine could mimic the behavior of a human being without understanding anything it said or did. There is mounting evidence that machines can also cause harm without understanding anything they say or do. A common misunderstanding about artificial intelligence, according to Stuart Russell, a computer scientist at the University of California, Berkeley, is that machines remain unthreatening, useful tools for our everyday lives as long as they are not “conscious.” Yet computer algorithms can have gender and racial biases similar to their creators’. Because algorithms are used for everything from filtering a pool of job applicants to recommending criminal sentences (based on outcomes for people who committed similar crimes), the consequences can range from denying a qualified woman a competitive job to recommending a harsher prison sentence than a person deserves. The results of using artificial intelligence biased against certain groups of people can be unfair and even immoral. Since A.I. doesn’t have to be “conscious” to be threatening, the harm biased algorithms already cause should motivate people to work toward de-biasing them so that they do not target people based on gender or race.

The Chinese Room Argument demonstrates that a computer can communicate with people without understanding anything, using the analogy of a man in a room who follows an instruction manual to write messages to a Chinese-speaking woman outside. The man does not know Chinese but follows the instructions in the books to carry on a conversation with her. The scenario remains a thought experiment, since it would be impractical to write instruction manuals detailed enough for the man to converse in Chinese, yet instruction manuals of exactly this kind have been written for computers, based on human writing. If the man in this analogy is the computer following the instructions of its code, and the code includes gender and racial biases, the man will express those biases without understanding what he is saying. To de-bias computers, people must rewrite their instruction manuals.

Sexist Language In Algorithms

Adam Kalai, a researcher at Microsoft, had a similar idea and partnered with researchers at Boston University to de-bias algorithms like Google Translate that have picked up sexist language from human writing. Kalai and his colleagues targeted word embeddings, the pieces of code that computers use as a “dictionary” for processing language. Word embeddings encode the relationships between words as numbers: the words “sister,” “brother,” “mother,” and “father” cluster together as related words. Given one pair of words, like “he” and “she,” a word embedding can be queried for another pair with the same relationship, such as “‘he’ is to ‘she’ as . . . ‘computer programmer’ is to ‘homemaker.’”
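Readers can poke at these associations themselves. The sketch below uses the open-source gensim library and a publicly available GloVe embedding trained on Wikipedia and news text (not the specific embedding Kalai’s team studied); the exact words returned depend entirely on the training data and the model chosen.

```python
import gensim.downloader as api

# Load a publicly available embedding trained on Wikipedia and news text.
# (~130 MB download on first use; any pretrained embedding would work here.)
vectors = api.load("glove-wiki-gigaword-100")

# Analogy query: which word relates to "she" as "programmer" relates to "he"?
# The answer depends entirely on the text the embedding was trained on.
print(vectors.most_similar(positive=["she", "programmer"], negative=["he"], topn=3))

# Related pairs cluster together, as measured by cosine similarity.
print(vectors.similarity("sister", "brother"))
print(vectors.similarity("mother", "father"))
```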

Kalai and his colleagues started with word embeddings because most language programs depend on them. Computer programmers plug these embeddings into larger programs that determine which search results, advertisements, and social media content we see. Pre-trained word embeddings are freely available, which saves programmers from having to train the roughly 300-dimensional vector spaces these algorithms typically use, but taking them for granted without correcting the biases they exhibit can be costly. A biased algorithm used to narrow a pool of candidates for a computer programming job will likely judge men’s resumes to be better suited if the embedding associates maleness more closely with computer programming.
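To see how such a bias propagates downstream, consider a crude, entirely hypothetical resume screener that represents each document as the average of its word vectors and ranks resumes by similarity to the job description. The three-dimensional toy vectors below are invented to exaggerate the effect; real systems differ in many details but inherit the same underlying geometry.

```python
import numpy as np

# Toy 3-dimensional embedding (real embeddings use hundreds of dimensions).
# The vectors are invented so that "programmer" leans toward the "he" direction,
# mirroring the kind of bias found in real embeddings.
embedding = {
    "programmer": np.array([0.9, 0.8, 0.1]),
    "software":   np.array([0.8, 0.6, 0.2]),
    "he":         np.array([0.1, 0.9, 0.0]),
    "she":        np.array([0.1, 0.0, 0.9]),
    "hockey":     np.array([0.0, 0.7, 0.1]),
    "volleyball": np.array([0.0, 0.1, 0.7]),
}

def doc_vector(words):
    """Represent a document as the average of its word vectors."""
    return np.mean([embedding[w] for w in words if w in embedding], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

job = doc_vector(["software", "programmer"])
resume_a = doc_vector(["programmer", "he", "hockey"])       # masculine-coded extras
resume_b = doc_vector(["programmer", "she", "volleyball"])  # feminine-coded extras

# Identical qualifications, but the gendered words shift the ranking.
print(cosine(job, resume_a), cosine(job, resume_b))  # resume_a scores higher
```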

Kalai and his colleagues pioneered a method that makes an embedding ignore certain relationships, generalizing over the cases where gendered associations are problematic. The team distinguished words whose gender is dictated by the English language, like “actress” and “queen,” from words that are problematically gendered in the embedding, like “nurse” and “sassy.” They were able to un-tag the problematic words, classifying them as gender-neutral.

Words above the line have been un-tagged and classified as gender-neutral, since gendering them would be problematic. Photo: Adam Kalai
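In vector terms, “un-tagging” a word can be sketched roughly as follows, loosely following the neutralizing step the researchers describe: estimate a gender direction from a definitional pair like “he” and “she,” then remove that component from every word flagged as gender-neutral. The vectors here are invented toy examples, not the team’s actual data.

```python
import numpy as np

def neutralize(word_vec, gender_direction):
    """Remove the component of a word vector that lies along the gender direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return word_vec - (word_vec @ g) * g

# Invented toy vectors; real embeddings have hundreds of dimensions.
he      = np.array([0.8, 0.1, 0.2])
she     = np.array([0.2, 0.1, 0.8])
nurse   = np.array([0.1, 0.9, 0.6])   # problematically gendered: flag as neutral
actress = np.array([0.1, 0.5, 0.9])   # legitimately gendered: leave untouched

gender_direction = he - she           # crude estimate from a single definitional pair
g_unit = gender_direction / np.linalg.norm(gender_direction)

nurse_debiased = neutralize(nurse, gender_direction)

# After neutralization "nurse" has no gender component; "actress" keeps hers.
print(round(float(nurse_debiased @ g_unit), 3))  # ~0.0
print(round(float(actress @ g_unit), 3))         # nonzero
```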

Hopefully, other computer programmers will build on Kalai and his colleagues’ work with word embeddings, but it is less clear how to address the other mediums in which machine bias appears. Biases have also been exposed in machines’ image-labeling techniques, specifically facial recognition software, and in the suggestion algorithms used for advertising.

Racism in Facial Recognition Software

Google Photos stores and organizes pictures by recognizing the objects and faces in them and categorizing them by people, animals, and so on. The program’s facial recognition was not adequately trained to recognize and differentiate people with darker skin, however. One man, Jacky Alcine, discovered this to his horror when a photo of one of his African-American friends was labeled as a gorilla. Google claims to have had employees of different races test Google Photos, but the software clearly still needs to be improved to be more inclusive of and accurate about people with darker skin. It is insulting and hurtful for people to see themselves or their friends tagged as a different species, especially when such technology can be used to justify racist rhetoric.

iPhone X Face ID software unlocked a Chinese woman’s phone for her co-worker, apparently unable to distinguish between the faces of two people of the same race, resulting in a privacy breach.

A similar case involved a Chinese woman, Hailing, whose iPhone X Face ID unlocked her phone after scanning her co-worker’s face. Apple advertised Face ID as having a “one in a million” chance that somebody else could unlock the phone, so it initially seemed that something was wrong with the camera on her device. Hailing replaced the phone, but the new one also unlocked when it scanned her co-worker’s face. In this case, the camera’s apparent difficulty distinguishing between the faces of two people of the same race resulted in a privacy breach, one that could have exposed sensitive personal information. Face ID needs to be reevaluated across people of different races to ensure that users’ privacy rights are not violated.

Suggestion Algorithms That Perpetuate Stereotypes

Suggestion algorithms, used by Google and other search engines, can be trained by users to show advertisements that perpetuate racial and gender stereotypes. When a name generally associated with African-Americans is searched, Google will display advertisements for a company that archives criminal records. Dr. Latanya Sweeney, a professor at Harvard University who specializes in data privacy, searched her own name and saw advertisements reading “Latanya Sweeney, Arrested?” even though she has no criminal record. She then studied how likely such advertisements were to appear, searching more than 2,100 other “black” names and finding that the criminal-record ads were 25 percent more likely to show for “black” names than for “white” names. One hypothesis is that the suggestion algorithm initially displayed the advertisement for both “black” and “white” names but learned that users clicked on it mainly when they had searched African-American-sounding names. The algorithm learned the users’ biases and took them into account for subsequent searches. By showing advertisements for companies that archive criminal records when “black” names are searched, the algorithm now reinforces people’s biases by showing them what they might expect.
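The feedback loop Sweeney hypothesizes can be sketched as a simple click-through-rate learner. Nothing below reflects Google’s actual advertising system; the simulated user behavior is an assumption built into the sketch, and the point is only that a system optimizing for clicks will absorb whatever bias shows up in those clicks.

```python
import random
from collections import defaultdict

random.seed(0)

ADS = ["arrest_record_ad", "contact_info_ad"]
clicks = defaultdict(int)
impressions = defaultdict(int)

def simulated_click(name_group, ad):
    """Assumed user behavior for this sketch: searchers click the 'Arrested?'
    ad more often when the name they searched is a 'black' name."""
    if ad == "arrest_record_ad" and name_group == "black_name":
        return random.random() < 0.10
    return random.random() < 0.05

# Phase 1: the ad server shows both ads evenly and records the clicks.
for _ in range(20000):
    group = random.choice(["black_name", "white_name"])
    ad = random.choice(ADS)
    impressions[(group, ad)] += 1
    clicks[(group, ad)] += simulated_click(group, ad)

# Phase 2: it now shows whichever ad earned the higher click-through rate for
# that group of names, and the users' bias becomes the system's bias.
def chosen_ad(name_group):
    return max(ADS, key=lambda ad: clicks[(name_group, ad)] / impressions[(name_group, ad)])

print(chosen_ad("black_name"))  # -> "arrest_record_ad"
print(chosen_ad("white_name"))  # -> either ad; there is no systematic pull
```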

Technology that reinforces people’s biases is one danger, but users’ activity can also reveal how they have internalized a stereotype. Other studies have concluded that women are shown more advertisements for lower-paying jobs than men are, which could be because women initially clicked on advertisements for higher-paying jobs less often, genuinely believing the ads did not apply to them. Algorithms designed to customize users’ experience can end up discriminating when they take people’s existing biases into account.

Eliminating Bias and Prioritizing Objective Machine Operations

There are ways to de-bias algorithms, as Kalai and his colleagues showed with word embeddings, but guarding against bias has not been made a priority when algorithms are programmed. Because algorithms carry human-like biases that reinforce discrimination against certain groups of people, it is people’s responsibility to take preventative measures against bias when programming them. Perhaps conscious machines of the future would have the reflective capacity to recognize their own biases, but as it stands, computers are not capable of performing a feminist critique. Until computers carry out tasks free of bias and prejudice, they cannot be viewed as objective tools that people can rely on in their everyday lives. The same principles that ensure objectivity in the scientific method can be applied to machines’ operations: we can ascribe objectivity to a machine’s operations if its outputs are produced in a nonarbitrary and unbiased manner.

It is possible that unbiased algorithms would output results that people find inadequate. For example, when Google Translate encounters a pronoun in another language whose gender is ambiguous, an objective translation might leave a blank space in the pronoun’s position to indicate that, given the limited context, the gender is unknown. While such results may not satisfy users, in many cases it would be better for technology to admit its shortcomings than to display a biased result. Google Photos’s image-labeling technology could likewise introduce features that double-check its results. If a gorilla is labeled in a picture among other pictures containing only people, the pictures were probably not taken at a zoo, and the algorithm could conclude that its result is suspect. Given the risk of labeling a person as an animal, it would be better for Google Photos to notify users that it is unable to identify the people and objects in a picture than to label it automatically as an animal. If algorithms cannot be immediately de-biased, it is advisable not to release them to the public until they have been thoroughly checked.
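One way to picture such a double-check is a guard wrapped around the labeler: if the top label is an animal but a face detector finds people in the photo, or the classifier’s confidence is low, the photo is left unlabeled for the user to tag. The classify and detect_faces functions below are hypothetical placeholders, not real Google Photos APIs.

```python
ANIMAL_LABELS = {"gorilla", "chimpanzee", "dog", "cat"}
CONFIDENCE_THRESHOLD = 0.85

def safe_label(photo, classify, detect_faces):
    """Return a label only when it passes simple sanity checks.

    `classify(photo)` -> (label, confidence) and `detect_faces(photo)` -> int
    are hypothetical placeholders for whatever models a photo service uses.
    """
    label, confidence = classify(photo)

    # Never apply an animal label to a photo that appears to contain people.
    if label in ANIMAL_LABELS and detect_faces(photo) > 0:
        return None  # ask the user instead of guessing

    # Refuse to guess when the classifier itself is unsure.
    if confidence < CONFIDENCE_THRESHOLD:
        return None

    return label

# Example with stub models standing in for the real ones:
label = safe_label(
    photo="IMG_0042.jpg",
    classify=lambda p: ("gorilla", 0.91),  # stub: biased classifier output
    detect_faces=lambda p: 2,              # stub: two faces found
)
print(label)  # -> None: the photo is surfaced to the user as unidentified
```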

If programmers find it unfeasible to de-bias an algorithm, the algorithm should not be available for public use and should be retired. Computer programmers and advertisers may have to reconsider the nature of suggestion algorithms. While these algorithms are good at attending to people’s consumer interests, it is hard to see how they can be trusted as objective when their very nature is to follow the preferences of the user. Implementing objective advertising algorithms might defeat the purpose of targeting consumers’ particular interests, but it would eliminate the risk of profiling or discriminating against people on the basis of whether or not they click on an advertisement.

There is a common narrative that encourages people to think that artificial intelligence will only pose a threat once it becomes “conscious” and turns against us. However, it is important to address the immediate consequences of computers exhibiting human-like biases, as their programming can discriminate on the basis of gender and race, creating unequal opportunity. Because we have instructed computers in our biases, it is our responsibility to de-bias them and equip them with the tools to operate objectively. Training them to guard against their biases could help train us out of our own.
