Is demography destiny?

Geraldine Moriba
JSK Class of 2019
Jul 15, 2019 · 6 min read

How bias works in the 21st Century

Photo Credit: Tom Van De Weghe

What do you see in this drawing? Do you see a boy encouraging a girl to walk up the stairs? Or do you see a boy pushing her down the stairs? If the races of these children were flipped, would you see it differently? Bias is a prejudice that favors one person or group over another. What you saw at first glance was informed by your biases. (See Acknowledgment #1.)

Artificial intelligence isn’t a far-off, futuristic reality; it is already in common use across civil society today. It is informed by our biases. It influences our decision-making, emotions and actions, as well as the privileges or penalties we might receive. The problem is that because these invisible systems are embedded with our biases, inequity and inequality are inevitable byproducts.

As a part of my John S. Knight Journalism Fellowship at Stanford University, I am participating in research with Maneesh Agrawala, the director of the Brown Institute for Media Innovation; Kayvon Fatahalian, assistant professor of computer science; and James Hong, PhD candidate. We are exploring ways to use AI-based image, audio, and transcript processing techniques to analyze patterns and trends in content, bias, and polarization in TV news broadcasts, and eventually other platforms. Ultimately, we plan to release interactive web-based tools that will give journalists and data scientists another way to increase transparency around their editorial choices. This is one application of AI as a tool to identify expressions of human bias, specifically in journalism. But what about the biases in the algorithms themselves?

Image Credit: Robert Adams

When programmers choose which attributes to include in predictive modeling tools and which ones to ignore, they can significantly impact the accuracy and results. Biases built into the actual algorithms and models are hard to detect and even more challenging to fix.

Automated decision-making gives authority and power to algorithms built by a narrow demographic of programmers who overwhelmingly share similar interests, experiences and opinions. That means algorithmic bias needs to be identified consistently, with checks and balances. The biases of programmers inform the training data they use. In other words, bias creeps in long before data is even collected. Our personal perspectives inform the code we write. The results can be unrepresentative of reality and a reflection of existing prejudices.
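
To make that concrete, here is a minimal, hypothetical sketch in Python (the data, the “skill” and “proxy” attributes, and the numbers are all invented for illustration). It shows how a model trained on historically biased decisions keeps penalizing a group even when group membership is never given to the model, because a correlated proxy attribute carries the same signal.

```python
# Minimal, hypothetical sketch: a model trained on historically biased
# labels learns to penalize a group through a proxy feature, even though
# the group attribute itself is never shown to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group membership: never given to the model
group = rng.integers(0, 2, size=n)
# A proxy attribute that correlates strongly with group (think ZIP code)
proxy = (group + (rng.random(n) < 0.1)) % 2
# A genuinely relevant attribute, distributed identically in both groups
skill = rng.normal(size=n)

# Historical decisions: driven by skill, but with a penalty applied to group 1
historical_label = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Train only on "neutral-looking" features: skill and the proxy
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, historical_label)

# The model reproduces the historical disparity via the proxy
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Dropping the sensitive attribute is not enough: the penalty baked into the historical labels flows through the proxy, so the two groups end up with very different predicted approval rates even though the legitimate attribute is distributed identically.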

Here are real-world examples that reflect biases in algorithms and the resulting life consequences.

Is artificial intelligence helping or exacerbating our biases?

In response to mass shootings, schools and hospitals are deploying surveillance technologies with audio detection tools to monitor and identify aggression. Wired magazine and ProPublica have found that these tools don’t take words or meaning into account. Instead, they monitor tone and register rougher sounds, like a strong cough, as aggressive. They aren’t able to identify potential shooters or violent aggressors who are outwardly calm or quiet. And how do these tools distinguish tones that are loud and clamorous, but not necessarily aggressive?

Amazon built an internal tool to help with gender balance in recruiting and pay equity. The tool was trained on data from past hiring decisions, which historically favored men over women. That included verbs highly correlated with ranking men over women, such as “executed” and “captured.” In other words, the algorithm built to fix the problem repeated it: it dismissed female applicants. Amazon stopped using the tool and disbanded the team that created it.
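
To see how that kind of correlation surfaces, here is a toy sketch with invented résumé snippets and outcomes, not anything from Amazon’s actual system: train a simple bag-of-words screener on past decisions, then inspect which words the model ends up weighting most heavily.

```python
# Toy sketch (not Amazon's system): train a bag-of-words screener on
# invented historical decisions, then inspect which words carry the most
# weight. Words that merely correlate with who was favored in the past
# can end up driving the model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch and captured new market share",
    "executed migration plan, captured performance gains",
    "led women's engineering society, coordinated mentoring program",
    "organized women's coding outreach and community workshops",
    "executed rollout strategy for enterprise clients",
    "coordinated volunteer program and mentoring sessions",
]
# Historical screening outcome (1 = advanced to interview), reflecting
# past decisions rather than actual ability
advanced = [1, 1, 0, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, advanced)

# Rank words by learned weight: positive weights push toward "advance"
weights = sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1])
print("pushes against advancing:", weights[:3])
print("pushes toward advancing:", weights[-3:])
```

In this toy data, the words that most strongly push an applicant forward are simply the ones that appeared on the résumés favored in the past, which is the pattern reported in Amazon’s case.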

Across the country, there is a network of cameras that capture your face. Authorities are able to identify you in a crowd and trace your movements. The problem is that these algorithms are less accurate for African Americans, women and younger people, according to the Center on Privacy & Technology at Georgetown Law, and this data can be used in any number of ways. San Francisco is the first major city to ban the use of facial recognition surveillance technology.

A U.S. Government Accountability Office report found that some of the TSA full-body imaging technology machines used at airports “had a higher false alarm rate when passengers wore turbans and wigs.” ProPublica found that the same scanners also frequently give false alarms for Afros, braids, twists and other hairstyles worn by black women.

The sentencing and risk assessment software used across the country to predict the likelihood of someone committing a future crime is demonstrably biased against Black defendants. Bernard Harcourt, Columbia University professor of law and political science, found that these automated risk-assessment tools make the racial disparities already produced by the criminal justice system even more acute. Los Angeles has eliminated its program, but many courts still use these tools.

There is a large racial and gender bias in AI services from companies like Microsoft, IBM, and Amazon, according to Joy Buolamwini, a researcher at MIT’s Media Lab. “On the simple task of guessing the gender of a face, all companies’ technology performed better on male faces than on female faces and especially struggled on the faces of dark-skinned African women.” She also identified the potential for discrimination in online targeted advertising. (See Acknowledgment #2.)

UNESCO warns of the negative consequences of Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google’s Assistant. It argues that the default female voices of these personal assistants perpetuate the idea that “women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command.”

Local community councils in the United Kingdom are developing “predictive analytics” systems to predict child abuse and vulnerability to gang exploitation, with the goal of intervening before abuse or violence happens. Whether these systems work is still to be determined. Beyond the obvious concerns about accuracy and effectiveness, they raise questions about what a “safe” family looks like.

Can the biases in algorithms be fixed?

We all have biases and prejudices. The AI algorithms we create are informed by these biases, unconscious or not. Fortunately, some AI researchers are already trying to solve this problem. They are developing algorithms to help mitigate hidden biases within training data. They are also building more inclusive data sets, like Google’s Inclusive Images Competition, and evaluating algorithms against the Pilot Parliaments Benchmark (PPB), a data set specifically designed to surface biases in computer vision systems.
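
The evaluation side of that work can be sketched simply. The idea behind a benchmark like PPB is disaggregated evaluation: instead of one overall accuracy number, report accuracy separately for each demographic subgroup. Here is a minimal illustration in Python with invented records; in practice the labels and predictions would come from a benchmark data set like PPB and the system under test.

```python
# Minimal sketch of disaggregated evaluation: report accuracy per
# demographic subgroup instead of a single overall number. The records
# below are invented; in practice they would come from a labeled
# benchmark such as the Pilot Parliaments Benchmark.
from collections import defaultdict

records = [
    # (true_label, predicted_label, subgroup)
    ("female", "female", "darker-skinned female"),
    ("female", "male",   "darker-skinned female"),
    ("male",   "male",   "darker-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("female", "male",   "darker-skinned female"),
]

totals = defaultdict(int)
correct = defaultdict(int)
for true, pred, subgroup in records:
    totals[subgroup] += 1
    correct[subgroup] += int(true == pred)

for subgroup in sorted(totals):
    acc = correct[subgroup] / totals[subgroup]
    print(f"{subgroup}: {acc:.0%} accuracy ({totals[subgroup]} faces)")
```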

San Francisco has taken the step of implementing an automated system to make prosecutorial discretion less biased. They are using an AI “bias mitigation tool” that automatically redacts anything in a police report that might be suggestive of race, from hair color to ZIP codes. The goal is to reduce any possible racial bias among prosecutors reviewing police reports.
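
For a rough sense of what a redaction step can look like, here is a simplified, hypothetical sketch based on keyword and pattern matching. It is not the tool San Francisco actually uses, which reportedly relies on more sophisticated natural-language processing, but it shows the basic idea of masking details that can act as racial proxies.

```python
# Simplified, hypothetical sketch of redacting race-suggestive details
# from free-text reports. Not the actual San Francisco tool; a real
# system would use NLP rather than simple pattern matching.
import re

# Illustrative descriptors that could act as racial proxies in a narrative
PROXY_TERMS = ["black hair", "blond hair", "dreadlocks", "afro"]

def redact(report: str) -> str:
    redacted = report
    # Mask five-digit ZIP codes, which can proxy for neighborhood and race
    redacted = re.sub(r"\b\d{5}\b", "[REDACTED ZIP]", redacted)
    # Mask listed descriptors, case-insensitively
    for term in PROXY_TERMS:
        redacted = re.sub(re.escape(term), "[REDACTED DESCRIPTOR]",
                          redacted, flags=re.IGNORECASE)
    return redacted

print(redact("Suspect with dreadlocks seen near 94110, black hair noted."))
```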

The good news is that the same ingenuity used to create these AI tools can be used to fix them. If we want technology to represent all of us, we all have to participate. That means gaining an understanding of what algorithms can do and supporting pipelines that increase representation.

Acknowledgments:

  1. See Jones, Elaine F.; Parker, Bonita L.; Joyner, M. Holland; Ulku-Steiner, Beril. “The influences of behavior valence and actor race on Black and White children’s moral and liking judgments.” The Journal of Psychology, Vol. 133, Iss. 2 (Mar 1999): 194–204.
  2. See Brendan F. Klare et al., Face Recognition Performance: Role of Demographic Information, 7 IEEE Transactions on Information Forensics and Security 1789, 1797 (2012).

If you have recommendations of ways to combat algorithm bias, I’d like to hear from you. DM me on Twitter @geraldinemoriba.


Geraldine Moriba is a JSK Journalism Fellow at Stanford, Class of 2019, an Emmy-winning documentary filmmaker, and a writer.