Systemic Algorithmic Harms

Theories of “bias” alone will not enable us to engage in critiques of broader socio-technical systems.

Kinjal Dave
Data & Society: Points
5 min read · May 31, 2019

Why is one far more likely to hear about “algorithmic bias” than about “algorithmic racism” or “algorithmic sexism,” even when we mean the latter? “Bias” is the latest chapter in the still unfolding history of social psychology, which has struggled to address why racism and other oppressive systems persist. In a recent historical review of this discourse, Dovidio et al. (2010) describe how the term “bias” emerged from the academic literature. They begin with Walter Lippmann, who, in his book Public Opinion (1922), popularized the term “stereotype” in the modern sense to mean one’s perception of a group of people.

By revisiting how Lippmann’s “stereotype” was taken up as a theoretical term in social psychology and developed into contemporary theories of “bias,” we can better understand the limits of using “bias” in today’s conversations about algorithmic harm. Because both “stereotype” and “bias” are theories of individual perception, our discussions do not adequately prioritize naming and locating the systemic harms of the technologies we build. When we stop overusing the word “bias,” we can begin to use language that has been designed to theorize at the level of structural oppression, both in terms of identifying the scope of the harm and who experiences it.

When we stop overusing the word “bias,” we can begin to use language that has been designed to theorize at the level of structural oppression.

To understand the causes of social divisions within the democratic society of his day, Lippmann needed to explain why individuals hold fast to generalizations about entire groups of people, even when those generalizations are harmful to social harmony. He was clearly critical of stereotypes, defining them as “a distorted picture or image in a person’s mind, not based on personal experience, but derived culturally.” His concept remains faithfully cited in the literatures of social psychology, journalism, and political science, especially as these fields respond to the social issues of their day. In The Nature of Prejudice (1954), social psychologist Gordon Allport famously argues that we hold stereotypes to rationalize our behavior toward individuals in a particular category. One definition in the contemporary psychology literature states that stereotypes are impressions that remain unchanged even after one is presented with new information relevant to the conclusion.

Today, the social and cognitive psychology literature describes bias as something that is implicit and inevitable in our thought process as we categorize the world around us. The stereotype, therefore, is a foundational component of social psychology’s attempt to theorize why people engage in the kinds of behaviors that allow social hierarchies to persist.

Given this history, when we say “an algorithm is biased,” we are, in some ways, treating an algorithm as if it were a flawed individual rather than an institutional force. In the progression from “stereotype” to “bias,” we have conveniently lost the negative connotation of “stereotype” from Lippmann’s original formulation. We have retained the concept of an inescapable mentalizing process for individual sensemaking, particularly in the face of uncertainty or fear. Yet algorithms operate at the level of institutions. Algorithms are deployed through the technologies we use in our schools, businesses, and governments, shaping our social, political, and economic systems. By using the language of bias, we may end up overly focusing on the individual intents of the technologists involved, rather than on the structural power of the institutions they belong to.

By using the language of bias, we may end up overly focusing on the individual intents of the technologists involved, rather than on the structural power of the institutions they belong to.

In fact, Lippmann acknowledges that stereotypes are “derived culturally,” but he makes no real commitment to theorizing the role of culture and institutions, firmly situating his analysis at the level of individual perception. One can see how “bias” suffers from a similar theoretical deficiency, for example, when we try to identify “racial bias in algorithms.” In 1967, Kwame Ture (Stokely Carmichael) and Charles V. Hamilton coined the term “institutional racism” to describe how the accepted social and political institutions of the status quo produce racially disparate outcomes. If algorithms function at the level of institutions, then they enforce policies of institutional racism within a structurally racist society.

As Camara Phyllis Jones argues, racism should be analyzed using a framework that includes micro (individual), meso (institutional), and macro (systemic) levels of analysis. Bias as a term obscures more than it explains because we are not equally concerned about all biases for the same reasons. We specifically care about dismantling algorithmic biases that enable continued harm to those belonging to one or more historically oppressed social identity groups.

When we use “bias” to talk about inequalities extended by algorithms, we are, as Lippmann did, bounding our analysis at the level of the individual. But theories of “bias” alone will not enable us to engage in critiques of broader socio-technical systems. Perhaps because we insist on using bias as the starting point for our critical technology conversations, we have been slow to take up Safiya Noble’s identification of “oppression” as the impact of technologies that stereotype. What would happen if we cited Kwame Ture and Charles V. Hamilton as faithfully as we do Walter Lippmann in the development of our theoretical frames?

Only a shift to institutionally focused language will make room for systemic critique, allowing us to see more clearly what’s at stake when we talk about the future risks of the technologies we build and to identify who specifically experiences the harmful consequences of a technology, no matter how well-meaning the technologist may be. Only when we clearly name the problem can we be held accountable for addressing it.

Kinjal Dave is a research analyst with the Media Manipulation Initiative at Data & Society. She is an incoming PhD student at the University of Pennsylvania’s Annenberg School for Communication.
