The Urgency of Moving from Bias to Power

Catherine D'Ignazio (she/ella)
Data + Feminism Lab, MIT
May 16, 2023

This is reproduced from the foreword I wrote for the European Data Protection Law Review’s special issue on Data Bias & Inequality, Volume 8 (2022), Issue 4.

Data, algorithms and artificial intelligence are quite literally everywhere. They are being mobilized in such diverse areas as health, housing, culture and media, urban planning, policing, transportation, education and more. As these novel applications of AI in different domains have multiplied, so have reports and investigations of the harms caused by or exacerbated by them. Risk assessment algorithms give higher risk scores to Black people accused of crimes (1). Residents of Michigan are awaiting the return of more than twenty million dollars that was stripped from them due to a faulty fraud-detection algorithm (2). A child abuse detection algorithm disproportionately targeted poor parents (3). Resume screening algorithms consistently demote CVs from women applicants (4). Until recently, Google search returned results filled with pornography and stereotypes when users typed in “Black girls.” (5)

To any scholar of feminism or critical race theory these harms come as absolutely no surprise. These impacts are entirely foreseeable and predictable, right here and right now. What has been most surprising for those of us working in fields that seriously engage with power and inequality is how surprised the rest of the world is by the very basic ideas about data that are now coming to light. For example: that data are not neutral; that data are shaped by interests, often financial ones; that data can cause and exacerbate harm, including physical violence; that data take resources and capital to collect, store and maintain; that datasets and algorithms are artifacts produced by humans; that when a racist and sexist and colonial society generates data, those data will embody such racism and sexism and colonialism.

These basic observations about datasets and algorithms led Lauren F. Klein and me, in our book Data Feminism, to make the case that critical thinking about data and AI needs to move beyond the notion of bias and into a deeper engagement with power. Here, power means the structural organization of privilege and oppression in a given society, in which some groups experience unearned advantages and other groups experience systematic and violent disadvantages. Specific manifestations of privilege and oppression include but are not limited to: cisheteropatriarchy, settler colonialism, white supremacy, and ableism. These are forces that structure the organization of society at multiple scales: from laws and their implementation, to culture and media, to interpersonal interactions (6). Bias, on the other hand, is often conceived of at the scale of an interpersonal interaction (e.g. a sexist comment) or a specific harmful event between an individual and a company (e.g. a facial recognition system incorrectly targeting a person of color). In contrast, Klein and I build on the legacy of intersectional feminist theory to make the case that we must think about power as structural and multiscalar in order to address the root causes of discrimination and inequality (7,8).

When we understand power as structural and multiscalar, we can see clearly that the default setting for data and technology will be one that bolsters and upholds existing power structures. Women will be subordinated. Racial and ethnic minorities will be over-surveilled. White people in the Global North will amass more money and property and control. Transgender people will be erased or targeted. Indigenous land will be expropriated for extractive industries. Low-income people will be preyed upon. Democracies will be literally sunk so that Meta can make a buck (9,10). And indeed that is what is happening.

But it does not have to be like this. Feminist theory and critical race theory do not only teach us how to examine power; they also teach us how to challenge power. While Big Tech prides itself on disruption, it is high time to disrupt it right back, wielding the tools of a democratic society: law, policy, labor organizing, protest. Corporations that mobilize data and AI have proved themselves unwilling and unable to address human rights transgressions and threats to democracy, because addressing them would interfere with making money, plain and simple. As the American abolitionist Frederick Douglass famously said in a speech given in 1857: “Power concedes nothing without a demand. It never did and it never will.” (11)

Holding corporate and government actors accountable through regulation is imperative, and it is happening, slowly, as evidenced by recent developments such as the White House’s Blueprint for an AI Bill of Rights or the EU’s proposed AI Act (12,13). But regulating datasets, algorithms and platforms is not straightforward, as readers of this journal well know. As we work in this space, I’d like us to keep one question in mind:

Who bears the burden of proof for data-driven harms?

Too often, legal approaches prioritize the “bias” model of proving harm. Harm is conceived as something that is perpetrated by an intentionally bad actor and something that happens to an individual person. There are three things wrong with this model. First, feminism and critical race theory point to the structural nature of harm — racism exists outside of the racist intention to discriminate. Thus, a company could be operating in good faith and still exacerbate racial stratification. Second, while individuals are indeed harmed by data-driven systems, those harms are not isolated to those individuals. Because of the power of data to aggregate and classify, data-driven harms will almost always be group-based harms, experienced by individuals who are alike on some axis of similarity: single mothers, Indigenous land defenders, drag queens, farm workers, and so on. Thus, addressing a problem for one individual does not address it for the whole group that is being discriminated against. Third, addressing harm happens retroactively and is undertaken by an individual. This is evident in the EU’s recently proposed AI Liability Directive, under which an individual who can prove they were harmed by an algorithm can then sue the company (14). But which everyday citizen amongst us has the means or the time or the expertise to prove algorithmic harm? Why must we patiently wait to be harmed before we are able to take action?

Operating from a “power” mindset would flip this script. Remembering that the default setting for an unequal society is going to be more inequality, this approach acknowledges that institutions that wield large datasets and algorithms will cause harm and will exacerbate inequality unless they are preemptively required to do otherwise. The power-aware approach places the burden of proof on corporations and governments to prove safety, rather than on individuals to prove harm. One promising mechanism for realizing such an approach is the algorithmic impact assessment, which would audit algorithms for disproportionate effects throughout the lifecycle of development (15).

Yet assessing the group-based risks and harms of algorithms requires more than internal, pre-deployment technical evaluation. For example, a predictive policing algorithm may be tweaked in the lab so that it does not intrinsically discriminate against racial and ethnic minorities. And yet, when it is deployed in a context like the United States, such an algorithm would enable and exacerbate existing and deeply entrenched racist policing structures. This is to say that it is not possible to evaluate algorithms using purely statistical or technical methods. We must consider the context and history of their deployment environments, and evaluate risks accordingly. We must keep the option on the table that some technologies, when married with some deployment environments, are a toxic nightmare for minoritized people. In these cases, there should be paths to moratoria and outright bans.

The stakes are high. Meeting this moment requires the creativity and tenacity to struggle against the debilitating concentrations of capital and political power held by data companies. But as Frederick Douglass also said, “If there is no struggle, there is no progress.” The struggle is now and demands our urgent imagination.

References

1. Angwin, J., Larson, J., Mattu, S. & Kirchner, L. Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. ProPublica https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (2016).

2. Angwin, J. The Seven-Year Struggle to Hold an Out-of-Control Algorithm to Account — The Markup. https://themarkup.org/newsletter/hello-world/the-seven-year-struggle-to-hold-an-out-of-control-algorithm-to-account (2022).

3. Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. (St. Martin’s Press, 2018).

4. Gershgorn, D. Companies are on the hook if their hiring algorithms are biased. Quartz https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/ (2018).

5. Umoja Noble, S. Algorithms of Oppression: How Search Engines Reinforce Racism. (NYU Press, 2018).

6. Collins, P. H. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. (Routledge, 2002).

7. Crenshaw, K. Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color. Stanford Law Review 43, 1241–1299 (1991).

8. Collins, P. H. Intersectionality as critical social theory. (Duke University Press, 2019).

9. Reuters. U.N. investigator says Facebook provided vast amount of Myanmar war crimes information. Reuters (2022).

10. Facebook Has a Week to Fix Pro-Genocide Ad Problem in Kenya. Gizmodo https://gizmodo.com/facebook-kenya-pro-genocide-ads-hate-speech-suspension-1849348778 (2022).

11. Douglass, F. ‘If There Is No Struggle, There Is No Progress’. Speech delivered in 1857.

12. White House Office of Science and Technology Policy. The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf (2022).

13. European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. (2021).

14. Heikkilä, M. The EU wants to put companies on the hook for harmful AI. MIT Technology Review https://www.technologyreview.com/2022/10/01/1060539/eu-tech-policy-harmful-ai-liability/ (2022).

15. Raji, I. D. et al. Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency 33–44 (Association for Computing Machinery, 2020). doi:10.1145/3351095.3372873.

