Communities of Resistance: Fighting Algorithmic Injustice and Building Better Futures

Written by Kimberly Oula.

Illustration of a dark skinned Black woman wearing a mask, a black t-shirt, blue jeans and white sneakers. Her left hand is raised straight up in a Black power salute. She is shown in front of a web of lines on a blue-gray background.
Communities of Resistance // BlackIllustrations.com — The Movement Pack // Time Lapse — Pixabay

Can we escape the consequences of algorithmic injustice?

  1. When I’m prevented from attending my university of choice because the algorithms used to decide A-Level results disproportionately penalise marginalised groups that I’m part of?
  2. When I need to use facial recognition technology to sign in to work but it doesn’t recognise my brown skin and locks me out of work?
  3. When I’m unjustly racially profiled and added to the Metropolitan police’s discredited Gangs Matrix?
A flat monochrome drawing of a brain with dots and lines as connecting points.
Artificial Intelligence by monkik // Edits by Edafe Onerhime

An algorithm is “A finite set of unambiguous instructions that, given some set of initial conditions, can be performed in a prescribed sequence to achieve a certain goal and that has a recognizable set of end conditions.” — freedictionary.com

Algorithms that are used in the types of machine learning systems described above all learn from data that is socially and historically constructed.

Algorithms are not just simple pieces of code: they are being embedded in ever more important systems and, by extension, are altering our social fabric by drawing on and amplifying human assumptions (embedded in the datasets on which they are trained). Portraying an algorithm as a simple, wholly neutral piece of code, and its predictions as wholly rational, disregards the assumptions and social contexts that make the code what it is.

In all three examples of harm mentioned above there is a dominant narrative of efficiency that drowns out voices calling for more equitable approaches to decision-making. Professor Ruha Benjamin beautifully articulates the shortcomings of this approach when she notes that prizing efficiency over equity produces containment. More often than not it is already marginalised groups that are ‘contained’. Much is made of the transformative potential of data. However, when its collection and curation mirror the same extractive processes applied to resources like oil, producing capital for a few and not for all, how can we make the most of the promises of the digital age?

Gambling Futures

Illustration of a light skinned Black teenager sitting at a desk, wearing an aqua green t-shirt, dark trousers, and aqua green sneakers. His left hand props up his head while his right hand holds open a book. A stack of books is to his right and a backpack is on the floor. He is shown in front of a web of lines on a blue-gray background.
Algorithmic Oppression // BlackIllustrations.com — The Education Illustration Pack // Time Lapse — Pixabay

In 2020, Ofqual decided that algorithmic decision-making was the best approach for deciding A-level grades for pupils who’d been unable to sit conventional exams on account of the ongoing pandemic lockdown. They decided on this approach instead of so-called Centre Assessed Grades (CAGs) because it addressed the “unprecedented increase” in higher A-level grades compared with 2019 (“CAGs at grade A and above were 12.5% higher than outcomes in 2019”). In other words, algorithmic decision-making provided an ‘objective’ way to counter the threat that this inflation posed to the “market value” of higher grades. Furthermore, Ofqual believed their preferred approach was fairer than accepting CAGs; they reasoned that:

“accepting CAGs would also mean that any leniency or severity in the CAGs submitted by individual schools and colleges would not be addressed. This would make it easier to get a grade at one school or college than another leaving unfairness between schools and colleges.” (Quote: A-Level Awarding explainer by DfE).

Ofqual was interested in addressing some forms of unfairness; it just didn’t consider all forms of unfairness as equally important. The key criticism of Ofqual’s algorithmic approach was that it penalised some high-performing students who attended schools with historically poor performance. This ‘quirk’ of the algorithm, which disproportionately affected pupils from marginalised groups, was known and government ministers were informed, but the decision was made to press ahead. Ofqual and government ministers chose to be guided by their success criterion that the attainment gaps between groups should not widen [as a result of their model] (OSR Review, 2021). Roger Taylor, then chair of Ofqual, remarked at the time: “Some students may think that, had they taken their exams, they would have achieved higher grades. We will never know.”
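To make this criticism concrete, here is a deliberately simplified sketch, not Ofqual’s published model, of the general mechanism of mapping a teacher’s rank ordering of pupils onto a school’s historical grade distribution. The school, pupil names and grade shares below are invented for illustration.

```python
# Toy illustration (NOT Ofqual's actual model): assigning grades by mapping
# each pupil's rank onto their school's historical grade distribution.
# All data below is invented for illustration.

def grades_from_history(ranked_pupils, historical_distribution):
    """Assign grades by walking the school's historical grade distribution.

    ranked_pupils: pupils ordered best-first by their teachers.
    historical_distribution: (grade, share) pairs, best grade first,
    e.g. the proportion of each grade the school achieved in past years.
    """
    results = {}
    position = 0
    total = len(ranked_pupils)
    for grade, share in historical_distribution:
        # Number of pupils this grade is 'allowed', based on past results.
        quota = round(share * total)
        for pupil in ranked_pupils[position:position + quota]:
            results[pupil] = grade
        position += quota
    # Any pupils left over fall into the lowest historical grade.
    for pupil in ranked_pupils[position:]:
        results[pupil] = historical_distribution[-1][0]
    return results

# A school whose past cohorts never achieved an A*:
history = [("A", 0.10), ("B", 0.30), ("C", 0.40), ("D", 0.20)]
pupils = ["Amara", "Femi", "Leah", "Tobi", "Zainab",
          "Kwame", "Nia", "Sol", "Ifeoma", "Dev"]  # best-first ranking

print(grades_from_history(pupils, history))
# Even the top-ranked pupil is capped at the school's historical ceiling (an A
# here): an exceptional student at a historically low-attaining school cannot
# receive an A*, while the same student at another school could.
```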

In a similar vein, when assessing the equality impact of its approach to awarding vocational qualifications, Ofqual recognised that patterns of attainment in the recent past, including any inequalities, would be replicated (OSR Review, page 88).

It is telling which forms of unfairness Ofqual and government ministers were determined to address and which (more systemic) manifestations of unfairness they were willing to overlook.

Obviously this had implications for many students from marginalised backgrounds who were hoping to go to university, but it doesn’t end there. The pandemic is having an unequal impact on schoolchildren. Many children from families with low incomes have lacked access to technology that would have made remote learning a viable option. A significant minority have struggled to access adequate, nutritious food. Any decision-making that fails to take account of these structural disadvantages is not fair, and any talk of equality is a smokescreen. In a society as deeply unequal as ours, we need frameworks that consider equity and not just ‘equality’. And this understanding must inform our approach to designing and selecting algorithms. Otherwise, we risk reinforcing existing forms of oppression.

While Ofqual and the Education Secretary, Gavin Williamson, eventually backtracked on the use of algorithmically generated grades, for many students the damage had already been done. Many lost university places and scholarships that were conditional upon their official grades before the policy reversal was implemented. A generation of students and their parents/guardians have now experienced algorithmic decision-making first hand and have shown policy makers that decisions about individuals based on aggregate historical data are not always appropriate.

Preventing algorithmic harm

Algorithmic injustice is defined as the dispossession caused by the automation of existing or new forms of discrimination or harm towards an individual or a group.

Algorithmic bias sits within algorithmic injustice. Bias can enter anywhere within the algorithm’s lifecycle, as explained by AI researcher Deborah Raji. Everything, from the underlying assumptions of the modeller to the ecological cost of large-scale models, needs to be critically reviewed with a view to preventing harm. The causes of algorithmic harms can’t be reduced to just the interpersonal (e.g. the biases of the designer); instead, a socio-technical lens must be adopted.

Image from Deborah Raji’s tweet thread in which she explains the many sources of algorithmic bias. Image Source: Suresh and Guttag (2021)

What steps could we take?

So far we have considered some of the forms that algorithmic injustice can take in real life. The question we now have to ask ourselves is: what can we do about it? Below I explore some of the ways ‘we’ can do this. This is a broad and diverse ‘we’. Our communities of resistance must be diverse and operate on many fronts in order to tackle this multifaceted problem.

Algorithmic Auditing: Spotting the problem

Illustration of a monochrome magnifying glass surrounded by spokes. At the centre is an eye.
Got Idea by Adlena Zhuvich // Edits by Edafe Onerhime

Algorithmic auditing is the process of reviewing an algorithm’s design, its code and its output in order to assess its impact in as holistic a way as possible. Effective audits adopt a number of techniques in order to properly assess an algorithm’s potential effects, for example, interviews and workshops with employees (The Markup). Algorithmic auditing has enabled researchers to challenge organisations’ decisions. For example, it was this type of auditing, conducted by Joy Buolamwini et al., that flagged the problems inherent in much facial recognition software with regard to darker-skinned women (Gender Shades project). This led to improvements by some organisations.
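To give a concrete flavour of the ‘output’ part of an audit, the sketch below compares a model’s error rate across demographic groups, in the spirit of the intersectional evaluation used in Gender Shades. The records, group labels and numbers are invented for illustration; a real audit would also examine the system’s design, training data and deployment context.

```python
# Minimal sketch of an output audit: compare a model's error rate across
# demographic groups. The records below are invented for illustration.
from collections import defaultdict

# Each record: (group label, true label, model's prediction)
records = [
    ("darker-skinned women", 1, 0), ("darker-skinned women", 1, 1),
    ("darker-skinned women", 0, 1), ("darker-skinned women", 1, 0),
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    if truth != prediction:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")

# A large gap between groups (here 75% vs 0%) is the kind of disparity an
# audit surfaces; the audit itself doesn't fix it, it makes it visible.
```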

Despite the obvious benefits of algorithmic auditing, it hasn’t been smooth sailing. Some corporations refuse to cooperate with researchers, forcing them to adopt more labour-intensive approaches. Other organisations seek to force researchers to keep their findings private. For example, Ofqual asked Royal Statistical Society fellows to sign a non-disclosure agreement (which the RSS refused to do) in order to be granted access to its now infamous A-level grading algorithm. Furthermore, auditing can be used as a public relations cover by companies without any serious efforts being made to act on the resulting recommendations (HireVue’s audit of its hiring algorithm was seen as a stunt).

Algorithmic auditing isn’t a silver bullet, especially if it’s left to companies’ discretion. Experience has shown that in these circumstances organisations will act to protect their profit margins. This is why it’s so important for us to view calls for de-regulation with healthy scepticism. Rather than stifling innovation and healthy competition, effective regulation can be a force for protecting consumers and the social fabric of our society.

GDPR: Rights of the data subject

Illustration shows a flat locked padlock icon in monochrome. Below is the text GDPR.
GDPR by Laura Reen // Edits by Edafe Onerhime

Enacting our rights under the General Data Protection Regulation (GDPR) can also be a safeguarding mechanism to prevent, or seek redress in the face of, algorithmic harms.

The non-profit organisation Worker Info Exchange, which represented Uber and Ola drivers in their respective legal cases, was able to do so thanks to the provisions of Article 22 of the GDPR. This portion of the legislation relates to people’s rights with regard to decisions made solely by machines, i.e. automated decision-making. In this particular case, Uber and Ola drivers were challenging cases of unfair automated decisions that impacted their pay and performance ratings and, in some instances, led to job dismissals (Worker Info Exchange, 2021).

The Dutch and UK courts ordered transparency for algorithms used in performance management and required Uber to reinstate the workers who were unfairly dismissed by algorithmic means. This case demonstrates how GDPR can help to fight against some manifestations of algorithmic injustice. This is why it’s important for us to resist recent proposals (by the Taskforce on Innovation, Growth and Regulatory Reform led by Sir Iain Duncan Smith) to do away with GDPR post-Brexit.

However, GDPR does have its weaknesses. One of the cases brought against Uber was rejected because the courts found that Uber’s decision to dismiss some of its drivers was not ‘solely automated’, a key condition of Article 22. Furthermore, despite the rights granted under Article 15 (a right to access data), Uber was able to resist requests for all the data it held about its drivers because that data was linked to data about others, i.e. passengers. This highlights another weakness of the current iteration of GDPR: its focus on the individual rather than on a group or societal view of data. Overall, law is fluid and needs interpretation; the courts’ decision on transparency isn’t enough in itself to remove forms of oppression, it simply helps to reveal them.

Some legal scholars have different views as to the most appropriate legal frameworks for addressing algorithmic harm. For example, Monique Mann and Tobias Matzner argue that anti-discrimination law could play a better role against algorithmic profiling. The implication is that current legislation needs to go further to protect marginalised groups and wider society from algorithmic harm. It is important for marginalised communities to organise and begin to demand these changes if we want better futures that reduce the incidence and impact of algorithmic injustices.

Data and power

Generative uses of technology and data

So how can we build equity in the design of data-driven technologies and decision making?

One of the overarching approaches or frameworks that might help us think about how we achieve this objective is ‘generative justice’:

Illustration of a dark skinned Black family sitting on a sofa: a mother, father and two children. A plant is to their left. They are facing forward. The father is shown holding a remote control, with the daughter next to him, followed by the mother. The son is on her lap. They are shown in front of a web of lines on a blue-gray background.
Unwitting Data Workers // BlackIllustrations.com — Life is Good - Illustration Pack // Time Lapse — Pixabay

“The universal right to generate unalienated value and directly participate in its benefits; the rights of value generators to create their own conditions of production; and the rights of communities of value generation to nurture self-sustaining paths for its circulation.” (Eglash, 2016)

In the case of data and data-centric technologies this means that data about us should be for us. We are data workers every time we engage with digital platforms or fill out administrative surveys. We share our data knowingly (or unknowingly) and our data is transformed into value for private and public sector organisations. Value generation does not begin and end with a data scientist; it begins with the people the data is about.

Data stewardship

Diagram explaining how a data trust works.
Image from Exploring legal mechanisms for data stewardship — a joint publication by the Ada Lovelace Institute and the AI Council.

A data trust “provides independent, fiduciary stewardship of data”. Trustees look after how data is used. Inspired by the common law practice of a land trust, the idea of a data trust is to pool the rights of consenting individuals (data subjects) so that a data trustee can advocate on their behalf. — Ada Lovelace Institute

Both auditing and legislation empower by preventing harm or facilitating redress after harm has occurred. This is important but insufficient for full flourishing. We also need approaches that enable a more equitable distribution of power in the first place. Let’s consider those Uber drivers again. In response to one of the Dutch courts’ rulings, Worker Info Exchange pointed out that placing the “burden of proof on workers to show they have been subject to automated decision making before they can demand transparency of such decision making” is unfair (ref). This position sustains the power of the institutions rather than the workers; ideally the burden of proof should be on the companies.

Would having more say over how their data is used by their employer introduce greater accountability on the part of the latter? I think there is a strong argument for believing that this would be the case. Data trusts are one way that these workers can positively increase their power by asserting greater control over the use of data about them.

Worker Info Exchange is establishing a data trust to provide a collective means for gig workers to exercise their rights under GDPR. Taking companies to court can be extremely expensive. Acting collectively is more effective, and systems such as data trusts help to build the type of solidarity that makes this kind of cooperation more likely. Similarly, the organisation Driver’s Seat has created a data co-operative in which drivers upload their data, which is used to generate insights about their working patterns that are then made available to them. It’s also sold to interested parties, with the profit shared amongst the co-operative members.

You may be wondering why I have highlighted a case study that focuses on workers’ rights rather than on racism. You can’t separate class from race. For one thing, people of colour are over-represented in Uber’s workforce, so there is a racial component to this case, as its material impact falls disproportionately on some groups. Also, this particular case study provides useful insights and lessons about how data-centric technologies can be deployed in ways that further marginalise people who are already disempowered.

Creating counterdata

While data stewardship models facilitate better control and transparency with regards to access to important data sets, they don’t necessarily address the process of data creation or the quality of the resulting data. How and why data is collected matters too. To help make this point, let us consider the following question: is it possible to build an “anti-racist” ‘gangs matrix’?

In the context of the UK, the answer is no. The very conceptualisation of ‘gangs’ is highly racialised. In their State of the Nation report, the Runnymede Trust discuss the racialisation and criminalisation of ethnic minorities (pg 66) thus:

“The concept of the gang and its relationship to ethnic minority groups, particularly Blackness, has an extensive history in Britain (Gilroy, 1987a; Keith, 1993). Williams and Clarke (2018) have argued that the more recent focus by policymakers and politicians on the gang has resurfaced after the social unrest in England in the summer of 2011. Labelling the social unrest as a product of an endemic gang problem in the country led to a flurry of anxieties where ‘the media, politicians, think tanks and academics were quick to evoke the already established view of the gang problem’ (p.7)” (Runnymede, 2020)

Given this social context, it’s no surprise then that the Met Police’s Gangs Matrix unjustly racially profiled a thousand Black men, whose names had to be removed from the database in early 2021. This is why it’s so important for us to understand how power manifests in the social constructions of the databases that fuel the modern ecosystem of machine learning systems. Technology won’t build “safer” streets if it simply further marginalises certain communities.

Collecting counterdata provides a means of scrutinising the ways in which power operates when it comes to data creation. It does this by presenting an alternative view of the same set of interactions. This is what Account, a Hackney-based, youth-led police monitoring group, did with their report, ‘Policing in Hackney: Challenges from Youth in 2020’.

Creating counterdata helps us to point out weaknesses in current processes of data collection and curation (as well as in the legislation that underpins them). It isn’t an end in and of itself, but it can provide communities with data they desperately need to begin to address local challenges in a more informed way. It can also serve as a good basis for better conversations and a way of reimagining better futures.

Conclusion

There are many ways to tackle algorithmic harms, from insisting on greater accountability from the corporations that release these products and services, to data unions and the creation of counterdata. Each approach has strengths, but no single approach can tackle all the facets of algorithmic injustice. Only by fighting this problem on many fronts, i.e. by combining approaches, will we be able to actualise generative justice.

Supporting one another in “communities of resistance” can help us to learn from one another and work more effectively. I want to see a world in which data about us is truly for us.

--

Data, Tech & Black Communities

DTBC is a group of diverse Black/Black heritage people working together to ensure data & data-driven technologies enhance rather than curtail Black lives.