How AI is Wrapped Up in a Web of Injustice

Despite the headlines, artificial intelligence is no silver bullet to society’s ills. In many ways, it’s actually part of the problem.

Digital Freedom Fund · Oct 23, 2020

Artwork by Cynthia Alonso

Artificial intelligence is no futuristic, dystopian notion. When we hear “AI”, we often think of sci-fi: Arnold Schwarzenegger’s fierce Terminator, the machine overlords of The Matrix, or the pitiful boy robot from Spielberg’s frankly titled A.I.

But AI is not a distant reality, and in most cases, it looks nothing like a robot. AI is very much with us in the here and now, and it’s already determining our lives in ways we are not even aware of.

AI has a host of different names and applications, including machine learning, automated decision-making, algorithms, computer vision, and facial recognition. And it’s already calling the shots across many spheres of everyday life, from who gets social welfare benefits to which areas get policed most.

AI is deployed by states, public authorities, and private companies to profile us and make decisions about our credit scores, whether we should get discounts on bills, or whether we’re likely to commit fraud. Run through an AI, individuals are boiled down to probabilities derived from systems that have absorbed, and in many cases amplified, the biases of their human creators.

There exists a beautiful idea about technology that is exemplified by early, utopian visions of the internet: of a free and open network that would allow unbridled access to knowledge for all, that would flatten the world and stamp out inequalities in a way never before seen.

Similarly, many look to AI as a saviour that could offer impartial, unbiased judgements about the world that we flawed humans, riddled with prejudice and assumptions, never could.

But this is just a fantasy. It’s not possible to simply untie current AI systems from society’s intricate web of inequality and injustice, and AI has ended up being a major part of the problem, rather than the solution.

The entire infrastructure behind AI is owned by few and inaccessible to many. Dr Seda Gürses, an Associate Professor in the Faculty of Technology, Policy and Management at TU Delft, explains how heavily developments in AI depend on tech giants, which hold monopolies on the machinery needed to make breakthroughs and effect change.

“If you’re using the same infrastructure set up by Amazon and Google … to do your research, at some point it’s unclear what in the world you’re doing,” says Dr Gürses.

“Are you just producing ways for those companies to improve their product, or are you really doing something that would serve the public? And what kind of social norms and assumptions that have been baked into those infrastructures do you then confirm and reproduce in the world?”

The environmental impact of this is far from neutral, too — after all, the materials needed to run such powerful machines must come from somewhere. Many argue that the extraction of the raw materials needed to fuel these technological behemoths depends heavily on the oppression and exploitation of poorer countries and the people who live there.

Options independent of these all-powerful global actors are, unfortunately, in short supply.

“Even the US government is not building their own data centre, but they’re buying compute from Google and Microsoft and Amazon,” Dr Gürses explains.

Then there’s the problem of accountability thrown up by the complexity of these infrastructural systems. Just as it is often impossible to hold individuals accountable for human rights abuses and gross injustices in long, transnational supply chains, the same is true with AI.

“The infrastructures are set up in a way where nobody feels responsible for the parts. But when they come together, they can really harm people,” says Dr Gürses.

“When an individual is making a decision in an institution, you can look them in the face and say this is unfair. You can go to their institution and complain,” she explains. But, faced with huge computational power structures, things aren’t so simple. When things inevitably go wrong, the buck arguably doesn’t stop with whoever deployed the technology.

“Who are they going to hold to account?” asks Dr Gürses. “Is it the company in Japan? Is it the person who first downloaded the dataset in an unconsented way? Is it the one who never took any metric except accuracy into mind, and therefore fairness was out the window?”

Furthermore, when it comes to machine learning, it is often very difficult to know how or why a decision was made: such is the nature of a system that learns by itself. This is a serious hindrance to transparency, making it ever trickier to challenge decisions we disagree with, or to hold anyone to account for a mistake.

Added to the list of injustices is the fact that the long and onerous task of training many AI algorithms often falls to low-paid workers. In many cases, these workers must also expose themselves to disturbing or violent content in order to train content moderation or hate speech detection algorithms.

Under exploitative working conditions, these people often have mere seconds to identify potentially problematic content. Inevitably, they end up making errors or encoding their own biases.

Removing biases such as gender or race isn’t just a matter of hitting delete or feeding machines cleaner data. Just as it is impossible to pick out the inequalities tightly woven into the fabric of our society, it is impossible to strip them from the data that algorithms learn from. Even when “race” or “gender” is never explicitly mentioned, other variables can act as proxies for these social categories, and the bias ends up being replicated just the same.

Take, for example, the Amazon recruitment tool that was scrapped for being gender-biased. The tool was not taught explicitly to discriminate on the basis of gender, but simply learned by itself that the word “women” was associated with lower-quality CVs.
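
To see how a proxy can smuggle bias back in, consider a minimal, hypothetical sketch in Python (using NumPy and scikit-learn). The data and the “women’s college” feature are invented for illustration and imply nothing about how Amazon’s actual tool worked: the point is simply that a model never shown a gender column can still learn gendered bias from a correlated feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants; gender itself is never shown to the model.
gender = rng.integers(0, 2, n)                          # 1 = woman (held out)
womens_college = (gender == 1) & (rng.random(n) < 0.3)  # correlated proxy
experience = rng.normal(5, 2, n)

# Historical hiring labels encode human bias: women were hired
# less often at the same level of experience.
hired = experience + rng.normal(0, 1, n) - 1.5 * gender > 4.5

# Train only on the "neutral" features -- no gender column anywhere.
X = np.column_stack([experience, womens_college.astype(float)])
model = LogisticRegression().fit(X, hired)

print(f"weight on experience:      {model.coef_[0][0]:+.2f}")
print(f"weight on women's college: {model.coef_[0][1]:+.2f}")
# The women's-college weight comes out negative: the proxy has
# smuggled the gender bias back into the "gender-blind" model.
```

In this toy setting, the model penalises the proxy feature even though gender was never an input, because the bias in the historical labels re-enters through any variable correlated with it.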

The injustices of AI are not merely what’s written in the code. The issue of power is omnipresent: the current infrastructure gives big players enormous leverage over the smaller ones, who have little power to challenge what is being done. That is the reality of AI in a world where big tech holds enormous infrastructural power, and states and other actors are increasingly turning to technology to exert greater control over populations.

“Computer science can come up with abstract solutions to problems by abstracting away the complexity of the world,” says Dr Gürses. “That is inherent to computer science as a field or as a method.”

Recognition of computers’ social impact is often missing from a process which, by its nature, is intent on solving a singular problem. But as AI proliferates through our society, this is a gap that simply must be bridged.

“Hopefully they [AI infrastructures] will also improve people’s lives,” says Dr Gürses. “I don’t want to be dismissive of its power to reconfigure and maybe even empower us sometimes.”

“But there’s a bigger picture of things that need to change,” she says. “…for technology to be used in a way that we can say: oh, that’s not a very unjust use, that’s a caring use, and a use that serves communities and societies to move forward in life.”

The Digital Freedom Fund supports partners in Europe to advance digital rights through strategic litigation. https://digitalfreedomfund.org/