Published in Fearless Futures

Why we need a movement for justice in AI, not ethics in AI

As the AI revolution gathers pace and influence, there is an increasing focus on “ethics in AI” as sociologists, ethicists and technologists battle to inform its progress.

As I have read article after article on the subject of ethics in AI, I have been struck by the alarming absence of what harm actually means in the context of AI: oppression.

As an anti-oppression education organisation, a notion that consistently emerges in our work at Fearless Futures is that how we frame a problem informs how we come to solve it. While ethics is a wide and diverse field, our suspicion has been that unless we have a language that speaks to the root issues at stake when it comes to AI we will get nowhere.

Does “ethics” in its mainstream sense (doing good?) cover what is required of technologists, policy makers, legislators and funders to solve the problems described here? Not really.

In our view, the root issue is this: structural oppression exists across our communities and societies, and without active transformation of power relations, AI will perpetuate, reproduce and amplify this harm. If our conception of the problem isn’t framed in this way, our efforts will fail.

If there is a disease of the body and our discourse is centred on the person’s chipped nails, then there may well be recommendations for a manicure, but we probably won’t heal the body.

If we are prepared to dig in and acknowledge the disease of the body, then we will do anti-oppression work. In my view then, the quest is for an AI of justice not an ethical AI.

I am not an ethicist, so I decided to reach out to Dr. Arianne Shahvisi at the University of Brighton to discuss these very questions with her. Her responses were so erudite and powerful that I thought it would be simplest to share an excerpt of our exchange below.

ME: I am trying to get my head around why people have focused on a narrative of “ethics” in AI rather than anti-oppression or justice in AI. What’s going on here?

DR. SHAHVISI: Ethics deals with right and wrong, fair and unfair, just and unjust, but it is traditionally employed in ways that manage to avoid discussion of oppression. I know that will sound ridiculous and implausible to you, but unfortunately that’s how it is. I suspect it is a relic of those who have been most influential within the discipline: wealthy white men, usually from a long time ago (the proverbial “pale, male, and stale” writers who are the bulk of philosophy reading lists) who really did/do feel like fully individual efficacious agents in the world, and do not think beyond that positionality. So, when people use “ethics” in an applied sense (“medical ethics”, “business ethics”) they typically refer to the rightness/wrongness of an interaction between two individuals i.e. a doctor and a patient, a researcher and a participant, a service-provider and a client. Ethics is very often highly individualised and atomistic, very libertarian, and is applied without consideration of power or structural factors. So when someone asks you to consider AI ethics, they will typically be considering individual misuses of the technology, e.g. weaponisation, data protection issues, or an individual robot being treated badly.

ME: Hmm, that appears to be what I’ve been seeing broadly speaking. There must be ethicists that do focus on anti-oppression though, right?

DR. SHAHVISI: Yes, as with most generalisations, there are exceptions. Not all ethics is conducted in this ridiculous way, and there is scope for it to include, and even centre, structural considerations. That’s what I try to do in my work, and that’s what others working in the philosophy of race and gender attempt to do too (in case you want to quickly scan an example, here is an ethics paper of mine that just came out which openly resists this libertarian streak in reproductive ethics, in favour of structural concerns). In fact, I think it’s fair to say that since the work of philosophers like Arendt and Foucault, and the development of feminist theory, many philosophers do consider power and oppression in their academic work, but those subtleties are yet to be transmitted to those people within organisations and sectors who tend to respond only to PR pressure, and often think of ethics as nothing other than a practical box-ticking exercise.

ME: My fundamental instinct is that one can hold an ethical position AND that position can still fail to deliver an outcome of justice. If that’s the case, I feel that AI ethics simply isn’t sufficient for the scale and complexity of informing our work in AI (presuming our shared goal is to end structural harm — which I have to presume, on some level, is not everyone’s end game). What are your thoughts?

DR. SHAHVISI: Can you develop a position that is ethically sound, according to a particular ethical theory, yet oppressive? Yes, sadly you can. For example: utilitarianism is one school of thought within ethics which tells us that the right thing to do in a given situation is to maximise wellbeing for as many people as possible. Suppose you had a society in which a minority group had been treated very badly, and were now violently resisting, and seemed intent on harming majority groups. On certain readings, utilitarianism would suggest that it was ethically acceptable to kill all of them in order to protect the majority and keep as many people happy as possible. So that would be an ethically acceptable position, but a very oppressive one.

ME: So, what role can ethics play if any at all?!

DR. SHAHVISI: I might have painted a rather disparaging picture of my field, but it’s important to remember that ethics is being recuperated, especially as philosophy slowly becomes more diverse. Ethics can and should include considerations of aggregate human units, rather than just individuals. Injustices can and do occur between individuals, but they occur with much greater frequency and intensity between different groups of people (and sometimes those interactions are mediated by an individual encounter, but also often not), in accordance with robust, predictable trends, relating to distributions of social power. You are therefore perfectly justified in arguing in favour of a broader reading of ethics than the traditional atomistic one, in order to better capture the realities of people’s experiences.

End of exchange!

It’s worth noting that while there is much superficial writing on ethics in the mainstream technology press, there are some brilliant voices leading the way too, Kate Crawford among them. You may have noticed that Dr. Shahvisi and I share a central concept in our understanding of inequality: ‘power’ and its asymmetries. Kate Crawford argues this too. I leave you with a quote from her for good measure:

“Often when we talk about ethics, we forget to talk about power. People will often have the best of intentions. But we’re seeing a lack of thinking about how real power asymmetries are affecting different communities.”

So, let’s move from AI ethics to a movement and action for justice in AI. Then we might finally get somewhere.
