Some Thoughts on AI and Human Rights

Elizabeth Eagen
Published in The Tilt · May 2, 2018

Image by Gerd Leonhard

(article has been edited and updated August 20, 2020)

Last week I attended a workshop on AI and Human Rights at the Data & Society Research Institute. We spent a lot of time talking about harms, the documenting of which is now an Olympic sport (for a good primer and links to top thinkers, try “AI is a mirror of IRL inequalities”). Though she wasn’t there, Virginia Eubanks’ devastating critique pretty much sums up the dilemma of AI for public policy for me:

The new tools of poverty management hide economic inequality from the professional middle-class public and give the nation the ethical distance it needs to make inhuman choices about who gets food and who starves, who has housing and who remains homeless, whose family stays together and whose is broken up by the state. This is part of a long American tradition. We manage the poor so that we do not have to eradicate poverty.

Further on in the same article, she puts forward a call to action:

Automated tools for classifying the poor, left on their own, will produce towering inequalities unless we make an explicit commitment to forge another path. And yet we act as if justice will take care of itself.

The harms continue to be documented. AI isn’t going to go away, even as efforts to make it ethical or to regulate it continue. I’d like all those things to happen. But in this piece, I want to represent the practitioner side of my brain: I want to talk about forging another path. What would it look like if the brains using AI to help move human rights forward were trained on something AI is actually good at?

Fundamentally, we’d need the designers of the AI to look deep into bias, and deep into their decisions on how to make a thing and make it work. Eubanks once again: “When automated decision-making tools are not built to explicitly dismantle structural inequalities, their increased speed and vast scale intensify them dramatically.”

Here are some questions I think are important to ask when automating information and insights, if the goal is to contribute meaningfully to a messy, complicated public policy question without exacerbating inequality:

  1. What pieces of the bargains, values, and tradeoffs that formerly held a policy in place does my AI upset or overturn? A good example here is the Theory of Optimal Law Enforcement, now overturned by the “perfect enforcement” capacities of AI.
  2. Does my solution require that the most vulnerable shoulder a broad, complicated load to solve a problem they did not create?
  3. Am I aiming at a problem that has data already? Is that data something that AI can help by organizing? Or does my solution require the generation of tons of data to accomplish, thus introducing further bias for little gain?
  4. Am I violating the principle of nothing-about-us-without-us? In other words, have I checked in with the communities I plan to serve, and am I supporting their leadership?

Instead of asking what human rights are crucial to AI, we asked: what AI systems are critical for human rights?

I wonder if there is some way to generate energy around problems that AI is well-suited to contribute to. I think these might be problems with existing data sets that need organization, automation, exploration, and insight generation. At the workshop we discussed how AI systems need to allow for meaningful human intervention to avoid a bias toward trusting the machine. But how could AI be used to make the human work more meaningful? The AI task could be to take on problems where there’s a massive data set that needs to be made accessible, a task that is now often done by hand, or by a combination of machine learning and editorial and journalistic mastery. Work like Julia Angwin’s research requires it. Human rights organizations’ research tools like Who Was In Command could benefit from it.

These might not be traditional civil and political liberties, but they definitely relate to those concerns. Big, hard problems like climate change and food distribution are areas where AI isn’t handed control in an inexplicable way, but where it can support the processing and use of data, where its known biases can be accounted for, and where we might see a new way for AI to resolve asymmetries without creating more of them.
