Critical Perspectives on Artificial Intelligence and Human Rights

Melanie Penagos
Data & Society: Points
Jun 19, 2018
Image via Justin Lincoln

This is the fifth blog post in a series on Artificial Intelligence and Human Rights.

Following Data & Society’s AI & Human Rights Workshop in April, several participants continued to reflect on the convening and comment on the key issues that were discussed. The following is a summary of articles written by workshop attendees Bendert Zevenbergen, Elizabeth Eagen, and Aubra Anthony.

I. The value of ethics and human rights in AI debates

In Marrying Ethics and Human Rights for AI Scrutiny, Bendert Zevenbergen (Princeton University) responds to a post by Christiaan van Veen and Corinne Cath, in which they argue for the value of applying a human rights framework to the development and deployment of AI. Both articles stemmed from workshop debates on the relevance of an ethical versus a human rights perspective in AI design and governance. Conversations centered on the complementary nature of human rights and ethics, but also made their differences clear. For instance, human rights are grounded in international law and provide access to remedy, whereas ethical frameworks can inform laws but lack enforcement mechanisms of their own.

Zevenbergen states, “While indeed the values used in ethical reasoning and scrutiny may not have as precise meaning as provided in law, the multitude and flexibility of concepts allows the designers as well as the persons scrutinizing decisions to have a wider debate about the social impacts of technological choices than mere legal compliance checks as encouraged by traditional legal frameworks.” Zevenbergen agrees with van Veen and Cath’s arguments and points to the importance of internal ethical review procedures to promote technological development, describing how “ethics and human rights approaches can be mutually beneficial.”

II. Safeguarding against unintended harms of AI

Elizabeth Eagen (Open Society Foundations) reflects on the unintended harms of human bias in automated systems in her post, Some Thoughts on AI and Human Rights. Eagen notes, “At the conference we discussed how AI systems need to consider meaningful human intervention to avoid bias toward trusting the machine. But how could AI be used to make the human work more meaningful?”

Eagen poses a series of questions to ask of automated systems to guard against further entrenching structural inequality. She describes the need not only to document the actual and potential harms of AI, but also to actively forge another path and build automated decision-making tools for the problems most in need of them: “Big, hard problems like climate change, food distribution — areas where AI isn’t given over to control in an inexplicable way, but where it can support the processing and use of data, where its known biases can be accounted for, and where we might see a new way that AI can solve asymmetries without developing more of them.”

III. The limitations of AI technologies in economic development

Aubra Anthony (USAID) focuses on the use of AI technologies within development and humanitarian contexts in her article, Navigating the risks of artificial intelligence and machine learning in low-income countries. Anthony notes that while these technologies can offer many benefits to poor populations, they can also amplify discrimination and exclude minorities. She writes, “AI and ML [machine learning] have huge promise, but they also have limitations. By nature, they learn from and mimic the status quo — whether or not that status quo is fair or just.”

She advises that “we should approach these tools with caution. Otherwise, we risk these technologies harming local communities, instead of being engines of progress.” Based on ongoing research and interviews with aid groups and technology companies, Anthony proposes five recommendations for deploying AI and machine learning tools in international aid. For example, she highlights the need for inclusion and urges developers to properly vet algorithmic programs by incorporating “human-in-the-loop” feedback processes to reduce potential harms. Through her recommendations, Anthony underscores the need to be deliberate about how these tools are built “so that fairness, transparency and a recognition of our own ignorance are part of our process from day one.”

For links to additional posts in the blog series, see here.

Melanie Penagos is a research analyst at Data & Society and co-organized the AI & Human Rights Workshop.
