Artificial Intelligence & Human Rights — April 2019

Imane Bello
May 20, 2019

Below is a list I put together of interesting articles on the intersections between Artificial Intelligence tools (Computer Vision, Machine Learning, Natural Language Processing, etc.) and International Human Rights.

Articles were mostly published in April 2019. Don’t hesitate to share and/or get in touch with me on Twitter @ImaneBello.

Icon made by Geotatah from https://www.flaticon.com/authors/geotatah

  • Discriminating Systems: Gender, Race, and Power in AI

AI Now Institute, Discriminating Systems: Gender, Race, and Power in AI, April 2019

Following a year-long project, the AI Now Institute published a study on the interconnection of gender, race, and power in the field of AI. The study highlights a diversity crisis in the AI industry and offers concrete recommendations on how to address it. On Medium, the AI Now Institute further published a ‘reading list’ with the most relevant literature on the topic, which will be updated as relevant work appears.

A topical example of AI systems discriminating on the basis of gender and race is Facebook’s ad-distribution software, which allows advertisers to target categories of people and thereby exclude others from seeing, for example, housing ads. The Department of Housing and Urban Development (HUD) has filed charges against Facebook over this practice. In addition, a team of researchers published a report on Facebook’s online ad delivery and the biases it produces.

  • Criminal Justice

Partnership on AI, Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, April 2019

In an attempt to spark a debate, the Partnership on AI (PAI) published a report assessing the use of risk assessment tools in the U.S. criminal justice system. The report discusses the limitations of such tools in the United States, most notably concerns about validity, accuracy, and bias in the tools themselves; issues arising from the interaction between humans and these tools; and issues surrounding governance, transparency, and accountability. PAI further outlines ten requirements for jurisdictions to consider before using risk assessment tools in the criminal justice system.

  • Governance and Regulation of AI Systems

European Commission: Ethics Guidelines for Trustworthy Artificial Intelligence, April 2019

Following a first draft released in December 2018, the High-Level Expert Group on AI, convened by the European Commission, published its Ethics Guidelines for Trustworthy Artificial Intelligence. The guidelines set out seven concrete requirements that AI systems deployed in Europe should meet in order to be considered trustworthy: human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

Mark Zuckerberg calls for a more active role of governments and regulators, April 2019

In an op-ed published in The Washington Post and The Independent, Mark Zuckerberg calls on governments and regulators to take a more active role in countering threats and protecting freedom of expression. Zuckerberg notably mentions data portability, election integrity, harmful content, and privacy as areas that require new regulation. Andrew Liptak from The Verge argues that this signals a shift in Zuckerberg’s position and could lead to a more privacy-oriented Facebook.

  • Geopolitics

Ariel Conn, FLI Podcast: Why Ban Lethal Autonomous Weapons? Future of Life Institute, April 2019

Ariel Conn from the Future of Life Institute hosted a podcast with four key experts from different fields (physics, medicine, human rights) to discuss why lethal autonomous weapon systems should be banned. The podcast also refers to further literature on the topic.

Russia’s security chief calls for regulating use of new technologies in military sphere, April 24, 2019

According to the Russian News Agency TASS, Russian Security Council Secretary Nikolai Patrushev called for international regulation of new technologies in the military domain. He made these remarks at the Moscow Conference on International Security.

Paul Mozur, Jonah M. Kessel and Melissa Chan, Made in China, Exported to the World: The Surveillance State, April 24, 2019

In an article published by The New York Times, Paul Mozur, Jonah M. Kessel and Melissa Chan discuss Ecuador’s surveillance system, known as ECU-911, its consequences, and its links to its technological origin: China. The system was first installed in Ecuador in 2011 and was, according to the authors, largely built by the state-controlled C.E.I.E.C. and by Huawei.

The footage from the surveillance system goes to the police, but also to the country’s domestic intelligence agency, which allegedly uses it to keep track of political activists and opponents.

While the article focuses on Ecuador’s use of Chinese surveillance technology, it also points out that a total of 18 countries use intelligent monitoring systems originating from China and that 36 countries have received training in related areas.

The article warns of China’s expansion into the ‘surveillance market’, cautioning that such technology could be used to reinforce authoritarian patterns and could lead to a large-scale loss of privacy.

  • International Human Rights

Lehigh University, How artificial intelligence can help in the fight against human trafficking, April 9, 2019

In an effort to combat human trafficking, experts from different fields (AI, policy, law enforcement, as well as survivors) established a group to jointly create AI systems (e.g. natural language processing and text, data, and graph mining). These systems aim to process data more systematically and to identify patterns of behaviour that reveal activities related to human trafficking.
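The article does not detail the group’s methods, but to give a flavour of what graph mining can look like in this context, below is a minimal, hypothetical sketch (in Python, using the networkx library; all ads and phone numbers are invented) that links online ads through shared contact details, a common first step in such analyses:

```python
# Illustrative sketch only: not the group's actual system.
# Idea: ads that share contact details (e.g. a phone number
# extracted beforehand via NLP) are linked in a graph, and
# clusters of connected ads are surfaced for human review.
import networkx as nx

# Hypothetical toy data: (ad identifier, extracted phone number)
ads = [
    ("ad1", "555-0101"),
    ("ad2", "555-0101"),  # shares a phone number with ad1
    ("ad3", "555-0199"),
    ("ad4", "555-0199"),
    ("ad5", "555-0123"),  # not linked to any other ad
]

# Build a bipartite graph connecting ads to phone numbers.
G = nx.Graph()
for ad_id, phone in ads:
    G.add_edge(ad_id, phone)

# Connected components group ads that share contact details;
# larger clusters may warrant closer investigation.
for component in nx.connected_components(G):
    cluster = sorted(n for n in component if n.startswith("ad"))
    if len(cluster) > 1:
        print("Linked ads:", cluster)
```

Real systems are of course far more sophisticated, combining text mining of ad content with graph analysis at scale, but the underlying idea of connecting seemingly unrelated records is the same.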

Stanford University, The Future of Human-Centered AI: Governance Innovation and Protection of Human Rights, April 2019

In mid-April, a conference titled “The Future of Human-Centered AI: Governance Innovation and Protection of Human Rights” was organised at Stanford University. The event aimed to address (potential) harms and biases in AI systems and how International Human Rights can be used as a framework to counter them. The opening remarks by Eileen Donahoe, Executive Director of Stanford’s Global Digital Policy Incubator (GDPi), can be read and watched via the link.

  • Publication of AI research

Partnership on AI, When Is It Appropriate to Publish High-Stakes AI Research?, April 2019

In April, following the debate around OpenAI’s partial release of its language model GPT-2 (read more here), PAI staff members Claire Leibowicz, Steven Adler, and Peter Eckersley published an article on considerations around the disclosure of high-stakes AI research. The article offers several decision factors for weighing the risks of disclosing such research.


Imane Bello

Lecturer on Politics & Ethics of AI @SciencesPo Paris / Legal consultant in IT / Cybercriminality & AI / she/her -> Connect @ImaneBello