Artificial Intelligence & Human Rights — Feb. 2019

Imane Bello
Mar 10, 2019


Below is a list I put together of interesting articles on the intersections between Artificial Intelligence tools (Computer Vision, Machine Learning, Natural Language Processing, etc.) and International Human Rights.

Articles were mostly published in February 2019. Don’t hesitate to share and/or get in touch with me on Twitter @ImaneBello

Icon made by Freepik, https://www.freepik.com/ from https://www.flaticon.com/

Overview of Feb. 2019

  • Predictive Policing

Rashida Richardson, Jason Schultz and Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, Feb. 13, 2019

Rashida Richardson, Jason Schultz and Kate Crawford elaborate on the implications of using “dirty data” in predictive policing systems. “Dirty data” describes data derived from unlawful, racially biased, or otherwise flawed policing practices; when such data feeds into predictive systems, those flaws are reproduced in the predictions. The research examines 13 case studies and calls for public scrutiny of such systems.

Caroline Haskins, Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed, Feb. 14, 2019

Caroline Haskins discusses “PredPol”, a predictive policing software used by several police departments. PredPol is reportedly based on earthquake-aftershock models which, according to experts quoted in the article, oversimplify crime dynamics and thereby introduce a harmful bias that can lead to flawed police practices. For example, the model predicts future crime locations largely from areas with previously high recorded crime rates. The article points to the resulting self-reinforcing feedback loop as one of the harms presented by PredPol.
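
To make the feedback-loop concern concrete, here is a minimal, hypothetical simulation (not PredPol’s actual algorithm; the rates and counts are invented): two districts have the same true crime rate, but the district with more historical records keeps attracting patrols, and only patrolled districts generate new records.

```python
import random

# Hypothetical illustration, NOT PredPol's model: both districts have the
# same true crime rate, but district 0 starts with more recorded incidents
# because it was historically patrolled more heavily.
TRUE_CRIME_RATE = 0.5          # identical in both districts
recorded = [30, 10]            # biased historical incident counts

for day in range(1000):
    # A "hot spot" predictor sends patrols where past records are highest.
    patrolled = 0 if recorded[0] >= recorded[1] else 1
    # Crime occurs in both districts, but is only recorded where police are.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1

print(recorded)  # roughly [530, 10]: the initial skew only ever grows
```

District 1’s crimes are never observed, so the data can never correct itself; this runaway dynamic is the feedback loop the article describes.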

  • Human Trafficking & Genocide

Abby Stylianou, Hong Xuan, Maya Shende, Jonathan Brandt, Richard Souvenir and Robert Pless, Hotels-50k: A Global Hotel Recognition Dataset, Jan. 26, 2019

The authors of “Hotels-50k: A Global Hotel Recognition Dataset” created a dataset of over 1 million images of rooms from 50,000 hotels in an effort to combat human trafficking. AI systems (Computer Vision and Pattern Recognition) trained on such data could support human trafficking investigations, notably by identifying victims’ locations and narrowing down future ones.
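
As a rough illustration of the retrieval idea (a generic sketch, not the authors’ pipeline; the file names are placeholders and a stock pretrained backbone stands in for the paper’s trained model), one could embed a query photo and compare it against embeddings of known hotel images:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Generic pretrained CNN as a stand-in feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier, keep 2048-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(x), dim=1)

# Placeholder gallery of known hotel images and an investigator-supplied photo.
gallery = {"hotel_a": "room_a.jpg", "hotel_b": "room_b.jpg"}
query = embed("query.jpg")
scores = {hotel: float(query @ embed(path).T) for hotel, path in gallery.items()}
print(max(scores, key=scores.get))  # hotel whose room looks most similar
```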

Susan Bell, Spatial scientists use satellite technology to detect and — eventually — prevent genocide, Feb. 4, 2019

The article’s author, Susan Bell, presents Andrew Marx’s work on small satellite technology (‘smallsat’) that aims to detect human rights abuses and violations. Marx manages the Human Security and Geospatial Intelligence Lab at USC Dornsife’s Spatial Sciences Institute (SSI).

Possible uses of the data provided by smallsat technology include verifying accounts of abuses before international courts or building an early warning system. Currently, Marx is monitoring the situation in Myanmar in cooperation with Human Rights Watch and Physicians for Human Rights.

  • Language Models

OpenAI, Better Language Models and Their Implications, Feb. 14, 2019

OpenAI has trained a large-scale unsupervised language model (‘GPT-2’), which notably generates coherent paragraphs of text. The model also excels on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering, and summarization — using no task-specific training data.

Due to concerns about malicious applications of the technology, e.g. the potential to generate deceptive, biased, or abusive language at scale, OpenAI has not released the trained model, publishing instead a much smaller model for researchers to experiment with, along with a technical paper.
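
As an illustration of what experimenting with that smaller model can look like, here is a sketch using the Hugging Face transformers library, which today hosts the released checkpoint under the name “gpt2” (OpenAI’s own release shipped separate TensorFlow code; the prompt below is arbitrary):

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the released small model
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("AI systems affect human rights because", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,              # total length of prompt + continuation
    do_sample=True,             # sample rather than greedy-decode
    top_k=40,                   # restrict sampling to the 40 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```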

OpenAI’s release strategy is also an experiment in itself, raising questions about publication norms and about policy frameworks that governments might set for systems with important social and political impacts.

  • Regulations

Dina Bass, Microsoft backs facial recognition bill as Amazon mulls support, Feb. 7, 2019

State Senator Reuven Carlyle (D) sponsored a consumer-privacy bill (the Washington Privacy Act) that includes regulations for facial-recognition software. Among other things, the bill requires:

  • third-party testing of the software to control for bias and privacy issues
  • plain-English explanations of what the programs do
  • notifications when users are being analyzed by the software
  • meaningful human review of any final decisions
  • a court order for ongoing surveillance by the software

According to the article from February 7, 2019, Microsoft supported the bill while Amazon asked for clarifications and changes, notably concerning the third-party testing requirement.

NB: The Washington Senate approved the consumer-privacy bill on March 6, 2019. It applies to companies based in the state as well as companies that conduct business there, companies that process the data of 100,000 or more consumers, or companies that receive 50% of their revenue from selling personal data. The House of Representatives will now consider the bill.

Afef Abrougui, EU proposal pushes tech companies to tackle ‘terrorist content’ with AI, despite implications for war crimes evidence, Feb. 27, 2019

In this article, Afef Abrougui elaborates on a draft regulation by the European Commission that could make the use of AI solutions mandatory for companies in an effort to combat terrorist content. The article warns of the implications that such a regulation could have for evidence of war crimes and human rights abuses posted online.

Abrougui refers to past examples in which internet platforms used AI systems to remove violent and terrorist content but, in doing so, also censored content documenting human rights abuses.

  • World Food Programme Deal with data mining firm Palantir

Ben Parker, New UN deal with data mining firm Palantir raises protection concerns, Feb. 5, 2019

In this article, Ben Parker discusses the recent USD 45 million agreement between Palantir and the World Food Programme (WFP), as well as the concerns it has raised. The 5-year agreement aims to cut costs and increase the efficiency of WFP’s programmes by using Palantir’s technology to analyze WFP’s data and detect misuse and/or mismanagement.

Critics raise questions about unintended risks, privacy issues, and limited data-protection frameworks within the UN and the humanitarian community.

Joanna van der Merwe, Josje Spierings, Ziad Achkar, with contributions from Melissa Amorós Lark, What would it take for a company like Palantir to become an acceptable ally?, Feb. 2019

The authors use the recent deal between Palantir and WFP to point to a more structural issue: the humanitarian field’s lack of capacity in data management and analytics expertise. While supporting requests for WFP to release further information on the deal with Palantir, they call for a broader discussion about partnerships between the humanitarian and private sectors, specifically regarding data management.
