Below is a list I put together of interesting articles on the intersection of Artificial Intelligence tools (Computer Vision, Machine Learning, Natural Language Processing, etc.) and International Human Rights.
Articles were mostly published in January 2019. Don’t hesitate to share and/or get in touch with me on Twitter @ImaneBello
Overview of Jan. 2019
- Repression & Arms control
Steven Feldstein, How Artificial Intelligence Is Reshaping Repression, Jan. 9, 2019
In this paper, Steven Feldstein elaborates on the use of AI systems by repressive regimes and highlights the particular benefits these systems offer authoritarian governments: they could enable digital repression at a lower cost. The author identifies three repressive scenarios in which AI systems would be beneficial to authoritarian regimes, and concludes by laying out key policy challenges and responses.
Eric Germain, Is arms control over emerging technologies just a peacetime luxury? Lessons learned from the First World War, Jan. 18, 2019
In an article published on the ICRC’s blog “Humanitarian Law and Policy”, Eric Germain ponders lessons from the First World War that could be applied to modern times, in particular to new and emerging technologies. Germain challenges the conclusion, put forward by the author Peter W. Singer, that war leads to a moral relativism under which treaties and laws are no longer upheld. Germain notes, on the contrary, that international treaties continue to matter to parties in conflict and that the international community continues to monitor violations of previously concluded agreements. In this regard, he welcomes deliberations over emerging technologies such as ‘killer robots’ and calls for debates that include as many stakeholders as possible, including the public.
- Health & Inclusion
Rachel Thomas, The tech industry is failing people with disabilities and chronic illnesses, Jan. 15, 2019
In an effort to make the tech industry more inclusive, Rachel Thomas, fast.ai co-founder and professor at the USF Data Institute, shares experiences and advice meant to increase empathy and understanding towards people with disabilities and chronic illnesses.
Dhruv Khullar, AI Could Worsen Health Disparities, Jan. 31, 2019
Dhruv Khullar recalls that questions about AI applications in healthcare traditionally focus on their technological aspects. He asks, however, whether AI could worsen disparities in a healthcare system he describes as already treating patients unequally. Without doubting the potential of AI applications, the author fears that AI could reinforce existing biases by rendering them invisible, and therefore seemingly legitimate.
- Education & Democracy
Janosch Delcker, Finland’s grand AI experiment, Feb. 1, 2019
To repurpose the country’s economy toward high-end applications of artificial intelligence, and in an effort to democratize AI, Finland offers open-access education covering the basics of AI. The grand experiment, a free online course, is part of Finland’s AI strategy to become a world leader in practical applications of AI systems.
AlgorithmWatch, Automating Society — Taking Stock of Automated Decision-Making in the EU, Jan. 29, 2019
AlgorithmWatch, in cooperation with the Bertelsmann Stiftung and the Open Society Foundations, authored a report on automated decision-making (ADM) in the EU. The report analyses the current state of ADM both at the EU level and in 12 individual EU member states. Its objective is to inform the debate about where ADM is being used, what issues are at stake, and what potential ways forward exist.
AlgorithmWatch also provides a link to watch the report’s launch event and discussion in the European Parliament.
- Risk assessment
John Logan Koepke & David G. Robinson, Danger ahead: risk assessment and the future of bail reform, last revised on Dec. 31, 2018
John Logan Koepke and David G. Robinson assert that current pretrial risk assessment instruments do not achieve their stated objectives of decreasing incarceration and addressing racial and poverty-based inequities. The article explains why the current instruments are ill-suited to achieve these objectives and presents guidelines for cases where pretrial risk assessments are conducted. The most compelling arguments are that the data underlying predictive models is inadequate, that these models still embed moral judgements that have not been scrutinized, and that their results might lend decision makers a claimed, yet false, scientific objectivity.
- Face recognition technologies
Makena Kelly, Google, Amazon, and Microsoft face new pressure over facial recognition contracts, Jan. 15, 2019
Over 85 groups, including the American Civil Liberties Union (ACLU), the Electronic Frontier Foundation (EFF), Access Now and Human Rights Watch have urged Google, Amazon and Microsoft to commit not to provide face surveillance technologies to government entities. In their letters, the advocacy groups highlight the power that such partnerships could give government entities to target and single out immigrants, religious minorities and people of color.
- Privacy & Data Protection
Sandra Wachter, Data protection in the age of big data, Jan. 16, 2019
Sandra Wachter warns against pervasive inferential analytics, i.e. privacy-invasive data collection that allows sensitive inferences, such as levels of stress, mental illness or demographic traits, to be drawn. While raising awareness of the potential discrimination caused by such inferences, the author explains why the framework of the European General Data Protection Regulation (GDPR) does not address pervasive inferential analytics.
Jeff John Roberts, Fake Porn Videos Are Terrorizing Women. Do We Need a Law to Stop Them?, Jan. 15, 2019
In this article, Jeff John Roberts examines the legal remedies currently available in the US to victims of deepfake pornography and addresses the potential consequences of legal reform.
Leslie Scism, New York Insurers Can Evaluate Your Social Media Use — If They Can Prove Why It’s Needed, Jan. 30, 2019
In an effort to regulate insurance underwriting, New York’s top financial regulator issued guidance specifying the conditions under which insurers may use algorithms to analyze personal data, including social media and other sources, when setting premium rates. Life insurers will need to demonstrate, through statistical and actuarial analysis, that their algorithms and data are not biased against racial minorities and other groups protected by law.
The regulator aims to address this emerging practice among insurers before it becomes common. The National Association of Insurance Commissioners is also working on standard setting for insurance underwriting, most notably through its “Big Data Working Group”.