Artificial intelligence and the future of human rights

Sherif Elsayed-Ali
Published in Amnesty Insights
Oct 19, 2017

One of Amnesty’s annual letter writing marathon events © Marieke Wijntjes / Amnesty International

Amnesty International started its first structured programme of work on technology and human rights about three years ago, largely in response to Edward Snowden’s revelations about mass surveillance programmes run by the USA and UK.

We knew that, as the world’s largest human rights organization, we had to take a proactive and deliberate approach to the impact that new technologies were having on human rights. If the freedoms we had come to expect and cherish were to thrive, not only on the internet but across all the other technologies driving the fourth industrial revolution, then civil society would need to rise to the task.

At the same time as Amnesty was starting its technology and human rights work, the field of artificial intelligence (AI) was experiencing a fantastic transformation. The combination of big data, faster processing power and cloud computing had supercharged machine learning techniques that had been around for decades.

In the space of a few years, AI arguably became the most influential technology shaping consumer products and services, and increasingly public services. It was clear that AI would have a huge social impact, with many benefits but also the potential for significant harm.

At Amnesty, we realized we had to be proactive in understanding AI, including its positive and negative impacts, and in promoting respect for human rights in the development and use of AI.

In June this year, our Secretary General, Salil Shetty, announced Amnesty’s AI and human rights initiative, which brings together our research, campaigning and innovation work. Its aim is to maximize the beneficial use of AI for the protection and promotion of human rights, and to minimize the risks to human rights from both intended and unintended developments and uses of AI.

Amnesty’s AI and human rights initiative is in its early days, but we have set out clear priorities:

1. In terms of the beneficial uses of AI for human rights, we are piloting the use of machine learning in our human rights investigations and have exciting plans to expand the use of AI within our research and campaigning, in close collaboration with technical experts and partners. We are strongly supportive of the AI for Good movement.

2. In terms of the existing and potential negative impacts of AI on human rights, we base our work on well-grounded research and investigations, combined with constructive engagement with companies and policy makers. Our areas of focus are accountability, transparency and access to remedies in the following contexts:

o The potential for discrimination in the use of machine learning, particularly as it relates to policing, the criminal justice system and access to essential economic and social services;

o The potential development of autonomous weapons systems;

o The impact of automation on society, including on the right to work and to a livelihood;

o The impact of AI on privacy and trust in information.

We are also interested in issues around the long-term safety of AI, including potential existential risks, even if these are relatively remote possibilities. We believe that today’s governments, companies and civil society collectively owe it to future generations to carefully consider critical risks to humanity’s survival, however remote, and to take active and deliberate steps to prevent the worst potential outcomes.

A key factor for the success of Amnesty’s work on AI and human rights is constructive engagement with the diverse group of actors involved in developing AI and driving the development of ethical standards for its use.

This is why we are joining the Partnership on AI to Benefit People and Society. The organization brings together some of the world’s top AI scientists, the tech industry, academia and civil society. The Partnership on AI aims to address issues of fairness and inclusivity, explanation and transparency, security and privacy, and values and ethics, as well as to foster aspirational efforts in AI for socially beneficial purposes.

Amnesty is joining the Partnership on AI because it is a key global platform for advancing beneficial uses of AI and developing the ethical parameters of its use. We believe that with a technology as powerful and complex as AI, constructive dialogue and engagement between academia, business and civil society at this relatively early stage is critical to maximizing the benefits and minimizing the risks to human rights, now and in the future.

