Artificial Intelligence: What’s Human Rights Got To Do With It?

Christiaan van Veen
Data & Society: Points
May 14, 2018 · 10 min read
Image: JP Rosa

This is the second blogpost in a series on Artificial Intelligence and Human Rights, co-authored by: Christiaan van Veen (Center for Human Rights and Global Justice at NYU Law) & Corinne Cath (Oxford Internet Institute and Alan Turing Institute).

Why are human rights relevant to the debate on Artificial Intelligence (AI)? That question was at the heart of a workshop at Data & Society on April 26 and 27 about ‘AI and Human Rights,’ organized by Dr. Mark Latonero. The timely workshop brought together participants from key tech companies, civil society organizations, academia, government, and international organizations at a time when human rights have been peripheral in discussions on the societal impacts of AI systems.

Many of those who are active in the field of AI may have doubts about the ‘added value’ of the human rights framework to their work or are uncertain how addressing the human rights implications of AI is any different from work already being done on ‘AI and ethics’. They may argue that the potentially negative effects and misuse of AI can be dealt with effectively through domestic legislation, such as criminal law and safety regulations. Or that human rights norms are too vague to apply to the complex technical field of AI and that human rights courts and oversight bodies are too cumbersome and slow to deal with such matters, risking the stifling of innovation in a fast-changing field. What is more, while some in the AI community are actively engaged in addressing questions of fairness, accountability, and transparency (FAT), others may perceive broader societal questions about such issues as inequality, discrimination, and poverty as matters for states to deal with, acting as the primary bearers of human rights responsibilities.

But given the wide-ranging implications of AI for societies and individuals, it is certain that internationally protected human rights will be affected by developments in this field. The goal of this blog is not so much to dive into concrete examples of human rights harm, but rather to have a discussion about what ‘value’ human rights may ‘add’ to the AI debate. We will then briefly go into the peripheral role of human rights in the AI debate to date and indicate what the global human rights community, from UN bodies to domestic human rights institutions and human rights NGOs, can do to ensure that human rights become part and parcel of discussions on the future of AI.

Do Human Rights ‘Add Value’?

While one can certainly point to limitations in applying the human rights framework to the debate on AI, the actual challenge for those working in the human rights field is to address the above concerns of the AI community and to fill current knowledge gaps about the concrete applicability of human rights to AI. When explaining the ‘added value’ of the human rights framework, it is too simple and formalistic to respond to these concerns merely by stating that human rights are codified in international legal agreements and in domestic constitutional and legislative provisions, and that, therefore, they apply. We need to move beyond such formalism.

As a starting point, it is important to underline that the development and use of AI is likely to severely ‘disrupt’ the distribution of power in the world.

It may create opportunities for new forms of oppression and disproportionately affect those who are the most powerless and vulnerable. Human rights exist exactly to address power differentials and to provide individuals, and the organizations that represent them, with the language and procedures to contest the actions of more powerful actors, such as states and corporations. As a language and legal framework, human rights are themselves a source of power: they carry significant moral legitimacy, and the reputational cost of being perceived as a human rights violator can be very high. They may, therefore, be an appropriate and useful tool to wield against those who cause harm when using the considerable power of AI systems, especially in the absence of appropriate domestic remedies.

The international human rights regime has led to the creation of a range of international, regional, and domestic institutions and organizations that can be seized by often-vulnerable individuals and their representatives. They use the powerful language of human rights to put pressure on violators to conform to widely accepted standards. Today, there is a global network of United Nations human rights bodies, human rights NGOs, human rights defenders, courts, and national human rights institutions that provide spaces in which human rights disputes caused by the development and use of AI systems can be aired and addressed constructively, ensuring violators are held to account. These human rights bodies, procedures, and institutions are more responsive than is often believed. They are in many cases able to respond to human rights issues at relatively short notice. While there is a need for additional domestic legislative protection and ethical guidelines, the pressure that can be brought to bear by human rights activists and organizations is significant and unique. It can be an additional safeguard against the potentially negative ramifications of the design and use of AI systems.

Even though some claim that human rights are ill-defined, they have a more clearly defined meaning than often-invoked ethical principles. Human rights are legal norms that have been further explained and specified in the jurisprudence of a range of well-respected courts and other human rights bodies. Human rights are also a global ‘vernacular’, used widely by NGOs, human rights defenders, and other activists around the world and given meaning in the process. What is more, the human rights framework has a universal scope and is therefore uniquely suited to address challenges of a global, cross-border nature, such as AI. Domestic standards lack that characteristic, and ethical standards, which are not embedded in a universal legal framework, are more vulnerable than rights to the claim of cultural relativism.

Where are the Human Rights?

Despite the promise of human rights, one would be hard-pressed to find many references to them in the current debate on AI, and such references generally tend to be tokenistic. Instead of human rights being central to the debate, many of the broader questions about the implications of AI for our societies, individuals, and the values we hold dear have been discussed under the heading of ‘AI and ethics’. Nothing written here is meant to dismiss the relevance of exploring the ethical implications of the use of AI. The human rights community certainly has much to learn from the debate on ‘AI and ethics’. But there are also limitations to the ‘ethics paradigm’.

Corporations are dominant players in the AI debate, and they have taken it upon themselves to set out their visions and strategies for the future of AI. IBM, for example, recently reiterated its “commitment to the ethical, responsible advancement of AI.” A more comprehensive vision comes from Microsoft, which in early 2018 released its take on the future of AI, The Future Computed. That strategy is emblematic of the focus on ethics instead of human rights, with Microsoft promising to be guided by ethical principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability in its work on AI. Many of these strategies fail to refer to human rights at all, including any reference to the UN Guiding Principles on Business and Human Rights, commonly known as the ‘Ruggie Principles’, which set out the human rights responsibilities of corporations.

The problem with this ethics paradigm in corporate strategies is that ethical values such as fairness or inclusiveness have no widely agreed-upon meaning. The inherent nebulousness of such ethical principles makes them rather unhelpful for ensuring ‘good’ corporate behavior. When is certain behavior ‘unfair’? And, more importantly, to whom are corporations accountable in case of ‘violations’ of these ethical principles (if these principles can be ‘violated’ at all)?

In other words, who will determine when an ‘ethics violation’ has taken place and what the consequences should be?

Microsoft, for instance, proposes an internal Ethics Committee composed of senior company officials to monitor the realization of ethical principles. But surely that is a rather weak alternative to scrutiny by public and external accountability mechanisms of the international human rights regime, which over decades have developed an extensive jurisprudence of what is and what is not a human rights violation and what adequate remedies may look like.

It would, however, be unfair to simply expect individual companies to lead the way here, especially since legislators and governments have done little thus far to spell out the human rights dimensions of AI. A 2017 study by researchers at the Oxford Internet Institute of recent AI strategies by the White House, the European Parliament, and the UK House of Commons concludes that these plans are devoid of references to the human rights framework. Instead, the societal implications of AI in relation to transparency, bias, privacy, and accountability are primarily considered to be ‘ethical issues’. Similarly, in a press release last month, the European Commission outlined a European approach to AI that will be guided by ‘ethical guidelines’, which are currently under development. While those ethical guidelines themselves are said to be “based on the EU’s Charter of Fundamental Rights”, there is no indication of what that would mean in practice, and healthy skepticism is in order when ethical guidelines are the product and human rights merely the inspiration.

A Future for Human Rights & AI?

This blog post is not the appropriate place for a lengthy analysis of the future of Human Rights & AI. Instead, we propose to make a few brief remarks about what the global human rights community, from UN bodies to domestic human rights institutions and NGOs, can do to ensure that human rights become part and parcel of discussions on the future of AI.

Only if human rights experts and practitioners step up their game in explaining the added value of human rights to AI, and make their relevance more concrete through activism, case-law and political action, can the international human rights framework have any impact on the development of a field that may radically reshape our world.

First of all, there is much more that United Nations human rights bodies could do to address the human rights challenges posed by AI and related new technologies. The High Commissioner for Human Rights, the ‘Special Procedures,’ the human rights treaty bodies, the Human Rights Council and the Universal Periodic Review are UN institutions, bodies, and procedures that have the power to investigate, monitor, and address the human rights issues that result from developments in AI. But with some exceptions, including reports by UN Special Rapporteurs on the use of ‘killer robots,’ the impact of the use of ‘care robots’ on the rights of older persons, and the impact of AI and related new technologies on the poor in the United States, as well as a report by the High Commissioner on the gender digital divide and artificial intelligence, very little sustained and substantive attention has been paid to these issues by UN human rights bodies to date. In the absence of more attention at the UN level, the charge that the human rights regime is not providing much clarity and guidance to the AI debate is a valid one.

Second, there is a need for the human rights community beyond the UN, including human rights NGOs, human rights defenders, national human rights organizations and others, to litigate, protest, advocate, research, and write about the human rights implications of AI. These implications will not be sufficiently discussed at the UN or other international fora unless they are brought to their attention with great energy by activists. Leading international human rights NGOs, such as Human Rights Watch and Amnesty International, are expected to play a leading role in that regard. Amnesty International announced an AI and human rights initiative last year. Human Rights Watch is already doing important work on the human rights impact of ‘killer robots.’ It is important, despite inevitable resource constraints and competing priorities, to further ensure a steady stream of reports, campaigns and, eventually, case law, to raise awareness of the human rights implications of AI systems, and to antagonize powerful actors. And the two NGOs mentioned above are certainly not the only two that matter: a whole range of groups, from labor unions to smaller and domestic human rights organizations, can play an important role.

Through increased involvement, the human rights community can play a key role in enriching the current debate on the societal impacts of AI. For one, there is a tendency to focus on corporations developing and using AI, but human rights organizations are well-placed to intensify the investigation of the role of states and underline their human rights obligations. To name just one example: China, a country with a less-than-spotless human rights record, is vowing to become the world’s leader in AI and has touted AI’s ability to “significantly elevate the capability and level of social governance,” a reason for concern, to say the least.

There is also a real need to move beyond the narrow set of human rights concerns that have dominated the wider discussion about digital technology and human rights since the Snowden revelations. There is more at stake in the debate on AI than privacy and freedom of expression alone (however relevant these rights are in this context), including the impact of AI on the rights of women, children, and persons with disabilities, as well as the implications for social rights such as the right to education and the right to health, and economic rights such as the human right to work (not to be confused with the term ‘right to work’ used by some actors in the United States) and rights at work. Relatedly, we should not just focus on what AI systems and other new technologies mean for middle-class individuals in Europe and the United States, but also get serious about their implications for basically everyone else, especially the most marginalized in every country.

At a time when the human rights regime is already under significant strain from populist and authoritarian forces, the global human rights community of UN bodies, domestic human rights institutions, NGOs, and activists is well-advised to put the challenge that AI poses to human rights higher on the agenda. The penalty for the community’s absence or tardiness in this debate could be that the global human rights framework, built over many years of struggle, will increasingly be perceived as irrelevant in a future more and more defined by AI.


International human rights lawyer. Senior advisor to the UN Special Rapporteur on extreme poverty and human rights & based at NYU Law.