Artificial Intelligence & Human Rights: A Workshop at Data & Society

Mark Latonero
Data & Society: Points
May 11, 2018
Photo credit: Elisabeth Smolarz

This is the first blogpost in a series on Artificial Intelligence and Human Rights; it summarizes a multidisciplinary workshop held at Data & Society on April 26 and 27, 2018. It was co-authored by Mark Latonero, PhD, Data & Society Research Lead, and Melanie Penagos, Data & Society Research Analyst. Find links to additional posts in the series below.

Multiple sectors of our global society are grappling to make sense of how AI may transform or alter the way we live, work, and relate to one another and our institutions. At the same time, “Artificial Intelligence” is a slippery and highly contextual concept — the way a mathematician defines AI can diverge significantly from the way a marketing executive or a casual reader of science fiction does. This tension makes discussions about norms that could shape or regulate AI systems a thoroughly contested and challenging space. It is against this backdrop that we convened a multidisciplinary Workshop on Artificial Intelligence and Human Rights at the Data & Society Research Institute in New York.

We invited a group of experts and leaders from civil society, business, academia, international organizations, and government to engage in dynamic discussions around a central theme:

Can the international human rights framework effectively inform, shape, and govern AI research, development, and deployment?

It has been 70 years since the Universal Declaration of Human Rights enshrined fundamental rights such as privacy, work, freedom of expression, assembly and association, education, security, movement, and non-discrimination — all of which are remarkably relevant to current debates about AI ethics and impact. While human rights laws have been adopted (to varying degrees) by every nation on earth, the mechanisms to hold stakeholders accountable for upholding these rights are less clear when it comes to rapidly evolving and emerging technology.

For instance, beyond the rights to privacy and freedom of expression, fundamental rights like the right to dignified work are absent from the debate about AI and the future of labor. Even with the UN Guiding Principles on Business and Human Rights, questions persist as to how the tech industry should anticipate and remedy the human rights impacts of its products and platforms. And researchers in academia rarely assess whether their AI-related designs may have a negative impact from the perspective of rights holders.

At the same time, the IEEE has put forward respect for universal human rights as the first principle for the ethically aligned design of autonomous and intelligent systems. And according to Gasser and Almeida, any emerging model for AI governance “must be situated in and interact with existing institutional frameworks of applicable laws and policies, particularly human rights, as the development and deployment of AI does not take place in a vacuum.”

The goal of the workshop was to consider the value of human rights in the AI space, foster engagement and collaboration across sectors, and develop ideas and outcomes to benefit stakeholders working on this issue moving forward. Key topics the group examined included the challenges of translation between the human rights and technical communities, the nature of human rights in relation to ethics, potential avenues of engagement by sector, the complexities of legal remedy and redress, and the disproportionate impact of AI on vulnerable populations.

Workshop summary

This summary reflects a selection and compilation of the ideas exchanged during the event. The workshop was held under the Chatham House Rule to promote discussion, debate, and dialogue. Attendees came from a range of organizations, including Global Affairs Canada, USAID, New York City government, The Lisbon Council, OECD, the United Nations Office of the High Commissioner for Human Rights, Accenture, Microsoft, DeepMind, Google, Facebook, Gensler Research Institute, Carnegie Mellon University, Cornell University, Oxford Internet Institute, Princeton University, New York University, University of California Berkeley, Max Planck Institute, Data & Society, Digital Asia Hub, Global Network Initiative, Business for Social Responsibility, World Economic Forum, Human Rights Watch, Privacy International, Article 19, AccessNow, Amnesty International, IEEE, Open Society Foundations, Ford Foundation, and The Rockefeller Foundation. Many individuals spoke in their personal capacity, and their attendance does not necessarily reflect the official policy or position of their agency, organization, or company.

Participants at the AI and Human Rights Workshop. Photo credit: Elisabeth Smolarz

To better understand the relevance of human rights in the debate about AI, we posed a series of discussion questions to frame the convening:

  • Is a human rights-based framework applicable to individuals developing and deploying AI, such as engineers working in big tech, developers at startups, researchers in academic labs, or technologists in government? Is “human dignity” sufficiently legible for designers/technologists?
  • Do we have the vocabulary necessary to describe “AI” in relation to human rights? What translation work is necessary between those working on AI and those working on human rights in order to have a meaningful discussion? How do we sift through the hype, hopes, and fears of AI?
  • How would human rights norms be implemented in practice and policy for the private sector? Could existing normative instruments like the UN Guiding Principles on Business and Human Rights operationalize the “protect, respect, and remedy” framework for the AI/tech industry? Would a human rights impact assessment for AI be an effective tool?
  • What are the legal mechanisms for redress and remedy for human rights violations resulting from AI? Where would accountability and responsibility lie for an autonomous system or machine learning algorithm? Can international human rights law influence AI regulation or legislation by national or local governments? Are particular human rights more at risk or a priority?
  • The world’s marginalized and vulnerable are at the greatest risk from harms such as algorithmic discrimination or biases in machine learning. Can a focus on human rights become a way to meaningfully include their voices and perspectives in AI debates? What are reasonable strategies for the inclusion of diverse stakeholders?

Throughout the workshop, participants remarked on the universality of human rights, which could be used to address the global and cross-border nature of AI.

The human rights framework was described as an aspirational roadmap and moral compass for actors in the AI space.

Given the origins of human rights in protecting individuals against authoritarianism, the framework could also be leveraged to address the power asymmetries that may result from AI development. Discussions underscored the relevance of a broad range of human rights beyond the rights to privacy and freedom of expression; other rights, such as the rights of children, women, and people with disabilities, are often neglected in AI discussions.

In debating the relative value of ethics and human rights, participants noted that although the two are complementary, the added benefit of human rights is that they are enshrined in law and arguably should not be derogated from. Furthermore, human rights address power differentials, and their language and legal framework carry moral legitimacy and impose a high cost on human rights violators. There were varying perspectives on translating across fields: some saw the academic field of AI as a wide-open space of sharing and collaboration, while others, particularly civil society groups, viewed it as opaque and difficult to navigate. Relatedly, some AI systems researchers found human rights law and its reporting mechanisms equally opaque.

Conversations centered on the notion that AI is not “magic” — while “AI” may be a convenient shorthand, its uncritical usage obscures the fact that AI technologies are intrinsically linked to social systems and human actors. Claims that AI, or any technology, can be deployed to “solve” complex social or human rights problems should remain highly suspect. For example, the group focused on a case study on the role that social media has played in spreading hate speech and disinformation in Myanmar. The discussion drew on recent reports from the UN Fact-Finding Mission on Myanmar that Facebook, in particular, was used to incite violence against the Rohingya population. Discussants critiqued claims that AI would solve the proliferation of extreme and violent content on platforms, citing AI’s limited ability to understand the context of speech and the need for a “human in the loop” to comprehend the problem.
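
To make the “human in the loop” point concrete, here is a minimal sketch, in Python, of how a platform might route a content classifier’s output so that ambiguous speech is deferred to human reviewers rather than acted on automatically. All names, thresholds, and the `classify` function are hypothetical illustrations, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # "remove", "human_review", or "allow"
    score: float  # classifier's estimated probability the post violates policy

def route_post(text: str, classify) -> ModerationDecision:
    """Route a post using a hate-speech classifier plus human review.

    `classify` is a hypothetical model returning P(violating) in [0, 1].
    The middle band is deliberately wide: context-dependent speech
    (satire, reclaimed terms, local idiom) is exactly where automated
    systems fail, so those cases go to a human reviewer.
    """
    score = classify(text)
    if score >= 0.98:   # act automatically only on near-certain violations
        return ModerationDecision("remove", score)
    if score >= 0.40:   # ambiguous: defer to human judgment
        return ModerationDecision("human_review", score)
    return ModerationDecision("allow", score)
```

However the thresholds are tuned, some band of contested speech must reach people who understand the local context — which is precisely what discussants argued was missing in the Myanmar case.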

Participants also discussed the increasing ubiquity and accelerating pace of AI development, which makes it difficult for civil society organizations (CSOs) to keep up with the potential impacts on human rights. There were related questions about how to increase the capacity of CSOs to effectively engage in AI debates and bring human rights to the center of these conversations. The group identified the need to integrate human rights practitioners and advocates into a broader range of standards-setting groups and similar AI technical spaces. Likewise, recommendations were made to offer human rights training and curricula to AI designers and developers.

Participants at the AI & Human Rights Workshop. Photo credit: Elisabeth Smolarz

From a legal perspective, it was agreed that human rights law provides a right to remedy. But in an AI context, remedy can be difficult to realize. This discussion led to a series of questions: How do you provide remedy to an individual for group harms? How do you define the harms caused by automated systems and identify who is responsible? Are there lessons to draw from criminal law or product liability? How do you respond to a violation when AI is used in the context of war? How can you remediate what you cannot readily see or know? Stemming from this, there were calls to identify methods to systematically track harms and make them more visible to AI designers and other stakeholders. There was also an appeal to use the full range of human rights to promote a holistic approach to regulating the variety of AI technologies and systems.

For technologists, it is imperative to continue to promote human rights in the ethically aligned design of autonomous and intelligent systems. In addition, it is vital to think about who is being excluded from AI systems and what is missing from the datasets that drive machine learning algorithms. These blind spots tend to produce disparate impacts on vulnerable and marginalized groups, rendering these communities and their needs invisible because there are too few feedback loops through which individuals can give input. And while collecting even more personal data might make algorithmic models better, it would also increase threats to privacy.
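
As one concrete illustration of auditing for such blind spots, here is a minimal sketch, in plain Python, that compares each demographic group’s share of a training dataset against its share of a reference population and flags severe under-representation. The field names, threshold, and data are hypothetical; real audits depend heavily on how (and whether) group membership can responsibly be recorded at all.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.5):
    """Flag groups whose share of the dataset falls well below their
    share of a reference population.

    records          -- list of dicts, e.g. [{"group": "A", ...}, ...]
    group_key        -- field naming the demographic group (hypothetical)
    reference_shares -- {group: expected share of the population}
    tolerance        -- flag a group if observed/expected is below this
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected and observed / expected < tolerance:
            flags[group] = {"observed": observed, "expected": expected}
    return flags

# Example: group "C" never appears in the training data at all.
data = [{"group": "A"}] * 70 + [{"group": "B"}] * 30
print(representation_gaps(data, "group", {"A": 0.5, "B": 0.3, "C": 0.2}))
# -> {'C': {'observed': 0.0, 'expected': 0.2}}
```

A check like this only surfaces who is missing; it says nothing about why, which is where the feedback loops discussed above come in.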

The poorest populations are often the most likely to be impacted by these technological developments, and unethical experiments carried out on the weakest are often transposed to the rest of society.

Opportunities were identified for the human rights community to highlight these populations and their contexts from a business and human rights perspective, especially as sensitive data issues may arise, along with increased risks of surveillance.

Academic researchers can be seen as on the front lines of examining the unforeseen consequences of technology; the Fairness, Accountability, and Transparency community, in particular, has played a leading role. This awareness is critical because some academics have been implicated in conducting questionable experiments — the workshop used a case study of the dubious Stanford experiment that claimed a neural network could detect “gay faces.” The researchers’ dataset was rife with missing data, just one of several issues suggesting that a pervasive data ethics gap currently exists.

Businesses are key stakeholders in AI development, and many conversations returned to the need for the private sector to take a leadership role in assessing and remedying the potential human rights impacts of its products. Participants discussed the linked and distributed social responsibilities around rights that should be activated during product development and deployment, and cited further complexities for business given the range of services, the scope, and the unpredictability of AI. It is clear that the UN Guiding Principles on Business and Human Rights are a critical entry point for helping businesses understand their responsibility to respect human rights. However, strategies are needed for businesses to move from aspirational principles to operational mechanisms for remedy. Furthermore, recent discussions about Algorithmic Impact Assessments could benefit from two decades’ worth of business engagement with Human Rights Impact Assessments.

Governments also play a vital role in AI research, strategy, and regulation. The notion that the nation that leads the way on AI will be the “ruler of the world” was critiqued, as was the impact of a potential AI “arms race” among states. It was noted that the government of China has vowed to be the global AI leader by 2030, the EU has funneled billions of dollars into AI, and the U.S., France, UK, and Japan all have AI strategies. Yet human rights are missing from these national strategies. Stakeholders should advocate for governments to uphold human dignity and international human rights law in policy discussions. New York City’s new law to make algorithms accountable was noted as a potential entry point for government regulation.

Data & Society’s Data & Human Rights Research Lead Mark Latonero addresses the Workshop. Photo credit: Elisabeth Smolarz

Next steps for an effective human rights framing of AI

It is important to note that convening this small workshop in New York meant that many important voices were missing from the conversation. Therefore, our intention is for the questions and outputs from the workshop to be used by the broader community to move the discussion forward. Follow these links to the workshop agenda and two case studies (here and here).

Some needs that workshop participants identified include:

  • Mapping the wider AI and human rights landscape and the power relations between stakeholders. There is a need to identify strategic entry points to engage with government policymakers and AI systems researchers, for example by submitting public comments on industry- or government-related AI debates and decision-making.
  • Building up the AI and human rights literature and assembling more case studies to highlight AI’s impact on human rights. There is currently a knowledge gap in this area as AI is often seen through an ethical lens and less so from the perspective of human rights.
  • Creating strategies for computer and social sciences disciplines to engage with human rights and vice versa. It’s necessary to identify the opportunities for collaboration and network building, such as connecting the Fairness, Accountability, and Transparency and human rights communities.
  • Designing ways to build trust and scale capacity for human rights organizations, AI developers, engineers, and the private sector. Finding a way to include smaller AI companies and entrepreneurs. In addition, including non-tech companies in the AI and human rights debates and making the case for human rights impact assessments.
  • Identifying opportunities for AI systems to have a positive impact on advancing human rights and the accompanying safeguards to protect against unintended harm.
  • Developing additional resources to inform and guide the broader community on the current realities and potential future of AI and human rights. Drawing from existing research from the responsible data and humanitarian communities. Engaging with existing multi-sector AI associations.
  • Convening similar, more inclusive workshops and conferences on a regular basis, particularly in the Global South. Engaging more thoughtfully with centers of power for AI development such as China.

Additional posts and reflections are forthcoming. If you have any ideas or wish to collaborate, please reach out:

Mark Latonero, PhD, Lead, Data & Human Rights, mark@datasociety.net

Melanie Penagos, Research Analyst, Data & Human Rights, melanie@datasociety.net

Mark Latonero is Lead, Data & Human Rights at the Data & Society Research Institute, and a Fellow at UC Berkeley, the USC Annenberg School, and Leiden University.