Advocating for Human Dignity, Human Rights and Inclusive Societies

GDPi at the 2019 UN Global Summit on AI for Good

Global Digital Policy Incubator
Jun 8, 2019

By Eileen Donahoe (@EileenDonahoe)

The Global Digital Policy Incubator had the incredible opportunity to curate one of the five vertical work days at the 2019 UN Global Summit on AI for Good, held at the International Telecommunication Union in Geneva on May 28–31, 2019.

Our track had a distinct purpose, and we asked participants to shift gears from the rest of the Summit. Rather than focus on advancements in AI technology itself or on new applications of AI for good, our job was to assess the human impact and risks of AI, as well as its implications for human dignity and inclusive societies. The track covered both normative and practical challenges that must be addressed by anyone who hopes to deliver on the promise of AI for good.

On the one hand, the rationale behind the Summit was to support those two core long-term goals — upholding human dignity and building inclusive societies. All of the participants had been drawn to the Summit with the shared hope that AI could be put to use to solve some of the world’s most intractable problems and help to achieve the UN Sustainable Development Goals (SDGs).

On the other hand, we asked participants to acknowledge that, along with the tremendous potential for good, there is a whole spectrum of risks associated with AI, ranging from labor displacement to erosion of privacy, liberty, and human agency; from the reification of bias and discrimination to loss of fairness in decisions that impact people’s rights. Some of the darkest concerns relate to threats to human life from lethal autonomous weapons, and the corresponding loss of human accountability for those life-or-death decisions.

The core point that we underscored in the track: it would be a tragedy if AI were deployed “for good,” for the purpose of serving the most vulnerable, but ended up putting those populations at greater risk because we did not think through the implications in advance. Simply put, the goal of the track was to expand our awareness of — and our sense of responsibility for — the wide range of risks associated with AI for people and societies.

Our thesis was that AI is being developed and deployed so rapidly — at least in the connected and highly digitized parts of the world — that we have not had a chance to think through its human implications. Even the technologists working to apply AI for good may not adequately understand the risks associated with their own work. Similarly, governments that rely upon AI for decisions that impact citizens’ rights often do not understand the basis for those outcomes, or what their reliance on machine-driven decisions means for democratic accountability.

One of the core challenges that must be tackled is the need for much greater inclusion in the technology community itself. We won’t inspire trust in AI-based decisions if we fail to address both the lack of diversity in data and the lack of diversity in the coding community.

Accordingly, the track raised themes that could be brought into every other track of the program, including conversations on AI in health, education, agriculture, space, climate, medicine, poverty alleviation, and other fields. The concerns outlined above frame the primary questions that must guide the community of experts in each of these areas.

We all bear responsibility for protecting society against the unintended harmful effects of AI on humans; for making sure that AI is utilized to enhance human dignity rather than to undermine human rights; for ensuring that AI-based decisions are unbiased and fair; and for bolstering efforts that allow the benefits of AI to spread widely so that existing wealth inequality and digital divides are not exacerbated. The Summit provided a great opportunity to address these challenges.

We broke the track into four overarching themes:

  1. AI and Digital Identity — Essential Elements of Good Digital Identity Systems: Digital identity schemes are already being deployed around the world for many socially beneficial purposes such as expanding access to social services, enhancing financial inclusion, advancing public health, and improving delivery of humanitarian aid. Digital identity systems could be the key to inclusive growth, and even have the potential to solve one of the world’s most intractable problems: the lack of legal identity for over 1 billion people on the planet. But these systems can put people at risk if they are not developed and deployed in user-centric ways that protect people’s rights. The segment explored the essential elements of a user-centric model for digital identity and showcased the World Economic Forum’s Platform for Good Digital Identity. The segment was led by Manju George (Head of Platform Services, Digital Economy & Society, World Economic Forum) and featured Carlos Moreira (Wisekey), Thea Anderson (Omidyar Network), Marten Kaevats (Government of Estonia), Carmela Troncoso (EPFL), Vincent Graf Narbel (ICRC), and Natalie Smolenski (Learning Machine).
  2. Protection of Vulnerable Populations from AI-Related Risks and Inclusion of Minority Groups: The segment included expert discussions on core commitments to equal protection and non-discrimination principles, challenges associated with gender and racial diversity in data and coding, and special responsibilities to protect the rights and needs of children. It also focused on the inclusion of minority groups in the design and deployment of AI. The session included pitches about two breakout projects: a UNICEF project on special responsibilities in protecting the rights and needs of children (presented by Steven Vosloo, UNICEF) and the Technoladies project, founded by 17-year-old Ecem Yılmazhaliloğlu and incubated by AI4ALL, which showcased what it looks like to make tangible progress toward the goal of including young, diverse women in the coding community. The segment was led by Brandie Nonnecke (Founding Director, CITRIS Policy Lab, University of California, Berkeley) and Steven Vosloo (Digital Policy Specialist, UNICEF). Kathy Baxter (Salesforce) and Rebeca Moreno Jiménez (UNHCR Innovation Service) also presented their work in this segment.
  3. Enhancing the Quality and Diversity of the Digital Information Ecosystem: This segment focused on how AI affects freedom of expression and access to information, looking through the lens of UNESCO’s ROAM (Rights, Openness, Accessibility, and Multi-stakeholder participation) framework. We brought attention to the impact of algorithmic information feeds on human agency, choice, and personality. The segment was led by Bhanu Neupane (Program Specialist in Communication and Information, UNESCO) and featured Kathleen Siminyu (Africa’s Talking), Nick Bradshaw (Cortex Ventures), Nigel Hickson (ICANN), Frits Bussemaker (Institute for Accountability and Internet Democracy), and Francesca Rossi (IBM).
  4. Human Dignity and Inclusive Society in Practice: In the closing segment of this track, we looked at a variety of ways in which AI can be used to reinforce human dignity and human rights in practice, and discussed governance and regulation of AI through a human rights lens. The session included a mini-keynote by Jovan Kurbalija (Executive Director and Co-lead, UN High-Level Panel on Digital Cooperation), a demo of how AI can be used to detect and combat “deep fakes” by Marc Warner (CEO, Faculty), and a lightning talk about how to embed human norms into robots by Bertram Malle (Brown University). The session also included a closing panel on how to utilize the existing universal human rights framework to govern AI, featuring a range of civil society actors, including Wafa Ben-Hassine (Access Now), Malavika Jayaram (Digital Asia Hub), Mark Latonero (Data & Society), Regina Surber (ICT4Peace/Zurich Hub for Ethics and Technology), and Megan Metzger (Stanford GDPi). Roya Pakzad (Stanford GDPi) helped curate this segment and served as data rapporteur.

The bottom-line message that we underscored in this track was that we will not make real gains toward the SDGs or capitalize on the vast potential of AI for good if we fail to protect people from the potential harms of AI. AI cannot be “for good” if it does not protect human dignity and human rights, and AI will not be “for good” if the communities of data scientists, coders, and policymakers engaged in AI projects are not diverse and inclusive. It is our shared responsibility to figure out how to simultaneously expand access to the benefits of AI while protecting human dignity and human rights.

For the full program of the track, please see here. For footage from the event, see here.

For a Medium post on the segment led by Brandie Nonnecke and Steven Vosloo, please see here.

For future news and notifications about upcoming events, please subscribe to GDPi’s mailing list. Follow GDPi on Twitter at @Stanford_GDPi.


Global Digital Policy Incubator

Stanford’s GDPi is a multi-stakeholder collaboration hub to inspire policy innovations that enhance freedom, security, & trust in the global digital ecosystem.