The Future of Human-Centered AI: Governance Innovation and Protection of Human Rights

Global Digital Policy Incubator (Stanford's GDPi) · Apr 24, 2019

On April 16, 2019, Stanford’s Global Digital Policy Incubator (GDPi), in partnership with 14 organizations from civil society, academia, and international organizations, organized a one-day conference titled The Future of Human-Centered AI: Governance Innovation and Protection of Human Rights. This was the second annual edition of an event that brings together experts from a wide range of sectors to discuss the potential benefits and risks of AI, showcase tools that mitigate its harmful effects, and explore the application of the international human rights framework to AI and machine learning.

In her opening remarks, GDPi’s Executive Director Eileen Donahoe outlined the emerging risks of AI-based systems and the benefits of the universal human rights framework as a foundation for addressing them.

Read her full remarks below and watch the video here.

We are at an important juncture when it comes to building public trust in AI.

On one hand, the beneficial potential of AI for society may be incalculable. But over the past year, public alarm has increased dramatically with respect to a range of harmful applications of AI: algorithmically generated deepfakes, authoritarian uses of facial recognition, and lethal autonomous weapons that could be deployed without the constraint of the rule of law — and beyond human control.

Even when AI is applied for clearly beneficial purposes, however, it can still have significant negative consequences, ranging from labor displacement to loss of privacy to the reification of bias and discrimination.

In an even deeper vein, reliance on AI can also undermine more abstract values such as fairness, transparency, and accountability by effectively taking humans out of decision loops. For example, governing actors increasingly rely on machines for decisions that impact citizens’ rights related to bail, parole, policing, employment, and eligibility for social services — but often without understanding the basis for those decisions. The inscrutability of machine decisions, due to the inherent opacity of algorithmic systems, is a direct challenge to the core concept of democratic accountability. Enhancing human understanding of the basis of machine decisions will be essential to ensuring such accountability.

The bottom line is that if we fail to address the range of significant risks of AI for people and society, we will not realize its beneficial potential either. Instead, the backlash from citizens, employees, consumers, and governments will take over. And it will be justified.

But here it is necessary to add one other layer of complication to this already complex picture. While we are justifiably focused on the risks of AI-based systems that are already deployed throughout our own societies, we must remain cognizant of the fact that 50% of the world’s population is not yet even digitally connected. That lack of connectivity means that half of the planet is not being included in the AI revolution at all. Lack of inclusivity in AI for so many people around the world will not only further exacerbate global economic inequality, but also create a broad array of new digital divides.

Sharing the beneficial potential of AI must be seen as a human rights priority. Our job is to figure out how to capitalize on AI for people everywhere while also protecting against risks we already know about — and making sure we do not spread those risks to others.

So how do we protect against this wide range of risks? As many of you know, Stanford University recently launched a major new strategic initiative on Human-Centered AI. One of the three core substantive pillars of that larger initiative is the human impact of AI. Today’s event is geared directly toward that focal point — the impact of AI on people and society.

We believe the existing universal human rights framework is the best starting place for developing global policy related to AI, in four important respects.

First, human rights principles are universally applicable, which makes them well-suited to a global digital environment. Human rights rest on the belief in the inherent dignity of the human person, and inhere in every person by virtue of their humanity. This foundation is an important starting point for thinking about the human impact of AI.

Second, the human rights framework provides a basis for assessing the wide spectrum of risks already associated with AI, related to privacy, accountability, labor displacement, and the risk of embedding bias and discrimination in current and future products.

Third, human rights principles provide a governance framework that outlines the obligations, roles, and responsibilities of both governments and the private sector with respect to protecting and respecting human rights.

Fourth, and perhaps most importantly, the existing human rights framework enjoys global recognition and status under international law. The founding documents of the human rights system were drafted after the crisis of World War II and agreed to through international negotiations.

While there are many emerging initiatives seeking to develop ethical principles for AI, the existing human rights framework already enjoys a level of global legitimacy and recognition that is unmatched — and probably unmatchable — in our current geopolitical context. The combination of universal applicability and global recognition makes human rights peculiarly well-suited to this global digital moment.

The primary goal of this event is to reinforce reliance upon the existing international human rights framework as governments, the private sector, technologists, and advocates work to generate responsible policies and governance innovations related to AI. Our hope is that, with this event, we can help connect people at Stanford with international stakeholders, help facilitate deeper reflection, and encourage joint problem-solving to ensure that human rights are protected in our AI-driven society and the benefits of artificial intelligence are widely shared.

For more in-depth analysis by Eileen Donahoe and GDPi Associate Director for Research Megan Metzger, please see their recent article in the Journal of Democracy.

The opening remarks were followed by a keynote conversation with Michelle Bachelet, United Nations High Commissioner for Human Rights, and Brad Smith, President and Chief Legal Officer of Microsoft. Moderated by Eileen Donahoe, the conversation centered on integrating universal human rights values into the development and deployment of artificial intelligence tools.

The two panelists highlighted the importance of multi-stakeholder, cross-sectoral discussions on artificial intelligence, involving not only engineers and designers, but also scientists, ethicists, marginalized groups, and the entities and institutions that ultimately deploy AI tools. The conversation covered a variety of potentially malign uses of AI, including shrinking the civic space, discrimination, and hidden threats to privacy. The panelists also discussed new approaches to policy development surrounding contentious uses such as facial recognition, and outlined the areas in which more work on enhancing gender inclusion is needed.

Listen to the full conversation below.

L to R: Jessie Brunner, Julia Dressel, Judy Estrin, Londa Schiebinger.

Next, three speakers delivered mini-keynotes covering different aspects of AI, including the opportunities and challenges of addressing different forms of bias. The session was introduced by Jessie Brunner, Senior Program Manager at the WSD Handa Center for Human Rights and International Justice at Stanford.

Londa Schiebinger, Director of the Gendered Innovations in Science, Health & Medicine, Engineering, and Environment project at Stanford, analyzed several examples of tools and scientific studies that relied on biased data, outputs, or algorithms to produce their results. She presented a map of solutions that address gender and ethnic bias in particular. Julia Dressel, a software engineer at the non-profit Recidiviz, drew attention to the dangers of applying AI and ‘black-box algorithms’ to the criminal justice system. She described a Recidiviz project that revealed that specialized prediction software was no better than humans at correctly estimating the chances of recidivism. Judy Estrin, Silicon Valley pioneer and CEO of JLABS, presented an overview of the dangers of AI-based models for basic human rights and the importance of prioritizing friction over growth, progress, and scaling in defining a human-centered approach to technology.

Watch the full segment here.

L to R: Kip Wainscott, Marc Warner, Karina Halevy, Fabro Steibel.

The next panel featured three lightning talks covering AI and the information space, focusing on specialized tools built to address specific issues. The discussion was facilitated by Kip Wainscott, Senior Advisor at the National Democratic Institute for International Affairs.

Marc Warner, CEO of Faculty, outlined his company’s strategies to address the emerging threat of ‘deepfakes’ — machine learning techniques that enable the manipulation of faces and voice patterns in video and speech. The rapid democratization of the underlying technology creates new spaces for abuse by malign actors and may lead to a future in which the foundations of trust are undermined. Karina Halevy, an alumna of Stanford’s AI4ALL program and founder of LingHacks, presented Humanly — a Natural Language Processing (NLP) tool that she built to combat cyberbullying and decrease levels of toxicity on social media platforms. She detailed the goals, data, model, results, and take-aways from the creation and implementation of the tool. Finally, Fabro Steibel, Executive Director of ITS Rio (Institute for Technology & Society in Rio de Janeiro), introduced the audience to Pegabot, a bot detection tool that assesses the probability of a social media account being automated. He described the usefulness of the tool in supporting journalists, NGOs, and responsible, evidence-based regulation.
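To give a concrete sense of what "assessing the probability of a social media account being automated" can look like in practice, below is a minimal, hypothetical sketch: a logistic regression over a few invented profile features that outputs a bot-likelihood score. The feature set, toy training data, and use of scikit-learn are assumptions made purely for illustration and do not reflect Pegabot's actual methodology.

```python
# Hypothetical sketch of a bot-likelihood scorer in the spirit of tools like
# Pegabot. This is NOT Pegabot's actual model or feature set; the features,
# training data, and labels below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy feature vectors: [posts_per_day, follower_to_following_ratio,
#                       account_age_days, fraction_of_posts_that_are_reshares]
X_train = np.array([
    [2.0,   1.50,  900, 0.10],  # human-looking accounts
    [1.0,   0.80, 1500, 0.20],
    [3.5,   2.00,  400, 0.30],
    [180.0, 0.05,   20, 0.95],  # high-volume, young, reshare-heavy accounts
    [250.0, 0.02,   10, 0.99],
    [90.0,  0.10,   35, 0.90],
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = human, 1 = automated (toy labels)

# Standardize features, then fit a logistic regression that can emit a
# probability-style score for unseen accounts.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

def automation_probability(posts_per_day, follower_ratio, age_days, reshare_frac):
    """Return the model's estimated probability that an account is automated."""
    features = np.array([[posts_per_day, follower_ratio, age_days, reshare_frac]])
    return float(model.predict_proba(features)[0, 1])

print(automation_probability(120.0, 0.05, 15, 0.97))  # expected: close to 1.0
print(automation_probability(1.5, 1.20, 1200, 0.15))  # expected: close to 0.0
```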

Watch the full segment here.

L to R: Jovan Kurbalija, Lorna McGregor, Andy O’Connell, Peter Micek, Kit Walsh.

The fourth panel was titled “Protecting Privacy, Free Expression, and Democracy” and featured a multi-stakeholder set of speakers representing intergovernmental organizations, the private sector, and civil society. The panel was moderated by Lorna McGregor, Professor of Law and Director of the Project on Human Rights, Big Data, and Technology at the University of Essex.

Jovan Kurbalija, Executive Director of the UN Secretary-General’s High-Level Panel on Digital Cooperation, framed current discussions on technology and society in the context of past interpretations of power and surveillance. He encouraged all stakeholders to balance the pursuit of progress with the ‘right to be imperfect’ — to choose to be inefficient in ways that make us human. Peter Micek, General Counsel at Access Now, reiterated that the human rights framework is the appropriate backstop for the field of AI, pointing out that ethics alone cannot provide a cross-national policy vehicle. He also advocated for the inclusion of ‘little tech’ in the conversation as a distinct category of private-sector actors. Andy O’Connell, Head of Content Distribution and Algorithm Policy at Facebook, described three areas of the company’s approach to improving its content ranking and news feed policies: transparency, appeals, and consultation. He explained the steps the company has taken in each category, ranging from publishing Facebook’s internal content moderation standards to a planned multi-stakeholder external oversight board. Finally, Kit Walsh, Senior Staff Attorney at the Electronic Frontier Foundation, talked about the obligations of democratic institutions in regard to using algorithmic tools and the process of building new institutions better suited to the questions these technologies produce. She noted that governments purchase and use AI tools without transparency, analysis, or public comment on the policies embedded within those tools. The opacity of black-box algorithms, she said, directly contravenes fundamental rights, particularly given their broad impact.

Watch the full segment here.

L to R: Jason Pielemeier, Kathy Baxter, Chloe Poynton, Steve Crown, Ebele Okobi, Jamila Smith-Loud.

The penultimate panel, titled “Human Rights by Design: Private Sector Responsibilities,” covered a variety of private-sector perspectives on AI. The panel was moderated by Jason Pielemeier, Policy Director at the Global Network Initiative.

Kathy Baxter, Architect of Ethical AI at Salesforce, pointed out the challenges stemming from the proliferation of ethical AI tools and the uncertain metrics of success in implementing ethical frameworks. To address this issue, Salesforce ran a survey of companies in the Bay Area and found large gaps between companies that had only created ethical tools and those that had implemented them and measured their effectiveness. Chloe Poynton, Principal at Article One Advisors, and Steve Crown, Vice President and Deputy General Counsel for Human Rights at Microsoft, described the process and implications of conducting Microsoft’s human rights impact assessment of AI — the first of its kind. Jamila Smith-Loud, User Researcher for Trust and Safety at Google, discussed Google’s AI Principles, the ‘red lines’ of AI applications that Google will not pursue, and the partnerships and diverse inputs that define Google’s approach to developing its ethical framework. Finally, Ebele Okobi, Director of Public Policy for Africa, Middle East and Turkey at Facebook, underscored how Facebook’s participatory approach to public consultation was designed to contrast with the exclusionary colonial context in which human rights norms were first developed. She described the rules governing Facebook’s requests for input, including prioritizing civil society groups and maintaining awareness of how much the usage of technology varies, even across a single region.

Watch the full segment here.

L to R: Hon. Mariano-Florentino Cuéllar, Lynne Parker, Michael Brown, Edward Santow, Dunja Mijatović, Henri Verdier, Casper Klynge.

The event closed with a panel moderated by the Hon. Mariano-Florentino Cuéllar, tackling “Government Regulation, National Strategies, and the Geopolitics of AI.”

Casper Klynge, the Tech Ambassador of Denmark to Silicon Valley, described the role of his office as a bridge between government and industry, spurred by the fact that governments are trailing behind the development of new technologies. This, combined with the potential for AI to alter the balance of geopolitics, creates a need to work more closely with the companies that design these tools. Henri Verdier, Ambassador for Digital Affairs of France, reemphasized that the AI arms race is led by a small number of large powers, wresting the power to contribute to the future of AI policy from the hands of ordinary citizens. Dunja Mijatović, Commissioner for Human Rights at the Council of Europe, called attention to countries that undercut the principles enshrined in the international human rights framework in their approach to digital technology. Edward Santow, Human Rights Commissioner at the Australian Human Rights Commission, described the results of his public consultation in Australia, which revealed that citizens are increasingly aware of the human rights at stake in the development of AI. Michael Brown, Director of the Defense Innovation Unit at the U.S. Department of Defense, contrasted the approach of liberal democracies to amplifying awareness of the harmful uses of AI with the top-down approach of non-democratic regimes, some of which integrate all sectors in pursuit of AI as a tool to enable the ruling party to stay in power. Finally, Lynne Parker, Assistant Director for Artificial Intelligence at the U.S. White House Office of Science & Technology Policy, closed the panel with a call for nuance, enumerating some of the positive uses of AI and describing the steps taken across government agencies to execute on President Trump’s Executive Order on Maintaining American Leadership in AI.

Watch the full segment here.

The “Future of Human-Centered AI” event was organized in partnership with the following co-sponsors: Access Now, ARTICLE 19, Global Partners Digital, Global Network Initiative, National Democratic Institute, Stanford AI Lab (SAIL), Stanford AI and Law Society (SAILS), Stanford CS+Social Good, Stanford Digital Civil Society Lab, UN High-Level Panel on Digital Cooperation, UN Office of the High Commissioner for Human Rights (OHCHR), University of Essex Human Rights, Big Data and Technology Project, WSD Handa Center for Human Rights and International Justice, and XPRIZE.

For future news and notifications about upcoming events, please subscribe to GDPi’s mailing list.

Stanford’s GDPi is a multi-stakeholder collaboration hub to inspire policy innovations that enhance freedom, security, & trust in the global digital ecosystem.