Human-Centered AI: Building Trust, Democracy and Human Rights by Design

An overview of Stanford’s Global Digital Policy Incubator and the XPRIZE Foundation’s June 11th event

Stanford GDPi · Jul 9, 2018


by Eileen Donahoe, Executive Director, Global Digital Policy Incubator @FSI @CDDRL @Stanford_GDPi

On June 11, 2018, Stanford’s Global Digital Policy Incubator (GDPi) organized a one-day conference in partnership with the XPRIZE Foundation entitled Human-Centered AI: Building Trust, Democracy and Human Rights by Design. The conference included a series of cross-sector conversations about potential benefits and risks of AI.

In her opening remarks, GDPi’s Executive Director Eileen Donahoe discussed the implications of AI for democratic governance and the enjoyment of human rights. She also highlighted the need for cross-sector dialogue and the importance of applying the existing human rights framework in the development of human-centered AI.

You can read her full remarks below and watch the video here.

We’re at an inflection point when it comes to AI’s impact on society.

As we have quietly shifted to be a data-driven society, AI has become seamlessly intertwined with many dimensions of our lives. New AI applications are coming as fast as advances in research: AI already shapes how humans interface with technology, with government, with information, and with each other.

We know some kind of strange societal transformation has happened when our computers are asking us to prove to them that we are not the robots.

But what lies ahead will be even more profound, in both positive and negative directions. AI has been likened in its potential to the discovery of fire or electricity. While the dramatic opportunities that could flow from AI are almost unfathomable — so are the risks.

Our focus today will be on the ramifications of widespread deployment of AI for democratic governance and the enjoyment of human rights. Our core challenge will be to find ways to capitalize on the vast potential of AI for the benefit of humanity, while also protecting human beings and humanity as a whole from the downside risks. The purpose of this program is to help figure out how to do that.

Speakers in today’s program will focus on several urgent concerns related to the impact of AI on human rights and democracy:

  • the future of work
  • the risk of AI reinforcing bias and discrimination
  • loss of trust in digital information and disinformation undermining democracy
  • what should constitute the basis of trust in AI-dependent governance decisions

I want to set the stage here by briefly touching on three big questions:

  1. What is human-centered AI?
  2. How does the field of ethics relate to “human-centered AI”?
  3. How can the existing “human rights” framework guide the beneficial application & development of AI?

1. What is human-centered AI?

Human-centered AI has been embraced as an important organizing principle for development and application of AI. It was recently identified by Stanford’s President Tessier-Lavigne and Provost Drell as a new strategic priority.

But “human-centered AI” means many things to many people.

First off — just the “AI” part itself is a multi-faceted concept. For today’s purposes, we’re using “AI” to refer to the full spectrum of different data-driven processes and intelligences: from automated and algorithmic decision-making; to “supervised AI” applied in specific realms; to “unsupervised” machine learning and deep learning that mimics biological neural networks; to even more aspirational artificial general intelligence or “AGI.”

Our primary focus today will be on the “human-centered” part. Our priority will be to get our heads around what we mean when we say that we want to reinforce a “human-centered” approach to development and application of AI.

Here, I want to start by citing Stanford Associate Professor of Computer Science Fei-Fei Li who recently published a New York Times op-ed entitled: “How to Make A.I. That’s Good for People.” She identified three goals that should guide and form the basis of human-centered AI:

Goal 1: reflect the depth that characterizes human intelligence.

Goal 2: enhance human capability, not replace it.

Goal 3: be guided by concern about effects on humans.

From my vantage point, Goal 3 — concern about the effects on humans — is the heart of the matter.

Goal 1 — making AI more human-like in its intelligence — is essentially a technological task. The core point I would make about Goal 1 is that it should never be decoupled from Goal 3. Moving to make AI more “human-like” without considering the effects on humans might not actually be human-centered in the long game.

In a related vein, efforts to make AI “friendlier” to humans or to “democratize AI” by making it more widely available both sound good. But “friendly” and widely available AI could be dangerous to humans if their development or application is decoupled from an assessment of their effects on humans.

Eileen Donahoe at the Human-Centered AI event at Stanford University

Goal 2 — enhancing humans, not just replacing humans — will require much more creative thinking, especially with regard to employment, meaningful work and wealth distribution. Tim O’Reilly will develop this strand in his keynote very shortly.

But another type of human displacement must be addressed under Goal 2: the displacement of humans from governance responsibility and accountability. Humans are effectively taken “out of the loop” in governance when public sector entities rely on AI to make governance decisions that affect people’s rights, but the basis of those decisions is opaque to the governing authorities who rely upon them.

A key theme we will emphasize today is that human-centered AI will require new thinking about democratic accountability for data-driven machine-based governance decisions, as well as richer development of the concepts of algorithmic scrutability and interpretability for governance actors.

The bottom line with regard to all these goals for human-centered AI is that human beings provide the point of reference and basis for evaluating the development and application of AI.

This leads to questions about how human-centered AI connects with the field of ethics.

2. How does the field of ethics relate to “human-centered AI”?

As I currently see it, ethical debate around AI has three primary entry points:

  • Questions about whether AI systems should be seen as ethical agents or be granted “legal personhood.”
  • Questions about whether humans have ethical responsibilities toward AI/robots.
  • Questions about the ethical responsibilities of people who design, develop or apply AI.

This third strand of ethical inquiry is our primary focus and joins up directly with human-centered AI.

There are a range of new “ethics-based” initiatives out there where technologists are trying to articulate the responsibilities of people who design, develop, and deploy AI in products, processes, or policies:

The Asilomar AI Principles; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; The Partnership on AI to Benefit People and Society; OpenAI; Microsoft’s AI principles of fairness, accountability, transparency and ethics; and, just announced on Thursday, Google’s principles for responsible development of AI.

They use slightly different terminology, but all revolve around some variation of the concept that AI should incorporate “human values,” reinforce “human dignity,” or benefit human beings and humanity. To date, most of these initiatives remain at a relatively high level of abstraction, so it’s hard to know what they might actually require in practice.

This leads to my final practical point about the role our existing universal human rights framework can play in the emerging field of human-centered AI.

3. How can the existing “human rights” framework guide the beneficial application and development of AI?

In her op-ed, Fei-Fei Li said younger AI enthusiasts may be surprised to learn that principles at the heart of today’s deep learning algorithms stretch back to neuroscientific research done more than 60 years ago. In a parallel way, the roots of today’s human-centered AI movement reach back to the Universal Declaration of Human Rights drafted in the aftermath of World War II, and to the body of international human rights law developed in the 70 years since.

We do not need to reinvent the wheel or retreat to vague ideas about the need for ethical reflection: the human rights framework can do a lot of work for us.

Part of the value of the human rights framework stems from the fact that these norms already exist under international human rights law. The texts that make up this body of law have been negotiated and agreed to internationally. Human rights norms are particularly well suited for our globalized, digitized context, specifically because of their universal applicability.

Furthermore, the universal human rights framework provides a rich, practical basis for an evaluation of the effects of AI on humans across a whole spectrum of rights: ranging from concrete concerns about the right to work; to more abstract concerns about the right to privacy; to civic and political concerns about how algorithms shape access to information and free expression; to governance concerns about fairness, nondiscrimination and equal protection.

The framework also provides a basis for evaluating AI applications and whether AI is being applied for good ends — like fulfillment of the UN Sustainable Development Goals, as opposed to more ominous ends, like the development of fully autonomous weapons that have the potential to go rogue.

If you leave here today with only one take-away, I hope it is this: our existing human rights framework is an invaluable lens through which to assess the effects of AI on human beings and humanity.

Let me close by saying development of the field of human-centered AI will not be solely — or even primarily — a technology challenge. It will require insight and wisdom from fields beyond computer science and much more cross-sector, cross-cultural and cross-disciplinary conversation.

We need to mainstream this conversation so that the public is engaged and human-centered AI becomes everyone’s business.

This will be a strategic priority for Stanford’s Global Digital Policy Incubator going forward. Today’s program is just one step in what we hope will be a much longer collaborative effort with all of you, to help make sure the benefits of AI are spread widely and this technology fulfills its potential to serve humanity.

Zeid Ra’ad Al Hussein, Sam Altman, and Eileen Donahoe at the Human-Centered AI event at Stanford University

Following the opening remarks, Zeid Ra’ad Al Hussein, United Nations High Commissioner for Human Rights, and Sam Altman, President of Y Combinator and Co-Chairman of OpenAI, explored the potential benefits and adverse impacts of both existing machine learning processes and artificial general intelligence.

Moderated by Eileen Donahoe, the conversation ranged from the consequences of AI for human employment and the concept of universal basic income, to the importance of a common lexicon between technologists and human rights experts. The two panelists also discussed the human rights implications of AI in developing countries and integrating human rights norms with the day-to-day work of technologists.

You can listen to the full conversation here.

Next, in a keynote entitled: “What’s the Future of Human Work with AI?” Tim O’Reilly, Founder and CEO of O’Reilly Media, Inc., explored the future of work and opportunities for human-machine partnerships in the workforce.

Following the keynote, the first panel, titled “What is Human-Centered Design of AI?” was moderated by Amir Banifatemi, XPRIZE Group Lead, AI and Frontier Technologies. The panel explored the concept of human-centered design of AI. Rochael Adranly, General Counsel of IDEO, introduced the idea of “augmented intelligence” in product design. Jess Holbrook, UX Manager of Google People + AI Research, introduced Google’s new tools and initiatives aimed at minimizing bias and discrimination in datasets. The panel concluded with Deb Roy, Founder and CEO of Cortico, who explained the application of machine learning and network science in media analytics.

Watch the full conversation here.

Rochael Adranly, Amir Banifatemi, Deb Roy, and Jess Holbrook at the Human-Centered AI event

The next panel, entitled “Rebuilding Trust in Digital Information,” centered on the trustworthiness of news in the digital information ecosystem. The panel was moderated by Paula Goldman, Vice President and Founding Head of the Tech and Society Solutions Lab at Omidyar Network. Panelists included Tessa Lyons, Product Manager at Facebook News Feed; Richard Gingras, Vice President of News, Google; Steve Crown, Vice President and Deputy General Counsel of Microsoft; and Sally Lehrman, Director of the Trust Project. Each panelist explored the role of digital platforms in providing trustworthy information to citizens.

Watch the video of their full conversation here.

The final panel of the day, entitled “Trust and Human-Centered Technology,” was moderated by Larry Diamond, Principal Investigator of GDPi and Senior Fellow at the Hoover Institution and the Freeman Spogli Institute for International Studies. The conversation focused on topics related to “the attention economy,” how to define a “healthy” internet, how to build human norms into robots, and “Gen Z’s” relationship to digital technology. Tristan Harris, Co-Founder and Executive Director of Center for Humane Technology; Mitchell Baker, Executive Chairwoman of the Mozilla Foundation; Bertram Malle, Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University; and Roberta Katz, Senior Research Scholar at Stanford University contributed to this panel.

Watch the video of their full conversation here.

This event concluded with three breakout sessions focusing on “Building Democracy & Human Rights by Design.” Daniel Cass, Director of Silicon Valley Initiative, Amnesty International; Zvika Krieger, Head of Technology Policy and Partnerships, the World Economic Forum; and Kip Wainscott, Senior Advisor (Silicon Valley), The National Democratic Institute led the breakout sessions.

For future news and notifications about upcoming events, please subscribe to our newsletter.


The Global Digital Policy Incubator is a collaboration hub for the development of norms and laws to enhance freedom and trust in the global digital ecosystem.