Q&A: Jessica Fjeld on a New Berkman Klein Study of AI Ethical Principles

Author discusses new paper Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI

Berkman Klein Center
Berkman Klein Center Collection

--

Over the past several years, a number of companies, organizations, and governments have produced or endorsed principles documents for artificial intelligence. The proliferation of these documents inspired a Berkman Klein research team to delve into the details and map their findings, which illustrate convergence around eight specific themes.

The Principled AI visualization is arranged like a wheel. Each document is represented by a spoke of that wheel and labeled with the sponsoring actors, date, and place of origin. Designed by Arushi Singh and Melissa Axelrod.

With a sample of documents and support from staff and students in BKC’s Cyberlaw Clinic, the team analyzed 35 principles documents from around the world; their findings are published in the latest report in the BKC research series: Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI.

Principled AI joins BKC’s ongoing work on the Ethics and Governance of AI, which spans issues including algorithms and justice, harmful speech, global governance and inclusion, and more.

We spoke with Jessica Fjeld, the lead author on Principled AI, Assistant Director and Clinical Instructor at the Berkman Klein Center’s Cyberlaw Clinic, and Lecturer on Law at Harvard Law School, to learn more about the research.

What motivated you to start this work?

In early 2018, I felt like I was hearing about a new set of AI principles every other day. They were coming in waves, and I was collecting them in a clutter of open browser tabs and printouts. The experience was frustrating — as I skimmed each new document, I could recognize that there was some recurring vocabulary, but how similar was it really? And what were the big outliers? I felt the need for some kind of yardstick pretty viscerally.

At the same time, colleagues at Berkman Klein had just published Artificial Intelligence & Human Rights: Opportunities & Risks, and I was considering a follow-up project that would map the intersecting rights-based and ethics-based conversations about the future of AI. Nele Achten, a Berkman Klein affiliate and PhD in Law candidate at the University of Exeter whom I was lucky enough to be able to bring on as a research assistant, suggested that we combine the two projects and map AI principles. The success of this project is attributable in large part to the incredible team of researchers, writers, and designers — Nele, of course, and Adam Nagy, Hannah Hilligoss, Madhu Srikumar, Melissa Axelrod, Arushi Singh, Maia Levy Daniel, Josh Feldman, Sally Kagay, and Justina He.

What was the most surprising part of Principled AI?

The most surprising thing was that, even though the conversation around ethical and rights-respecting AI is pretty new, there’s significant agreement about what it means. I came into this project expecting to find a lot more divergence than convergence — in fact, early drafts of the white paper had “divergence” in the title too, until we realized that the data just didn’t bear it out. Our headline finding is the existence of eight key themes — Privacy, Accountability, Safety and Security, Transparency and Explainability, Fairness and Non-discrimination, Human Control of Technology, Professional Responsibility, and Promotion of Human Values. Many of these themes are represented in close to 100% of the documents we looked at, and the more recent principles documents tend to hit them all.

We’d arrived at these themes before the OECD and G20 principles were published, and when I first read them, it was incredible to see the same concepts reflected back at us. Same goes for other recent principles, including the binding principles that the White House Office of Science and Technology Policy published for comment last week.

Your team asked for feedback when you published the rough draft of Principled AI in June. Could you describe the initial reception to the research, and how feedback shaped the end product?

We were floored by the enthusiasm with which the draft data visualization was met. While we were designing the project and pulling together the dataset over the spring, we were aware that the topic was newsy, but our work was pretty heads-down, deep in developing the dataset that underlies our findings, hand-coding and reviewing all of these documents. Initially, we were also working on a natural language processing analysis that we were as hopeful about as the data visualization, but in the end the corpus was too small to reveal much beyond the fact that rights-based documents use the phrase “human rights” a lot.

Once we got Arushi Singh involved as our designer, though, and started to see iterations of the visualization, I knew we were on to something. It was incredible to be able to see an overview of all the documents, side-by-side, all at once. Ultimately, that’s what I’d been looking for personally, to get over the muddle of documents. The data visualization functions as a heuristic.

We shared the draft data visualization at RightsCon in Tunisia and had a crowd around the poster every time it was on display. The tweet I sent out about it got more likes and retweets than anything else I’d ever posted, probably by an order of magnitude. What that showed me is that we weren’t the only ones who had this frustration with the flood of disparate principles documents. Other people also wanted a heuristic to guide them, to find themes, to spot ways forward, and the visualization filled that role.

People could submit feedback through a form on our website, and we got quite a bit, which we integrated into our revisions to the project. Some of it pertained to individual documents we included, excluded, or swapped out — for example, we had included the wrong document from Microsoft in the draft version, a summary rather than their full principles. Other feedback helped with the readability of the visualization itself.

We also benefited immensely from the Berkman Klein community’s engagement with the project, from fellows like Momin Malik, who advised us on our sampling method, to support from Urs Gasser, the executive director, whose work on digital constitutionalism was a significant inspiration. And of course our project coordinators, Hannah Hilligoss (now a student at Harvard Law School) and Adam Nagy were just invaluable at every stage, from conception to the finish line.

From your perspective as a lawyer, what are your key takeaways from this research?

I’m interested in the principles themselves, but legal practice is all about translating high-level rules to practice, and so the question of the implementation of these principles documents is really compelling for me. A significant part of my practice at the Cyberlaw Clinic is technology and human rights work, and it’s interesting to consider how international human rights law will be relevant as a governance regime.

Before we’d done the research, I had expected that the government documents we looked at would be more likely to reference human rights, and the private sector documents less so. I was exactly wrong about that: only a handful of the government principles mention human rights, and almost all the private sector ones do.

What’s behind that? I can’t explain why the government documents omit the reference, but I think that many tech companies have stepped up their game with regard to human rights in the last ten years. A number of the companies whose principles we looked at are members of the Global Network Initiative, a multistakeholder organization devoted to protecting freedom of expression and privacy in the sector. I think human rights jurisprudence has the potential to be really useful as we move forward, both in terms of accountability and remedy for harms, and more generally for its examples in balancing competing high-level values, like privacy and security.

How did working on Principled AI change your thinking about AI principles?

The Western-centrism of my view on the governance of AI was challenged. Through this research, I got to range a bit farther than I likely otherwise would have. We wanted a dataset that was diverse across many axes, including stakeholder, time, and geography. Our core research team enabled that, with folks from four continents and native speakers of multiple languages, including Spanish and Chinese.

Concerns about where the key decisions about AI governance are being made are readily apparent. For example, the Japanese document cautions against governance systems that will concentrate political power where the most significant AI resources are located, and the UK’s document talks about wanting to ensure sufficient transparency to allow the public to influence the direction of the technology that’s ultimately going to impact their lives. And of course many of the private sector entities are operating globally, so we have to consider with their AI principles, just as we do with their content policies, how cultural assumptions may play into their architecture and where they may or may not serve local ends.

Where should we go from here?

I see this research as a platform for others to build on, and I hope that it will be useful to other scholars, to people working on the front lines designing and implementing socially beneficial AI, and to policymakers. We’re happy to entertain requests to access our dataset.

More broadly at the Berkman Klein Center, we’re excited to continue engaging with private-sector firms, government, civil society, and others around AI governance. We’ve been privileged to help a number of organizations troubleshoot the legal, social, and technical challenges that this technology raises, including consulting with the UN High-Level Committee on Programmes and the OECD’s Expert Group on AI. We’ll continue to build on that work in 2020 and beyond. As with past technologies that had the potential to transform aspects of human life and society, the key to success will be collaborative, multistakeholder efforts.

By Carolyn Schmitt

