AI Advance

Berkman Klein Center
Berkman Klein Center Collection
11 min read · Jun 1, 2017

A Community Convening at Harvard Law School to advance the Ethics and Governance of Artificial Intelligence Initiative

By David Talbot

Together with the MIT Media Lab, the Berkman Klein Center for Internet & Society at Harvard University recently launched the Ethics and Governance of Artificial Intelligence Initiative, and the two now serve as anchor institutions for a fund aimed at developing activities, research, and tools to ensure that fast-advancing AI and related technologies serve the public interest.

Amber Case, a user experience designer and Berkman Klein fellow, and David Cox, a Harvard computational neuroscientist and faculty associate at Berkman Klein, confer at the AI Advance event.

On May 15, 2017, the Berkman Klein Center, in collaboration with the Media Lab, hosted “AI Advance,” a convening of 120 community members, including faculty, researchers, students, and fellows, to reflect on and engage with the societal challenges of AI and related technologies, forge collaborations, and begin designing research programs.

The event (see the agenda here) included 10 lightning talks on diverse topics, two sessions on the current concern over how AI systems may be implicated in the spread of false information (including “fake news”), several participant-initiated breakout sessions, and a presentation about an online dashboard being built at the Berkman Klein Center to explore research projects and people active in select AI-related areas.

During AI Advance, the term “AI” was used in a broad sense to describe complex decision-making algorithms fueled by public and private datasets, rather than as a strict computer science term of art. Such technologies are widely employed today to guide corporate processes (such as insurance and financial risk analysis) and some public ones (such as criminal sentencing in some jurisdictions); fully automated versions govern how news, updates, and advertising are presented to hundreds of millions of people via online social networks.

The event was meant as a community kickoff. The photos, videos, and summaries below capture some of the topics, concerns, and hopes expressed by attendees about the ethics and governance of AI. (The event adds to earlier AI ethics conversations led by Joi Ito, director of the MIT Media Lab, and Iyad Rahwan, associate professor at the Media Lab and an architect of Moral Machine, an interactive tool for exploring the moral decisions that autonomous vehicle AI systems might need to make.) Not all members of the expanding AI community were able to attend. Still, the event offered a window into some of the relevant activities across the Berkman Klein Center and Media Lab that inform the joint AI initiative.

John Palfrey (right), a former executive director of the Berkman Klein Center and chair of the Knight Foundation’s Board of Trustees, with Nadya Peek, an MIT postdoc working on digital fabrication and AI in industrial automation.

“Part of our job here is to figure out: what’s our research agenda? What are a series of research questions that will help us understand what we should really care about in AI? And where we should put a thumb on the scale to affect the outcome?”

John Palfrey

Vikash Mansinghka, a research scientist who heads the Probabilistic Computing Project at MIT’s Computer Science and Artificial Intelligence Laboratory, describes key AI challenges.

“My biggest insight from the AI Advance event was that legal scholars have some of the intellectual tools we need in AI research to define the goals and values we wish AI systems to implement.”

Vikash Mansinghka

Jonathan Zittrain, professor of law at Harvard Law School and co-founder of the Berkman Klein Center, kicks off the AI Advance event.

In the opening segments, big questions were addressed: What are the most important research questions? What is different about AI ethics and governance research? What does AI mean for our understanding of human autonomy? Jonathan Zittrain summarized in this post the major questions posed by using automated processes for decision-making. In greeting AI Advance attendees, he cited as an example current concerns over bias in the AI systems already in use by some U.S. judges to guide parole or sentencing decisions. AI systems can and do reflect human biases, but also have the potential to inject fairness into such processes. Studying these systems will require working with and drawing from industry. And it is likely that early focus areas will include criminal justice, algorithms behind news and information feeds on social media platforms, autonomous vehicles, and international governance.

“There is a real challenge: a lot of the resources, a lot of the work, a lot of the data, a lot of the trade-secret algorithms that might interest us … are in the possession of private companies. We’re not starting from the framework of ‘the free and open Internet — and what’s going to enclose it?’ that might have been the case in 1997. It’s a little bit of a different configuration now — and much more complicated for those of us who want to assess it.”

Jonathan Zittrain

Urs Gasser, executive director of the Berkman Klein Center, reported back on a breakout session devoted to advancing empirical research in AI.

Urs Gasser, executive director of the Berkman Klein Center, noted that the event was an important start in bringing together diverse experts and practitioners to choose research topics, develop methodologies, and design AI systems to ensure fairness in implementation. In a subsequent post he reflected on the event and proposed a draft research taxonomy.

“To me this element of speed and scale is perhaps something that changes the AI ethics and governance debate, when compared for instance to 15 or 20 years of conversations about ‘Internet,’ and ‘Internet governance,’ and ‘Internet ethics.’ ”

Urs Gasser

A COMMUNITY DIALOGUE

Throughout the day, several ideas and themes emerged from the community discussion. Some were specific and topical (such as the spread of “fake news” on social media platforms) and others reflected core concerns across disciplines (such as how we democratize AI systems). The topics described below reflect just a few of the areas that the two centers will likely explore in the coming months.

DEMOCRATIZING AI

AI Advance participants noted that many AI systems are built by private entities for private use and profit. Smaller companies often license these systems; public-sector users rely on large private vendors. These practices raise hard questions about how to democratize AI for the public good.

Wendy Seltzer (right), strategy lead and counsel to the World Wide Web Consortium, confers with David Weinberger, author and senior researcher at Berkman Klein.

“How do we enable individuals to have access to the same powerful AI tools [used by corporations]? How do we get competition among the models? How do we share data without running into privacy risks? How do we open up some of these opportunities?”

Wendy Seltzer

Nadya Peek discusses her thoughts on AI democratization.

“Are there places where researchers like us can exercise insight into the design of infrastructural systems such that they… become democratic platforms? If we all have to buy into [Google’s] TensorFlow, what does that mean for the politics of the infrastructure?”

Nadya Peek

Leah Plunkett, an associate professor at the University of New Hampshire Law School and Berkman Klein fellow, describes her hopes for how AI can help public school students.

SERVING THE PUBLIC GOOD

How might such democratization work in practice? At certain points during the day, participants discussed possible applications for deploying AI to advance social justice. Leah Plunkett reminded the group that each year in the United States, tens of thousands of young people are expelled from public schools (sometimes for trivial infractions, such as looking at their smartphones too many times). These young people lack any constitutional right to an alternative form of public education. She posed one possible research and community-building challenge: can AI-enabled education technologies give these young people new ways to learn, so that expulsion does not cut off their education altogether?

Lionel Brossi (left), a Berkman Klein fellow and assistant professor at the Institute of Communication and Image of the University of Chile, and Sandra Cortesi (right), director of the Youth and Media project at the Berkman Klein Center.

ADDRESSING BIAS AND FOSTERING INCLUSION

Throughout the day, attendees raised concerns over human biases creeping into AI systems as a result of data input choices and other factors. Lionel Brossi said that one way to counter bias in AI systems is to require greater inclusiveness in designing them. This means not only assembling diverse teams of developers but also making sure that diverse perspectives are brought to bear on every question, from whether the underlying datasets are accurate and complete to how the AI systems are ultimately deployed and to what ends.

These issues are at the core of projects underway at the Berkman Klein Center, including the development of an Inclusion Lab, which will explore how AI systems can help create a more diverse and inclusive society; and a Challenges Forum, a planned series of convenings on specific topics. These issues will also be further discussed at a Global Network of Internet & Society Centers event.

Sasha Costanza-Chock, an assistant professor of civic media at MIT’s Comparative Media Studies/Writing program, discusses what it might mean to make a real effort at eliminating bias from AI systems.

IDENTIFYING A NEED TO EDUCATE LAW ENFORCEMENT

One near-term effort is likely to involve exploration of how AI systems are used in criminal justice. Chris Bavitz, who directs the Cyberlaw Clinic at Harvard Law School, pointed to the need to educate law enforcement officials about how AI technologies are changing the nature of law enforcement and administration of justice. In the United States, education of state attorneys general is particularly important, because legal protections that govern most non-medical personal data — which might be used or exposed by an AI system — are embodied in state, not federal, laws. In addition, AI is increasingly being brought to bear in sentencing, bail, and parole decisions handled at the state level. (This topic may be taken up at a Challenges Forum event.)

Naz Modirzadeh discusses AI and warfare during one of eight breakout sessions led by AI Advance participants.

Taken to its logical extreme, AI could utterly change the nature of war. Naz Modirzadeh, a professor at Harvard Law School and director of the HLS Program on International Law and Armed Conflict, pointed out that today’s growing global arsenal of unmanned (but still largely human-controlled) drones shows that we may not be far from the creation of fully autonomous warfighting machines, raising new and pressing questions on how international law can and should address such possibilities. The Human Rights Program at Harvard Law School is working with Human Rights Watch on a campaign seeking a preemptive ban on the development or use of fully autonomous weapons.

Madeleine Elish is a researcher at Data & Society, an institute focused on the social and cultural issues of data-centric technology.

CORPORATE RESPONSIBILITY

Madeleine Elish, a researcher at Data & Society, described how she and Tim Hwang explored corporate attitudes toward designing AI systems in a research publication called “An AI Pattern Language,” which offers a model for future cross-cutting academic-industry research.

The findings were striking: For example, many industry practitioners interviewed by Elish and Hwang said they certainly took into consideration “social implications” of their work, but understood this as a customer-relationship concern. They wanted customers to feel that any AI technology had good intentions, protected privacy, and was reliable. But those same respondents tended to draw a blank when asked whose job it was to think about broader impacts, such as how their technologies might introduce inequality or perpetuate racial bias. At the biggest companies, the report noted, “the responsibility always seems to lie with another department.” At startup companies, resources were limited. The publication went on: “Individuals in very large corporations and small start-ups both expressed that the size of their company limited their role in thinking through the social impact of their work.”

Yarden Katz (left), a fellow in systems biology at Harvard Medical School and a fellow at the Berkman Klein Center, discusses AI’s complex semantics.

SEMANTICS AND ENGAGING WITH THE PUBLIC

Yarden Katz discussed AI’s complex semantics and its “diverse, messy, and philosophical” history. He said it might be best to approach the topic of AI ethics and governance not by starting from a technology definition but through core questions: Who controls the code, frameworks, and data? Is any given system open and inspectable, or not? Who sets the metrics — and who benefits from them? Katz suggested that one role for the Berkman Klein Center community would be to curb the hype and misinformation about AI that often appear in the mainstream media, and to help journalists clarify the issues, limitations, and biases in AI technologies.

“FAKE NEWS” AND ITS RELATIONSHIP TO AI

One of the deep dives at AI Advance focused on the problem of “fake news” in the context of algorithms. During the 2016 presidential campaign, a great deal of disinformation, including false news stories, flooded social networks. One distribution mechanism was automated accounts, known as bots, retweeting links to false news stories. Jonathan Zittrain posed a possible research question: would it be useful to create a tool that informs people whether a human or a “bot” is distributing a given piece of information online? But more research is needed to determine to what extent AI systems are a core problem.

Ethan Zuckerman, director of the Center for Civic Media at the MIT Media Lab, discusses the unknowns about “fake news” and the larger issues of AI and bias.

“We have a question about news quality and ‘fake news’ where we can’t decide if this is an AI problem or a human problem.”

Ethan Zuckerman

Mary Madden, a researcher at Data & Society, discusses how AI systems are sometimes used in college admissions offices to examine applicants’ social media data.

“One future challenge with the evolution of AI technologies is that we are not only in a world where we’re going to have AI assisting our decision-making, but increasingly making decisions for us and evolving in a way in which we may completely cut the human out of that decision-making process.”

Mary Madden

Ryan Budish, a senior researcher at the Berkman Klein Center, introduces AI Compass, an interactive tool under development.

BUILDING A TOOL FOR COLLABORATION

Exploring questions of governance and ethics in AI will require making connections among diverse groups. The event showcased a not-yet public prototype tool called AI Compass under development at the Berkman Klein Center. AI Compass will highlight key resources, people, organizations, and events across a handful of select focus areas. Ryan Budish and Amy Zhang demonstrated the tool, which includes a dashboard that features curated networks of people, resources, topics, and events. At the outset, AI Compass includes four categories or “areas” (AI + Governance, AI + Youth, AI + Art & Design, and AI + Asia), with each area developed by a curator. Each person, event, resource, and organization is represented as an individual piece of content; the dashboard draws on neural network imagery to show how all of these pieces link together.
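
For readers who want to picture the structure behind the dashboard, here is a minimal sketch of such a content graph in TypeScript. It assumes a simple node-and-link model; the type and field names are illustrative assumptions only and do not reflect the actual AI Compass schema.

```typescript
// Hypothetical sketch of a content graph like the one AI Compass describes:
// typed content pieces (people, events, resources, organizations) grouped
// into curated areas and connected by links. Names are illustrative only.

type ContentKind = "person" | "event" | "resource" | "organization";

type Area = "AI + Governance" | "AI + Youth" | "AI + Art & Design" | "AI + Asia";

interface ContentPiece {
  id: string;
  kind: ContentKind;
  title: string;
  areas: Area[];   // each area is developed by a curator
  curator: string;
}

interface Link {
  from: string;      // id of the source ContentPiece
  to: string;        // id of the target ContentPiece
  relation: string;  // e.g., "authored", "hosted", "cites"
}

// The dashboard would render pieces as nodes and links as edges,
// echoing the neural-network imagery mentioned above.
interface ContentGraph {
  nodes: ContentPiece[];
  edges: Link[];
}

const graph: ContentGraph = { nodes: [], edges: [] };
```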

Skhathisomusa Mthembu is a physics master’s student at the University of the Witwatersrand in Johannesburg and an employee of CERN, the European Organization for Nuclear Research.

“I think it’s very important to dive in and simplify. We need to look for places where we can find miniature problems that have questions and answers. We need to identify first principles.”

Skhathisomusa Mthembu

Even though AI Advance — at which Amar Ashar, a senior researcher at the Berkman Klein Center, served as master of ceremonies — was meant mainly as a community kickoff, the topics raised during the main sessions suggested themes for targeted research and other efforts to be defined in the coming months. The conversations expanded in eight breakout sessions: Algorithmic Fairness, Justice, and Accountability; AI and Public Narratives; AI and Young People; AI at the Digital-Physical Interface; Advancing Empirical Research on AI; Autonomous Weapons and AI; Emotions, Social Relations, and AI; and AI and Design.

At a wrap-up session, David Weinberger, a member of the Fellowship Advisory Board at the Berkman Klein Center (who live-blogged much of the event), and Charles Nesson, professor of law at Harvard Law School and a founder of the Berkman Klein Center, observed that attendees had gravitated toward issues of fairness and representation in AI and had articulated a strong need to forge collaborations, perform cross-cutting research, and communicate these issues and findings to industry, government, and the public.

Follow the work of the Ethics and Governance of Artificial Intelligence Initiative by visiting our website, following us on Twitter, or subscribing to our newsletter.

At each table, attendees were invited to build their own Lego systems, but none achieved full autonomy.
Some of the AI Advance participants gathered afterward in a courtyard at Harvard Law School.

Photos and video by Daniel Dennis Jones, with contributions from David Talbot. All contents are licensed under a Creative Commons Attribution 3.0 Unported license.
