Urs Gasser
May 18, 2017

Earlier this week, the Berkman Klein Center in collaboration with the MIT Media Lab hosted an AI Advance, a community event to kick off our collaboration as part of the recently launched Ethics and Governance of Artificial Intelligence initiative (learn more about our current work plan here). In a fascinating and diverse series of lightning talks, faculty, fellows, students, and staff from both Centers presented their ongoing work and addressed major research questions around AI and related technologies. A few highlights are available here, here, and here (more to come soon).

In the latter part of the event, participants engaged in a series of self-organized breakout sessions, focusing on an equally rich set of issues related to the ethics and governance of AI. The breakout session I attended focused on whether the emerging field of AI ethics and governance is ready for empirical research (with a focus on the social sciences). Acknowledging the long history of AI in computer science, the group discussed whether current research efforts should primarily focus on “theory,” given the relatively nascent state of the field from the perspective of interdisciplinary research.

Overall, the group agreed that such research would indeed be beneficial. Research on a new technology helps people to, first, understand the varied relations others might have to the technology and, second, build awareness by spotlighting the technology and calling attention to its implications. Therefore, at least certain forms of empirical research aimed at informing the discussion about the ethical uses and emerging forms of governance of AI are possible and desirable. Some plans are already underway.

From there, participants debated the conditions that must be met in order to make empirical research on AI practically possible and intellectually meaningful. The design of these investigations would build upon insights and experiences gained from behavioral research on previous technological phenomena.

In the discussion, the following contours of a draft taxonomy started to emerge. It is, of course, a work-in-progress, but might stimulate further conversations about the role of research and methods for understanding AI governance:

  • Topics and subjects: The discussion revealed a broad set of research questions related to AI which might benefit from empirical research, including questions related to the human use of AI-based technical systems (e.g. how do we interact with AI-powered personal assistants?), human attitudes and values towards such technologies (e.g. what are young users’ attitudes towards autonomous vehicles?), the design and performance of AI-based systems (e.g. how do AI-based systems change our news ecosystem?), and questions about the functioning of different forms of governance (e.g. how do machine learning approaches aimed at moderating harmful speech online perform?), among others.
  • Methods: The group discussed available and suitable empirical research methods. We examined the pre-conditions required for quantitative methods such as surveys or large-scale data analyses, and also discussed the horizon-setting benefits of qualitative methods like focus group interviews, as well as observational and experimental methods. We addressed issues that can arise when subjects and researchers do not share a common understanding or vocabulary for AI, which could impact research methods (e.g. how do we ask survey questions relevant to AI ethics if awareness of the technology is low?).
  • Data and infrastructure: Researchers also addressed a growing problem: much of the data that might fuel interesting empirical research and inform the debates about the ethics of AI is proprietary information controlled by commercial entities — particularly large technology companies with their own research teams — and is often not easily accessible to academic researchers. More broadly, the group discussed the necessary infrastructure (including procedures such as IRB review) and, relatedly, the resources needed to enable cutting-edge empirical research into AI.
  • Mindset: The participants in the breakout session also highlighted that empirical research in the rapidly evolving field of AI and related technologies requires a particular mindset among researchers, one combining passion and interest with the ability to deal with high degrees of ambiguity and uncertainty, as many of the research questions and problems are currently not well specified.

A final point concerned the speed and global scale of AI development and adoption. Both will increase the pressure on researchers and academic research institutions to work strategically and collectively on the issues at hand if the fruits of their efforts are to remain relevant and inform the ways in which users, companies, regulators, and society interact with AI systems. The Berkman Klein Center and MIT Media Lab collaboration seeks to contribute to the formation of such a platform for collaborative, empirical research, learning, and experimentation.


News, ideas, and goings-on from the Media Lab community

Written by Urs Gasser
Executive Director @BKCHarvard — Disclosures https://cyber.harvard.edu/about/support and http://hls.harvard.edu/faculty/directory/10298/Gasser
