Helping Global Policymakers Navigate AI’s Challenges and Opportunities

Berkman Klein Center
Berkman Klein Center Collection
6 min read · Aug 13, 2018

by Ryan Budish

In 2017, United Nations Secretary-General António Guterres noted the difficult challenge that policymakers, particularly those in the Global South, face with respect to AI. He said that “The implications for development are enormous. Developing countries can gain from the benefits of AI, but they also face the highest risk of being left behind.” For example, in Nigeria doctors are using AI to help reduce the incidence of birth asphyxia, a leading cause of under-five deaths in Africa, and yet at the same time there are real concerns about AI’s impact on rising unemployment and the influence that Google, China, and others are exerting across the Global South.

AI technologies are raising complex social, political, technological, economic, and ethical questions. And policymakers around the world increasingly find themselves at the center of these discussions. Policymakers have no choice but to grapple with these emerging challenges, and this requires careful application of the various tools in their governance toolbox. Our work, presented recently at the ITU’s Global Symposium for Regulators, is designed to support Global South policymakers and regulators as they apply this toolbox and try to navigate the difficult tradeoffs with the use of AI technologies while building the capacity necessary to more effectively answer these complex questions.

The pace of change with AI is staggering, with more and more real-world applications every day. And each new application creates unanswered questions faster than we can answer them. There is no checklist that policymakers can follow that will unlock AI’s tremendous potential while limiting its risks. That’s not to say there is an absence of ideas; to the contrary, there’s an ever-growing list of opinions on what to do. Some experts have called for the formation of new regulatory agencies that specialize in AI or robotics. Some governments and international organizations have started to write non-binding standards to govern the creation and use of AI. Some companies have released their own ethical guidelines constraining their own use of AI. And multistakeholder partnerships are currently formulating their own best practices for the development and deployment of AI. Policymakers, however, need more than theories, as they feel an urgency to take steps to help their citizens navigate and thrive in an increasingly complex space.

Although it would be convenient if there were a turnkey approach to guide policymakers in addressing AI’s numerous challenges, a series of structural challenges makes it difficult at this point, and perhaps even misguided, to develop and apply a single approach to governing AI:

  1. Missing information: For most aspects of AI we currently lack a solid empirical understanding of the short- and long-term consequences of the technologies. In many cases, reliable metrics to track societal impacts beyond unemployment and GDP are not readily available — limiting the possibilities for evidence-based policymaking.
  2. Unresolved foundational questions: In many cases we are still working to identify the “right” questions to ask. For example, when we look at issues like disinformation or hate speech online, we do not yet even fully understand the scope of the problem, let alone have easily deployed policy solutions.
  3. Existing frameworks: AI is being deployed in areas such as medicine, automobiles, telecommunications, and education, all spaces with complex local, national, and international legal and regulatory structures already in place. As AI technology continues to rapidly evolve, we’re only just beginning to see where existing frameworks are adequate, where they need tweaking, and where entirely new approaches are needed.

Despite these challenges, in a time of hype, hysteria, and hope about these new technologies, policymakers need to respond. In the absence of ready-made policy approaches, what are they to do? As part of the Ethics and Governance of AI Initiative at the Berkman Klein Center, this past year we’ve focused on three key areas of research in order to help policymakers:

  1. Bridging information gaps: In the AI ethics and governance space, there’s a lot of focus (and rightly so) on infusing more technical knowledge into policymaking. For much of the last year, however, our focus has been on listening. To that end, we’ve convened several dialogues with policymakers and other key stakeholders from around the world with meetings in the US, Seoul, China, Hong Kong, Switzerland, and Italy. This dialogue series was at its core a listening tour — a chance for us to learn from policymakers about the issues that they are concerned about with respect to AI, about the unique political, social, economic, and cultural factors that will shape the impact of AI within their region, and about the kinds of resources that they need at this important juncture. Beyond listening, it has enabled us to build a more complete picture of existing frameworks we can build upon, and areas where further research or action is needed.
  2. Building toward evidence-based decisionmaking: The absence of robust data on the societal impacts of AI technologies is a major limiting factor for policymakers. Through a variety of measures, we’ve been working to help improve the quality, consistency, diversity, and interoperability of available data. For example, through a partnership with Tsinghua University and the AI Index, we have advanced the interoperability between Chinese and US measurements of AI development and impact. We also led the “Data for Good” track at the ITU’s AI for Good Symposium this spring, creating a framework for a Data Commons that would improve the availability of high-quality, open datasets. And we’ve developed a human rights framework for AI impacts to improve foreign policy decisionmaking.
  3. Empowering local scholarship: Part of the challenge of AI governance is that so many of the challenges and opportunities are inherently local; a one-size-fits-all approach simply does not reflect those local realities. So as part of our listening tour, we’ve also supported the development of local work on AI governance. For example, we advised Singapore Management University School of Law in the launch of their Center for AI & Data Governance. And our work on global diversity and inclusion in AI has led to proposals for a fund to support AI research in the Global South.

At the ITU’s 2018 Global Symposium for Regulators, we had the opportunity to share our discussion paper, “Setting the Stage for AI Governance: Interfaces, Infrastructures, and Institutions for Policymakers and Regulators,” part of the AI for Development paper series, with ICT regulators, many of them from the Global South. This paper builds directly on the listening and dialogues we have had over the past year at the Berkman Klein Center under the Ethics and Governance of Artificial Intelligence Initiative.

What we heard through our dialogues was that many regulators did not want or need a grand governance framework for AI — they wanted practical steps to begin to close some of the many gaps they see and that worry them. They want to know how to close the knowledge gaps between industry and policymakers, so that they can make more informed decisions that unlock the potential of AI and mitigate its risks. They want to know how they can begin to close the inclusion gaps, so that technologies used in their cities and countries are not just designed by and for people in Silicon Valley and Beijing, and instead reflect their own unique populations and cultures. And they want to know how they can begin to close the competitiveness gaps, so their own citizens can begin to develop the next generation of AI technologies.

The discussion draft presents a starting place for policymakers and regulators as they try to close those gaps, offering concrete steps and tools that can help. It was a great honor to share that work with senior policymakers from around the world, and we look forward to further iterating on these recommendations as we continue to listen and learn from policymakers working to ensure that the promise of AI is fully realized in every community, not just a few.
