Embracing AI for the Social Good

The Imperative to Engage Globally

Berkman Klein Center
Berkman Klein Center Collection
15 min read · Dec 14, 2018


by Jenna Sherman

On behalf of the Global Governance of Inclusion team at the Berkman Klein Center for Internet & Society, a project of the Ethics and Governance of AI Initiative jointly anchored by the Berkman Klein Center and the MIT Media Lab

AI technologies and the data they rely on can become imbued with the perspectives and values of their creators across their life-cycle, from development to usage. In many instances, as a rich body of research illustrates, creators and primary users of AI-based systems come from more economically developed communities in the Global North. Countless anecdotal examples suggest that this imbalance often results in an overrepresentation of Western, white, or otherwise exclusionary viewpoints within AI-based technologies. The outcome of this dynamic is AI-based technologies that risk perpetuating or worsening biases.

Three small snapshots that emerged from conversations with global stakeholders over the past year illustrate how automated technologies and data can exacerbate divides across sectors and AI development stages, and how decision makers face difficult tradeoffs when balancing AI’s promises and risks.

Snapshot #1

Our conversations with humanitarian and civil rights organizations have highlighted the challenges that governments and industry face regarding the use of AI-based technologies to aid in humanitarian crises. On the one hand, quickly collecting reliable information in the wake of a humanitarian crisis can be critical for saving lives; on the other, that same urgency often means that developing a deep understanding of a population’s needs and implementing sound privacy safeguards are neglected. This neglect poses security risks for some of the most vulnerable groups, many of whom have expressed concerns about the collection of personal information such as data on health and gender-based violence.

Snapshot #2

On the development side, we’ve learned from users of AI globally how designing AI for seemingly beneficial purposes can lead to inadvertent consequences, particularly when AI technologies built on, or trained with data from, one region or population are applied in other contexts. In one salient case, Turkish phrases describing a person as having a certain profession were typed into Google Translate’s Turkish-English service. Despite Turkish being a gender-neutral language, the algorithm assigned gendered pronouns to the English translations based on its gendered and biased training corpus. The results were overtly sexist, including phrases such as “he is a soldier” and “she is a teacher,” in lieu of simply using the pronoun “they.” (Google addressed this issue in December 2018 by offering both masculine and feminine translations, though no gender-neutral option.)

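This failure mode is straightforward to probe programmatically. Below is a minimal sketch of such a bias probe in Python, under stated assumptions: the `translate` callable is hypothetical, a stand-in for whatever Turkish-English translation API one has access to (the original describes Google Translate’s web interface, not an API), and the profession list is illustrative. The idea is simply to feed the model gender-neutral Turkish templates of the form “o bir …” (“they are a …”) and tally which English pronoun comes back.

```python
# Minimal sketch of a gender-bias probe for Turkish-English machine
# translation. The `translate` callable is hypothetical: pass in a wrapper
# around whatever translation API is available. No specific service or
# client library is assumed here.

from collections import Counter
from typing import Callable, Iterable

# The Turkish third-person pronoun "o" is gender-neutral: "o bir asker"
# literally reads "they are a soldier". Professions shown are illustrative.
PROFESSIONS = ["asker", "doktor", "öğretmen", "hemşire", "mühendis", "aşçı"]


def probe_pronoun_bias(
    translate: Callable[[str], str],
    professions: Iterable[str] = PROFESSIONS,
) -> Counter:
    """Tally which English pronoun the model chooses for each neutral input."""
    counts: Counter = Counter()
    for profession in professions:
        english = translate(f"o bir {profession}").strip().lower()
        first_word = english.split()[0] if english else ""
        counts[first_word if first_word in {"he", "she", "they"} else "other"] += 1
    return counts


# Hypothetical usage, wrapping some real Turkish-to-English service:
#   counts = probe_pronoun_bias(lambda text: my_client.translate(text, "tr", "en"))
#   print(counts)
# A systematic split (e.g., "he is a soldier" but "she is a teacher")
# indicates gendered associations absorbed from the training corpus.
```

Any output other than a uniform “they” signals that the model is resolving a gender ambiguity the source language never introduced.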

Snapshot #3

Finally, regarding deployment, New Zealand’s recent audit of its own distributed and diverse use of AI across government highlights the challenge of coordinating procurement across large organizations. From this audit, New Zealand developed a set of recommendations regarding the adoption of AI technologies for the consideration of government agencies and ministers. Their recommendations underscore the complexity of procurement processes and the risks of deploying AI systems into environments that can impact millions of people and their basic rights.

Without truly diverse and inclusive development, we risk building AI that rapidly scales narrow and exclusive perspectives. And the harms can be significant. The three cases above seem almost innocuous compared to some examples from recent years in which AI-based technologies, and the policies meant to regulate them, developed by a company or government in one region have had serious consequences in another or among specific populations. Facebook’s filtering algorithms, for instance, have recently come under fire for fueling hate crimes against refugees in Germany and fomenting state violence against the Rohingya community in Myanmar, grave consequences that the social media giant failed to anticipate. Though AI policies and strategies emerging from both the public and private sectors, such as Facebook’s, typically intend to govern AI in a manner that reaps the technology’s benefits, understandings of what constitutes a “benefit” vary by context, making adverse outcomes challenging to predict.

These examples are just a few of many that demonstrate why work on AI must be inclusive across populations, geographic boundaries, and cultural contexts. This insight motivated us to center a significant portion of Berkman Klein’s efforts in our joint Ethics and Governance of AI Initiative on global governance and inclusion. Our work on this track commenced with a dialogue series, a “listening tour” with international policymakers and other stakeholders from civil society, industry, and beyond, primarily in Asia and Europe as well as the US: South Korea, China, Hong Kong, Switzerland, Italy, Thailand, and Singapore. As Berkman Klein Center Assistant Research Director Ryan Budish described over the summer, this dialogue series gave us the opportunity to speak directly with policymakers and institutions across the world, to learn about the unique challenges of AI governance they face within their countries and regions and how they are working to address them, with the ultimate goal of distilling key lessons on how to promote AI for the social good.

From our consultations, we observed three key challenges to realizing a globally beneficial AI future:

1. How do we encourage inclusion and discourage exclusion at each stage of technical development for AI systems?

The promise of AI-based technologies is enormous, with benefits ranging from efficiency gains in manufacturing to unprecedented improvements in healthcare. However, many participants in our conversations across geographic regions emphasized the concern that AI, as it is currently being developed and deployed, primarily benefits a select few over, or at the expense of, other and often more vulnerable populations. This inequity presents challenges and potential risks at least as staggering as the promises, and it must be addressed.

Of particular concern are uneven access to the benefits of AI and related technologies, and greater exposure to their harms, for countries in the Global South and already-marginalized populations. AI technologies that are beneficial at the aggregate level can often have unintended negative impacts on groups including urban and rural poor communities, women, youth, incarcerated or formerly incarcerated individuals, the LGBTQ community, and ethnic and racial minorities, particularly those at the intersection of these marginalized identities. Despite a general awareness of these inclusion risks across the globe, regions and jurisdictions differ in their understanding of the negative consequences of AI from a social justice perspective. The prevailing narrative is more often than not one of optimism and innovation, without the requisite skepticism to balance it.

2. How can we close the information gaps between AI specialists and the broader public who experience the impacts of AI on the ground?

Our conversations have also revealed at least two major forms of information asymmetry at play in the AI sphere from a global inclusion perspective. First, there is an asymmetry of knowledge and resources between those developing AI and those using it (or impacted by its use). Keeping pace with AI’s rapid developments requires technical expertise, time, and financial resources, and is helped by proximity to the companies developing AI technologies. This privileges organizations and governments concentrated in pockets of greater resources, primarily globally influential entities along the US coasts and in Western Europe. This knowledge, and its concentration, grows in parallel with AI and its capabilities, making it difficult to close the information gap in real time. The concentration also creates a significant blind spot: those developing AI technologies are often unaware of existing and potential adverse impacts on people in other parts of the world. Bridging this form of asymmetry allows knowledge to flow in both directions and can help promote greater foresight of global risks.

Second, there is an asymmetry of access to AI-based technologies and their benefits, especially for marginalized populations. This form of asymmetry leads to an unequal distribution of the benefits of AI, running counter to basic notions of social justice. For example, autonomous vehicles (AVs), which rely heavily on maps, are unable to drive through traditionally unmapped communities such as the favelas of Rio de Janeiro, so these communities miss out on the potential benefits of AVs. At the same time, some of the negative consequences of AI deployment fall on exactly those who benefit the least and are routinely left behind in technological development; the automation of labor, for instance, typically arrives first and predominantly in low-wage, manual jobs.

3. How can we equip policymakers with the right tools and knowledge to create an environment that encourages innovation, while discouraging adverse consequences?

Policymakers face difficult problems and choices when addressing AI systems, particularly as it remains unclear what the actual impact of AI-based technologies will be on the economy and society at large. The technical complexity of the technologies, the lack of sound social impact metrics, and the pervasiveness of AI in all corners of society make questions about costs, benefits, and scale of impact much more difficult to answer, particularly for policy experts without a strong grounding in the technology.

Moreover, the discussion about the normative implications and tradeoffs of AI remains at an early stage in many areas. Often, the ethical and policy questions emerge slowly, well after the rapid proliferation of AI technologies in everyday life. This dynamic is exacerbated when policymakers feel they must choose between innovation and risk prevention, a false dichotomy. Even in cases where there is consensus on values and policy objectives, it is often unclear which policy instruments are best suited for addressing the varying challenges within and across the broad range of contexts in which AI systems are deployed.

These three basic questions have emerged again and again in our conversations about the governance of AI around the world. Guided by the goals of the Ethics and Governance of AI Initiative, which at the highest level seeks to ensure that technologies of automation and machine learning are researched, developed, and deployed in ways that advance social values of fairness, human autonomy, and justice, we turned to three intersecting approaches aimed at addressing these challenges. These approaches do not comprehensively address every aspect of the questions described above, but they cover important aspects where we believed we could make a substantial contribution.

1. Encouraging inclusion by connecting stakeholders and marginalized voices working on the development, deployment, use, and evaluation of ethical AI

Creating a network of stakeholders, with an emphasis on marginalized voices, that takes an active part in the entire AI lifecycle, from development to evaluation, can help ensure that diverse voices and views are considered at every stage. In prioritizing inclusivity and internationalization via a sustained, strategic, and networked approach, we’ve sought to develop consensus around AI’s potential consequences, uplift the work of practitioners from the Global South and other groups underrepresented in the AI sphere, and foster global dialogue and collaboration to reach collective solutions. This network cannot be constructed overnight; it has entailed ongoing work with partners around the world committed to ensuring that AI benefits all people, not just a select few.

The first culmination of these efforts was the Global Symposium on AI and Inclusion, which sought to provide a space and platform for a diverse set of global stakeholders working on issues of AI and inclusion. The symposium convened over 170 individuals from more than 40 countries to identify and explore the complex issues surrounding the geographic divide between the Global North and the Global South in the development, design, and application of AI-based technologies.

This group helped us to build out a roadmap for those working at the intersections of AI, ethics, governance, and inclusion. We’ve since used this roadmap to guide contributions to the IDRC’s AI and Human Development Report and our AI education programming in Latin America. Almost exactly one year after the symposium and the creation of the roadmap, we are now in the early stages of building an Inclusion Fund, together with the IDRC and the Global Network of Centers (NoC), that seeks to empower Global South collaborators with resources and support to help mitigate the risks of AI and unlock its potential.

These are just a few examples among many (including close work with Global South institutions such as the Digital Asia Hub and ITS Rio) of how we’ve worked directly with global partners to promote more inclusive AI development and use. We believe that every new node added to this expanding international network increases the potential for social innovation in AI and helps set the stage for a more united, global effort to advance inclusive AI.

2. Bolstering support of international AI centers and academic networks to build additional capacity and foster knowledge sharing across geographies

In order to bridge the information asymmetries between AI specialists and the broader public, our work must go beyond private convenings and internal reports; what’s needed are institutions dedicated to advancing ethical AI. Incubating global entities engaged with the ethics and governance of AI helps bridge both forms of information asymmetry: asymmetry of knowledge and asymmetry of access. To address the first, our work builds capacity, increases resources, and transfers institutional knowledge. To address the second, our work encourages greater public access to AI and knowledge of its impacts: supporting new institutions that will research or work with new demographics and tailor efforts accordingly, promoting the growth of centers that will support governance models for the public good, and engendering public knowledge through new endeavors such as convenings and educational materials developed and steered by local institutions.

To that end, we’ve played a primary role in helping to incubate two newly minted centers focused on studying the ethical and regulatory implications of emerging technologies, including AI: the Singapore Management University (SMU) School of Law’s Center for AI & Data Governance, and the new branch of the Digital Asia Hub (DAH) in Bangkok, part of the expanding Global Network of Internet & Society Centers.

Our work with SMU began with advising and supporting the team in the lead-up to a government-funded grant to study AI. The collaboration evolved into a joint workshop to formally launch the center, and it is entering a next phase of collaborative projects analyzing AI governance globally and regionally, with a specific focus on Singapore’s approach to risk management. The end goal of this joint endeavor is to provide an operationalized example of an AI strategy that can inform regulatory approaches in other parts of the world.

Our work with the DAH dates back to the Hub’s launch two years ago. The DAH has become a key center of the NoC, as well as a grantee of the Ethics and Governance of AI Fund. We’ve engaged with DAH on a number of fronts, co-hosting events such as the APAC-US AI Workshop held in Hong Kong as part of our Global AI Dialogue Series, the AI in Asia workshop on Ethics, Safety and Societal Impact, and the Disinformation Summer School following a four-day Disinformation and Discourse conference. In August 2018, we helped DAH launch a new multi-stakeholder branch in Bangkok, which focuses on the interplay of digital technologies with the economy and society in Thailand and Southeast Asia and furthers research on the impacts of AI in lower-income areas of the Asia-Pacific region.

In addition to helping incubate new centers, we’ve augmented the capacity and resources of existing institutions by sharing educational materials, holding trainings, and supporting local scholarship. One such effort involved creating and circulating open-access learning materials on the areas of young people’s lives shaped by the digital world, aimed at educators seeking to build digital literacy among youth. These resources have reached over 350,000 youth and educators from India to Nigeria via partner institutions such as the Internet and Mobile Association of India and the Co-Creation Hub Nigeria.

These new centers will serve as vital junctions in a growing regional and international network of institutions working on the ethics and governance of AI in the Global South, a network that efforts such as the open-access youth education materials help to further support. In carrying out this capacity building, we actively engaged in conversations about the inclusive development and use of AI both at home and internationally, in order to bridge information gaps across geographies and among a diverse set of stakeholders, including the general public. We will continue to support similar entities and efforts, and to build out our work with the centers we’ve already connected with.

3. Filling in information gaps for policymakers around the world by providing tailored research, guides, and networks to help balance AI’s benefits and risks

AI has significant potential for tremendous societal benefits, but only so long as risks can be managed and mitigated through informed, evidence-based decision making. However, what we heard from decision makers around the world was that the more complex the technology, the more challenging it can be for policymakers and technical experts to predict its impacts and the potential ramifications of policy interventions. This challenge is particularly acute for AI, whose complexity can confound even the most technically adept.

In order to address this challenge, we began working with policymakers to bridge those gaps by offering concrete steps, frameworks, and tools that can help regulators as they begin to make decisions about AI governance. Our recommendations were grounded in key takeaways from the listening tour, as well as in Berkman Klein’s past work on internet and technology policy. These lessons helped us begin clearing away the hype and uncertainty surrounding AI technologies and the fictitious tradeoff between innovation and risk prevention, point to where and how policy can be most useful, and work to set normative standards for sound AI impact metrics so that policies can be continually adapted based on AI’s societal imprint.

To do so, we’ve focused on a few crucial research areas to help inform policymakers in making practical, evidence-based decisions, applying what we learned throughout the listening tour. Recognizing the need for robust AI governance and ethics frameworks, we’ve engaged with public sector stakeholders at both regional and international levels.

Upon the invitation of German Chancellor Angela Merkel, we engaged with the German Federal Digital Council, on which Berkman Klein Center Executive Director Urs Gasser serves as a member. The AI global governance and inclusion team, supported by NoC experts, recently reviewed Germany’s provisional Keypoints AI Document, which outlined the main elements of the German national AI strategy, and provided extensive feedback to the Council. Many of these inputs have been incorporated into the recently finalized and announced German AI strategy.

At an international level, we are offering expert support to the United Nations as it prepares its draft action plan for the UN High-Level Committee on Programmes (HLCP). This Action Plan, following the model that the UN has used to address issues such as gender equality and cybersecurity, is intended to offer a blueprint for UN agencies as they consider how to deploy AI technologies within their own operations. Based on a first draft we developed, the Action Plan helps UN agencies identify areas for capacity development, including human capacity and infrastructure, that can enable them to use AI in ways that advance the sustainable development goals. Our ongoing efforts with the UN HLCP follow work conducted with the UN International Telecommunication Union (ITU) on a discussion paper that helps set the stage for AI governance. The paper both draws on existing frameworks for governing emerging technologies and posits new ones, to help position decision makers around the world to better understand and address the challenges that AI poses.

In working with a range of global policymakers such as the ITU, we have used our platform and position of privilege to help close information gaps, augment the inclusivity of perspectives across AI design, development, deployment, and use, and ultimately serve the public interest. Given the unique role of universities as hubs for global perspectives, the production of knowledge, and leadership training, and under the umbrella mission of promoting the public good, we see an opportunity, and in fact an obligation, to play a leadership role in this field by helping policymakers address some of the most pressing and vexing technical and societal challenges of our day. What we’ve learned over 20 years of studying the impact of digital technologies serves as one model for helping to mitigate challenges related to AI globally before they occur.

Next Steps across the three impact areas and beyond

Moving forward, we intend to build on the connections we’ve made, further existing work, and embark on new endeavors along the triad of impact areas outlined above. Across these three channels, our work for the coming months centers on the following priority areas:

  • Exploring risk-based approaches to AI governance
  • Developing operationalizable guides for policymakers making decisions on the regulation of AI
  • Helping to build out an AI & Inclusion Fund to empower Global South collaborators
  • Tracking sound and measurable AI social impact metrics

The outputs from these activities are intended both as standalone products and as contributions to a virtual policy playbook. This “playbook” will synthesize impact-oriented materials created throughout the Global Governance and Inclusion track, including our work on risk-management models, the UN AI Action Plan, and our work on tracking social impact metrics, in order to provide policymakers globally with consolidated resources for operationalizing effective, inclusion-oriented AI governance models.

By doing so, and through the combined efforts detailed above, our goal is to help deter biased and exclusionary impacts of AI (such as Google Translate stereotypically gendering professions when translating from gender-neutral Turkish into English) and instead encourage the development, deployment, and use of AI-based technologies that yield inclusive and globally aware outcomes the first time around.

We are seeking collaborators, participants, researchers, and other colleagues working on the ethics and governance of AI, particularly from the Global South and underrepresented communities, to further global AI work for the social good. Please reach out to us via email at jsherman@cyber.harvard.edu.
