A panel of five responds to Ansaf Salleb-Aouissi’s keynote speech on the history of AI and building a more inclusive society from a technical perspective.

Charting a Roadmap to Ensure Artificial Intelligence (AI) Benefits All

Berkman Klein Center
Berkman Klein Center Collection
Nov 30, 2017


Berkman Klein Center and the Institute for Technology and Society (ITS) Rio convene international symposium aimed at building capacity and exploring ideas for data democratization and inclusion in the age of AI.

By David Talbot, Levin Kim, Elena Goldstein, and Jenna Sherman (with photos by Bruno de Castro)

AI-based technologies — and the vast datasets that power them — are reshaping a broad range of sectors of the economy and are increasingly affecting the ways in which we live our lives. But to date these systems remain largely the province of a few large companies and powerful nations, raising concerns over how they might exacerbate inequalities and perpetuate bias against underserved and underrepresented populations.

In early November, on behalf of a global group of Internet research centers known as the Global Network of Internet & Society Centers (NoC), the Institute for Technology & Society of Rio de Janeiro and the Berkman Klein Center for Internet & Society at Harvard University co-organized a three-day symposium on these topics in Brazil. The event brought together representatives from academia, advocacy groups, philanthropies, media, policy, and industry from more than 20 nations to start identifying and implementing ways to make the class of technologies broadly termed “AI” more inclusive.

The symposium — attended by about 170 people from countries including Nigeria, Uganda, South Africa, Kenya, Egypt, India, Japan, Turkey, and numerous Latin American and European nations — was intended to build collaborative partnerships and identify research questions as well as action items. These may include efforts to draft a human rights or regulatory framework for AI; define ways to democratize data access and audit algorithms and review their effects; and commit to designing and deploying AI that incorporates the perspectives of traditionally underserved and underrepresented groups, which include urban and rural poor communities, women, youth, LGBTQ individuals, ethnic and racial groups, and people with disabilities.

Carlos Affonso, director of ITS Rio, addresses attendees from the stage.

From a global inclusion perspective, the long-term goal is to ensure that these groups, along with residents of countries with emerging economies, are not relegated to being passive users of AI-based technologies developed in rich nations. Instead, they should be able to actively influence AI development, informing technologies coming out of Western and tech-savvy nations, particularly the United States and China, while advancing AI research and development that addresses problems relevant to their own regions, communities, and cultures. “It is easy to see the challenges and not see the opportunities,” said Carlos Affonso Souza, a professor of law at Universidade do Estado do Rio de Janeiro and a director of ITS Rio. “We want them to become protagonists in the generation and development of technology and AI.”

The event was supported by the Ethics and Governance of Artificial Intelligence Fund, the International Development Research Centre (IDRC), and the Open Society Foundations, in collaboration with Rio de Janeiro’s Museum of Tomorrow, whose neofuturistic structure — built on a pier jutting into Guanabara Bay before the 2016 Olympics — served as the event venue. Rio was itself a fitting setting to discuss AI and associated physical and digital divides: “a city where exorbitant wealth and destitution have long coexisted in stark contrast,” as the New York Times recently put it while chronicling the city’s worsening crime problem.

This report provides a glimpse of the Rio event and a sampling of topics discussed, along with additional material representing the event’s themes.

AI History and Challenges

The symposium spanned three days: Day One centered on building a shared understanding of the complex concepts of both AI and inclusion and their points of intersection; Day Two built on that conceptual foundation by identifying specific opportunities, challenges, and solutions; Day Three emphasized translating the previous two days of discussion into the beginning of an action plan. The symposium also facilitated small-group discussions on more specific topics such as law and governance, design, data and infrastructure, and business models. While most of the event was geared toward attendees from around the world and conducted in English, the symposium also included a public session attended by nearly 400 people, during which different topics (e.g., reimagining a tomorrow, principles of AI, AI and creativity, challenges and opportunities related to AI, youth and the life of tomorrow) were discussed in English and Portuguese.

Participants Chinmayi Arun, Rehema Baguma, Jennie Bernstein, Nnenna Nwakanma, and Kyung-Sin Park speak on advancing equality in the Global South.

A common theme throughout the three days was the importance of including the voices and perspectives of underserved communities, such as young people, different ethnic and racial communities, LGBTQ individuals, and people with lower skill and educational levels. Attendees who contributed information for a visualization of participants’ goals also suggested that the most important ones all involved establishing meaningful global collaborations. (There’s a great need in many regions; for example, Laura Nathalie Hernandez, an attendee who is a PhD candidate from El Salvador, said that in her country the challenges and opportunities of AI and the social effects of technology are not well examined by local institutions, and visas are expensive and difficult for Salvadorans to obtain.)

Early in the event, Ansaf Salleb-Aouissi, a lecturer in computer science at Columbia University, provided a brief history of AI and machine learning and mentioned major applications: recommendation engines for online social network platforms and advertising; facial detection and recognition; and speech recognition. (AI systems are also used to power risk-assessment tools used by banks, insurance companies, and even a growing number of criminal court systems as aids in bail, parole, and sentencing decisions.) With machine learning (ML), computers learn from experience and from the addition of more data. But the underlying datasets are the products of human decisions and curation, and they can encode human biases that later show up in the outputs of AI systems. She provided an alliterative guide to major approaches to making AI more “inclusive”: Develop (empower individuals with AI education); Decipher (provide explanation through understandable models); De-identify (make sure that data is used in ways that protect people’s privacy, including with respect to categories such as race); and DeBias (work to ensure fairness and avoid digital discrimination).
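To make the dataset point concrete, here is a minimal sketch in Python using entirely hypothetical data: a toy “model” that simply fits historical loan decisions reproduces whatever skew those decisions contained, even though no sensitive attribute is ever an explicit input.

```python
# Minimal sketch with hypothetical data: a model fit to biased historical
# decisions reproduces the bias, even though "group" is never an input.
from collections import defaultdict

# Toy loan history. "zip_code" correlates with a demographic group, and
# past approvals were skewed; that skew is baked into the labels.
history = [
    {"zip_code": "A", "income": "high", "approved": 1},
    {"zip_code": "A", "income": "low",  "approved": 1},
    {"zip_code": "B", "income": "high", "approved": 0},
    {"zip_code": "B", "income": "low",  "approved": 0},
]

# "Learning" here is just per-zip-code approval rates -- a stand-in for any
# model that fits whatever patterns its training data contains.
rates = defaultdict(lambda: [0, 0])  # zip code -> [approvals, total]
for row in history:
    rates[row["zip_code"]][0] += row["approved"]
    rates[row["zip_code"]][1] += 1

def predict(zip_code):
    approvals, total = rates[zip_code]
    return approvals / total  # learned probability of approval

# Identical applicants, different zip codes: the model has learned the
# historical skew, not creditworthiness.
print(predict("A"))  # 1.0
print(predict("B"))  # 0.0
```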

Mimi Onuoha, artist and researcher at the Eyebeam Art and Technology Center, moderates a discussion on the design of inclusive algorithms.

Some speakers presented examples in which AI and algorithms had produced plainly biased and unfair results. Lucas Santana, a digital marketing strategist who is director of contexts at Desabafo Social, a nonprofit group based in Salvador, Brazil, that seeks to advance human rights education, described how a recent search for “black baby” on Shutterstock produced pictures of black babies, but a search for just “baby” produced mostly images of white infants, a problem documented here. In a nation like Brazil, which is 64 percent black, such outputs are even more starkly biased than they are in the United States, where Shutterstock is based. Such problems afflict search algorithms and online technologies generally; they reflect the overrepresentation of white, Western, and wealthy perspectives among content creators, who are also the ones who decide how images or videos are labeled (with words) for later detection by search tools.
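A minimal sketch, with hypothetical tags, suggests why keyword search inherits its taggers’ choices: if most uploads depict white infants, and race is tagged only when a subject departs from that assumed default, a query for “baby” skews white while “black baby” behaves as expected.

```python
# Minimal sketch with hypothetical data: keyword search is only as balanced
# as its corpus and the tags contributors chose to apply.
from collections import Counter

stock_photos = (
    [{"subject": "white infant", "tags": {"baby"}}] * 8 +          # the untagged "default"
    [{"subject": "black infant", "tags": {"baby", "black"}}] * 2   # marked with a modifier
)

def search(*terms):
    """Return the subjects of photos whose tags contain every query term."""
    wanted = set(terms)
    return [p["subject"] for p in stock_photos if wanted <= p["tags"]]

print(Counter(search("baby")))
# Counter({'white infant': 8, 'black infant': 2}) -- mirrors the corpus skew
print(Counter(search("baby", "black")))
# Counter({'black infant': 2})
```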

This and other examples show how biases are embedded in the systems that power our online discourse, searches, recommendations, and advertising. (Datasets behind algorithms used to guide offline decisions can pose similar challenges; many of these issues were aired at this Berkman Klein Center convening. The specific issue of their use in the criminal justice system is an early Berkman Klein Center focus, as discussed at this recent event at Harvard Law School.) Speaking at the Rio event, Alison Gillwald, executive director of Research ICT Africa, a regional communication and Internet policy think tank based in Cape Town, South Africa, said that in many ways it was not surprising that these sorts of problems were occurring. “It’s sort of as if we expect certain things to be happening online when they are not even happening offline — human rights, connectivity, access, open government,” she said. Given the scale and speed of AI’s advance, it may not only mirror but exacerbate the structural inequalities that exist globally and locally.

Theoretical Framing and Key Questions

Keynote speaker Nishant Shah, Dean of Research at ArtEZ University of the Arts, speaks on inclusion in the age of AI.

The event included discussion of theoretical framings to help the research community think about the problems of AI and inclusion. Nishant Shah, a professor at Leuphana University in Lüneburg, Germany — and co-founder and director of research at the Centre for Internet and Society in Bangalore — offered a way to frame the overall issue. He started by showing a slide of a T-shirt advertised on Amazon that said: “Keep Calm and Rape a Lot.” The startlingly offensive shirt had been created entirely by an automated algorithm, designed to riff on “Keep Calm and Carry On” (from a British wartime poster) by swapping in new verbs and offering the results for sale without human involvement. The person selling the shirts had not intended to be misogynistic, nor had the algorithm been consciously designed to produce such an offensive product. Clearly, Amazon had not done an adequate job designing and overseeing its systems — but as Shah pointed out, the problem was more fundamentally rooted in a hybrid of human and machine interactions. Abstracting from this idea, Shah urged academics to consider AI and inclusion as interrelated fields. It would be insufficient to think of AI as an entity distinct from humans, with people as separate actors who may or may not succeed in making AI ‘inclusive.’ Rather, Shah argued for an idea of ‘inclusive intelligence’ that considers AI research and inclusion politics in an integrated way.
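The generator behind those shirts was never published; the sketch below is purely illustrative, showing how a template-fill pipeline over an unvetted wordlist, published with no review step, can produce offensive listings automatically, and how cheap the missing safeguard would have been.

```python
# Illustrative sketch only: the real "Keep Calm" shirt generator is not
# public. A naive template-fill over an unvetted wordlist, published with
# no human review, is enough to reproduce the failure Shah described.
import itertools

VERBS = ["carry", "dream", "dance", "rape"]  # scraped wordlist, never vetted
MODIFIERS = ["on", "a lot", "all day"]

BLOCKLIST = {"rape"}  # the missing safeguard: screen terms before publishing

def generate_listings(screened=True):
    for verb, modifier in itertools.product(VERBS, MODIFIERS):
        if screened and verb in BLOCKLIST:
            continue  # one cheap check that was evidently absent
        yield f"Keep Calm and {verb.title()} {modifier.title()}"

print(list(generate_listings(screened=False)))  # includes the offensive shirts
print(list(generate_listings()))                # blocklist applied
```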

Against the backdrop of this theoretical foundation, a number of research questions and action items were also discussed at the event. Felipe Estefan, who manages Latin American investments for the Governance & Citizen Engagement Initiative at Omidyar Network, outlined a Top 10 list of issues for the research and business communities to be talking about. For starters, Estefan said that we should not equate the ease of doing business brought by technological innovation with inclusive economic development. He added that given the unequal distribution of AI’s projected economic impact, the needs of underrepresented populations must be priorities (as opposed to mere edge cases) in the development of AI. To this point, he also said we must incentivize companies that hold significant amounts of data to use it for the public good. He built on this point, adding that there is a clear need to ensure that greater ethical considerations and better governance structures are built into AI and algorithms to minimize unintended negative impacts and bias.

ITS director Ronaldo Lemos addresses the 350 attendees present at the public event held at the end of Day 2.

Top 10 Things to be Talking About Surrounding AI & Inclusive Economic Development

Presented by Felipe Estefan (Omidyar Network)

  1. We should not directly equate the ease of doing business that AI may provide with inclusive economic development.
  2. The economic impact of the data and AI revolutions is not equally distributed by default. We must work to address these inequalities.
  3. The greatest economic opportunities of tomorrow require not just enhancing yesterday but redefining it. AI isn’t solely about optimizing what we did in the past but thinking of what else we can do in the future.
  4. If we pit the drivers of AI against those who would most benefit from it, we will do so at the broader detriment of society. How can we change incentives to avoid this issue?
  5. The current data governance structures benefit those in power. It is those with the resources and ability to leverage AI and data who are most likely to benefit as a result of the increasing pervasiveness of these innovations. How can we introduce greater ethical considerations and better governance into AI and algorithms in order to root out negative consequences and bias?
  6. The future of work needs to look brighter and more inclusive than it does today. As long as a large portion of the population feels like the opportunities and technologies of tomorrow are beyond their reach, we will have failed to fully realize the positive potential of these innovations.
  7. If AI exacerbates, rather than minimizes, the inequities existing in society today, the results could be catastrophic. How can we take a proactive approach to ensuring that AI functions for the public good?
  8. We live in a time in which the failures of unpopular and ineffective political leaders are causing people to question the broader processes and institutions that protect values of good governance and equality. How can AI be applied ethically in a manner in which it can help rebuild the trust between citizens and democratic institutions?
  9. A future in which AI is applied ethically requires collaboration across sectors and stakeholder groups. As such, how can we proactively design collaborative strategies for the application of AI in the pursuit of broader societal benefits?
  10. If we want to ensure that AI does not exacerbate existing power imbalances, it must be made more accessible. How can we redefine the story of AI? How can we de-mystify AI in order to restructure the asymmetry of power?

Research and Action Ideas

Mark Surman, Executive Director of the Mozilla Foundation, presents a case study on the need to build data commons, data coops, and a large-scale AI & Inclusion social movement.

A number of ideas for action and research to address challenges and pursue opportunities were discussed. A central idea is the need to democratize access to existing data repositories and create new open-source datasets. Mark Surman, executive director of the Mozilla Foundation — the nonprofit devoted to internet health — likened the current technology landscape to a form of “colonialism” in that a small number of companies use AI to extract massive amounts of personal data from the rest of us.

Mozilla and the open internet movement are trying to overturn this colonialism — to make the web a shared resource. But it’s not as simple as putting AI-related software into the public domain, because it’s often the datasets, not the software, that empower these few companies. And those datasets are often privately held. “We don’t have the datasets and data infrastructure,” Surman said. “We need experiments in data commons and data coops.” One such experiment is Mozilla’s Project Common Voice, an effort to build open-source voice datasets for use in voice-recognition systems.

Perhaps even more important: new and better data sources are needed, particularly in emerging economies, said Nagla Rizk, professor of economics and director of the Access to Knowledge for Development Center (A2K4D) at the American University in Cairo. At the symposium she made the point that in many parts of the world, data made available by government agencies is frequently generated and released to serve political ends. Funders and academics should focus on “the concept of developing data ourselves, from the ground up,” she said. “The data has to be accurate and presented for what it really is. And if this can be offered as open data, it can be used by and benefit everyone.”

ITS director Ronaldo Lemos speaks at the public event on possible future trajectories of Artificial Intelligence.

Part of the solution can be to draft public-spirited goals for AI and adopt a positive frame of mind about the technology’s potential. Arisa Ema, an assistant professor at the Komaba Organization for Educational Excellence at the University of Tokyo, brought this more optimistic perspective from Japan, a developed nation with a strong culture supporting robotics. The Japanese Society for Artificial Intelligence has come up with ethical guidelines focused on contributing fairness and benefits to society, complying with laws and regulations, and promoting peace and security. She said ethical approaches to the design and use of AI can emerge from the bottom up and in a collaborative fashion.

Additionally, some speakers said it was important to be inclusive in how any new laws or regulations governing AI are adopted. Three years ago Brazil’s president signed into law something called the Marco Civil, an Internet bill of rights that seeks to protect freedoms in the digital age. The law was the brainchild of Ronaldo Lemos and collaborators. The Marco Civil “can be a good inspiration for [regulating] artificial intelligence especially because the process in which it was built was a collaborative process. That is the best way to build public policy regarding technology, when all sectors in society actually have a say in the outcome of regulation,” Lemos said.

The Education & Learning breakout group convenes in the courtyard of the Museum of Tomorrow.

Building the Developer Community and Addressing the Funding Gap

The problem isn’t only AI technology or access to such technology. Participants also discussed the need to improve education in several ways in order to provide fresh ranks of AI developers who are diverse, technically savvy, and mindful of local needs. For example, Kathleen Siminyu, a data scientist at Africa’s Talking, a Kenyan company that provides web APIs that developers can use to access telecommunications functions, explained that girls in Kenya are frequently told to pursue higher education in languages rather than engineering, and often go through high school without ever taking physics or other prerequisites for college engineering degrees.

Once at college, students need improved training in the ethical side of technical fields, explained Victor Akinwande, a software engineer at IBM Research in Rwanda. While computer scientists tend to regard “bias” as an issue of accuracy that needs to be debugged, social scientists tend to regard it as a more complex idea involving many moving pieces; this more robust understanding of bias is needed in university-level engineering curricula, he said. When engineers graduate, funding will be required to help them actually go on to develop local solutions. Siminyu told the story of a peer who is working on natural language processing programs for Swahili, which is spoken by at least 50 million people worldwide and is a national language in Tanzania, Kenya, and the Democratic Republic of Congo. Siminyu said that despite the obvious importance of Swahili, her colleague is unable to pursue Swahili technology development full-time for lack of funds.

Chinmayi Arun, research director at the Center for Communication Governance and assistant professor of law at National Law University in Delhi, moderates the deep dive plenary session on advancing equality in the Global South on Day 1 of the symposium.

Chinmayi Arun, research director at the Center for Communication Governance and assistant professor of law at National Law University in Delhi, said that the only way to effectively solve issues related to AI and inclusion in lower income countries is to build local expertise and capacity. “We have worked on this before in the context of Internet governance,” she said. “When you fund centers in the global south, you are not just parachuting in expertise, but helping the global south build something out that is consistent with their point of view.”

Next Steps

As the symposium drew to a close, participants were encouraged to identify actionable next steps. Cluster groups — consisting of about 10 participants each who met each day for small group discussions — highlighted the need to start mapping conferences and key players related to AI and inclusion, particularly those coming from the Global South. The symposium provided a rich set of research questions and action items that could expand existing networks and inspire new collaborations. Urs Gasser, who as executive director of the Berkman Klein Center co-led the international team that organized the conference, called attention to four key questions: (1) How do we incorporate the perspectives of those who cannot participate in AI development and dialogue? (2) What are appropriate oversight mechanisms, and how can they be implemented to empower people around the world? (3) To what extent should we be looking for technical solutions to social problems? (4) When it comes to inclusion, do we prioritize an individual or an ecosystem-driven approach? Gasser has previously written on the role of law in AI.

Colin Maclay, Executive Director for the USC Annenberg Innovation Lab, moderates the “report back” session where small group moderators shared 2–3 key findings from their sessions.

The Global Symposium on AI & Inclusion demonstrated strong interest in identifying and engaging more critically with the issues of AI and inclusion around the world. As many participants pointed out, it will be necessary to build on the momentum of the networks developed at this event and find ways to make progress on the major research questions and action items discussed. The ideas and insights from the Rio event will directly feed into an action-oriented agenda of the NoC, which comprises more than 80 organizations around the globe. Furthermore, different institutions that attended the symposium will co-host another symposium in January, focusing primarily on how young people interact with and are impacted by AI in areas such as education, health and wellbeing, entertainment, and the future of work.

For more information about the event, contact the Berkman Klein Center’s Youth and Media project director, Sandra Cortesi.

The ITS Rio and Berkman Klein Center organizers join together on stage during the final remarks to close out a successful event.
