10 learnings from considering AI Ethics through global perspectives

By Sampriti Saxena and Stefaan G. Verhulst

Photo: Unsplash/Nastya Dulhiier is licensed under CC0

Artificial Intelligence (AI) technologies have the potential to solve the world’s biggest challenges. However, they also come with certain risks to individuals and groups. As these technologies become more prevalent around the world, we need to consider the ethical ramifications of AI use to identify and rectify potential harms. Equally, we need to consider the various associated issues from a global perspective, not assuming that a single approach will satisfy different cultural and societal expectations.

In February 2021, The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Technical University of Munich’s (TUM) Institute for Ethics in Artificial Intelligence (IEAI) launched AI Ethics: Global Perspectives. This free, online course is designed for a global audience with the goal of raising awareness of the societal impacts of AI and associated technologies. Capturing the breadth and depth of the ongoing interdisciplinary discourse on the ethical implications of data and artificial intelligence, the course gives individuals and institutions a broader understanding of how to pursue responsible AI use.

“AI will not fly without ethics.” — Christoph Lütge, Course Lead

A year and a half later, the course has grown to 38 modules, contributed by 40 faculty members representing over 20 countries. Our conversations with faculty members and our experiences with the course modules have yielded a wealth of knowledge about AI ethics. In keeping with the values of openness and transparency that underlie the course, we summarized these insights into ten learnings to share with a broader audience. In what follows, we outline our key lessons from experts around the world.

Our Ten Learnings:

  1. Broaden the Conversation
  2. The Public as a Stakeholder
  3. Centering Diversity and Inclusion in Ethics
  4. Building Effective Systems of Accountability
  5. Establishing Trust
  6. Ask the Right Questions
  7. The Role of Independent Research
  8. Humans at the Center
  9. Our Shared Responsibility
  10. The Challenge and Potential for a Global Framework

Broaden the Conversation

Each of the course faculty members has echoed this sentiment in one way or another, either through their respective modules or during the Global Perspectives on AI Ethics panel discussions hosted by The GovLab. The key insight is that, if we are to build fair and sustainable AI systems, the AI ethics discourse needs to expand to bring in a wider range of perspectives and expertise from around the world. While platforms like this course create spaces for the necessary knowledge exchanges, the ecosystem and dominant discourse will also need to broaden to incorporate multi-disciplinary, multi-level learnings and perspectives. This is especially important when considering the increasingly diverse applications of AI technologies in the field.

The Public as a Stakeholder

As both a beneficiary and end-user of AI technologies, the public ought to be at the center of the AI ecosystem. However, currently, the public is not sufficiently involved in conversations around the ethical impacts of these technologies. In order to bridge the divide between the public and other actors in the ecosystem, it is important to raise public awareness of the benefits and risks of AI technologies. Through public education initiatives and public consultations, we can bridge information asymmetries and close the digital divide to empower the public to become active stakeholders in conversations around a technology that has the potential to transform our world.

Centering Diversity and Inclusion in Ethics

Given the near-endless uses of AI technologies across diverse contexts and use cases, it is critical that conversations around ethical AI reflect this diversity as well. It is therefore important to move away from a one-size-fits-all model of AI ethics, and instead embrace the ecosystem’s diversity by bringing more perspectives across cultures and industries to the conversation. This will not only help broaden the conversation but will also lead to more accessible systems in the long run.

Building Effective Systems of Accountability

Our faculty members are unanimous in their call for more effective systems of accountability. If AI technologies are to be ethically adopted at scale, a proactive (rather than a reactive) approach to regulating these technologies is preferred. Most lecturers advocate for a combination of frameworks and legislative changes that can transform principles into policies and practices. For systems of accountability to keep pace with the ever-changing technology ecosystem, it is important that they implement near-continuous assessments of not only the ecosystem but also their internal structures. Our faculty also note that city governments have been remarkably successful in implementing dynamic and effective AI governance systems, and could serve as a model for larger government bodies going forward.

Establishing Trust

As AI becomes increasingly pervasive, trust in the underlying technologies and their governance systems will become more important. In order for AI to be implemented in a responsible and effective manner, the many stakeholders in this ecosystem will need to be able to trust one another and have a degree of trust in the overarching governance systems as well. Moreover, if AI is to be adopted at scale, the public must also be able to trust that the use of AI technologies is ethical and responsible. Public engagement, transparency measures, and effective governance systems are therefore essential elements in building trust.

Ask the Right Questions

Although there may be no such thing as a bad question, the value of asking the right questions is immeasurable. In the context of AI ethics, framing and asking the right questions can lead to critical insights into the design and evaluation frameworks for AI technologies. Moreover, asking questions can help define terms and common purposes to mitigate the risks of misuse and bridge gaps in understanding.

The Role of Independent Research

Currently, funding for AI and technology ethics research is largely dominated by the private sector. While research trends in the ecosystem are positive, showing a growing interest in ethical questions around AI, independent and public research needs to be supported for maximum public benefit. Public research is often more accessible to a global audience and can provide invaluable insights to countries that are just beginning to build their ethical AI frameworks and governance models. Moreover, independent research brings diverse perspectives to the discourse, helping to broaden conversations and sustain a more inclusive discussion around AI ethics.

Humans at the Center

The challenge of AI and data ethics is a social, political, cultural, philosophical, and technological one. Technological advancements offer some solutions, but technology alone is not a panacea for ethical challenges. Interdisciplinary approaches to mitigating these problems offer more effective and sustainable solutions by addressing the myriad factors at play. Philosophical approaches are especially important, as they center humans rather than machines in the search for solutions.

Our Shared Responsibility

In conversations about AI and data ethics, the burden of being alert to, and diagnosing the risks of, these technologies sometimes appears to fall solely on individual users and their representatives. A collective approach to risk assessment and mitigation is needed instead. Rather than leaving this work to individuals, the many stakeholders in the ecosystem could share the responsibility of upholding fair and ethical practices to address risks and ethical concerns. This shared responsibility would also require a shared awareness of risks, counteracting the ethical blind spots in AI technologies, so that stakeholders at all levels understand the risks and how they can be addressed.

The Challenge and Potential for a Global Framework

While our faculty agree that a global framework would be an important step in implementing ethical AI technologies, they also find that the borderless nature of technology and the diversity of the field pose challenges to building effective governance frameworks. In the present context, collaboration and shared knowledge are going to be important in first establishing an inclusive global community before we can move towards global regulatory frameworks. Establishing a common language around AI and sharing best practices across contexts will go a long way toward bridging gaps to help level the playing field.

As AI technologies become increasingly common, ethical considerations will no doubt become more important as well. If we wish to avoid transposing inequalities from the analog world to the digital one, then we will need a robust and balanced AI ethics discourse. Through the AI Ethics: Global Perspectives course, we have created a platform to facilitate this discourse by engaging the many perspectives that exist in the ecosystem. We will continue to bring together voices from around the world and across industries to foster these important conversations around ethics and AI.

Learn more about the course and our latest modules at aiethicscourse.org.
