Leveraging Diverse Philosophies to Build a Global AI Ethics Discourse

Sampriti Saxena
Data Stewards Network
Jun 17, 2022

Global Perspectives on AI Ethics Panel #9

AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics. The course brings together diverse perspectives from the field of ethical AI to raise awareness and help institutions work towards more responsible use.

The ninth installment of Global Perspectives on AI Ethics was held on Friday, June 3rd. This month, we were joined by the following distinguished faculty for an insightful discussion on the role of diverse philosophies in building a global AI ethics discourse:

  • Favour Borokini, Data and Digital Rights Researcher at Pollicy, Kampala, Uganda
  • Jibu Elias, Content and Research Head at INDIAai, New Delhi, India
  • Tannya Jajal, Regional Resource Manager at VMware, Dubai, UAE

The discussion was moderated by course leads Julia Stoyanovich, Associate Professor and Director of the Center for Responsible AI at NYU, and Stefaan Verhulst, Co-Founder and Chief R&D Officer of The GovLab. In an hour-long discussion, interspersed with questions from the moderators and the audience, the panelists delved into a range of topics, from the different philosophical traditions that inform global ethics to the commodification of responsible AI practices, and more.

Human rights, intersectionality and the importance of critical approaches to ethics

To kick off the conversation, Julia invited Favour Borokini to elaborate on her module “AI and Nigeria: Exploring Threats to Human Rights”.

Favour began by addressing the question of how systems of accountability are evolving to address human rights issues in the AI ecosystem. In response, she briefly summarized the different definitions of accountability before explaining shifting perspectives in the ecosystem. In the past few years alone, more than 100 new guidelines and principles on ethical and responsible AI practices have been published around the world. This, Favour pointed out, signals a growing understanding of the need for regulation in the ecosystem, as well as a preference for the flexibility offered by ethical frameworks. While legal regulatory mechanisms may be more effective, she made the case that they can often be more rigid and harder to implement in practice.

Another central idea from her module was intersectionality. In talking about the importance of intersectionality in designing frameworks for AI ethics, Favour said:

“Intersectionality is a framework that helps everybody understand how various aspects of a person’s identity could impact them and impact the way they are viewed, and therefore, affected by laws, legislations, and definitely technologies like AI…intersecting identities of people place them at a unique axis of oppression that needs original solutions to be addressed.”

Building on the concept of intersectionality and different African ethical frameworks, Julia asked Favour to elaborate on some of the philosophies informing these frameworks and their applicability in the field. Looking at the example of Ubuntu, a communal-based philosophy from Eastern and Southern Africa, Favour highlighted the importance of taking a critical approach when implementing these philosophies in practice. She agreed that philosophical frameworks can act as a good source of values but she also felt that they may need to be updated to more accurately reflect the realities of our societies today. For example, the communal focus of Ubuntu faces the risk of minimizing individuality in our new contexts. To avoid this, Favour argued that philosophical approaches should be applied dynamically in response to the on-the-ground realities of the context where they are being implemented.

Broadening the discourse

Following this discussion, Stefaan asked Jibu Elias how we can broaden the current discourse on AI ethics to bring in different philosophical approaches and cultural nuances. Jibu shared that, in his experience, the element of culture is noticeably absent from the current discourse. Since culture is often a major determinant of one's worldview, he felt that it ought to play a more central role in conversations around ethical AI. To elaborate on this idea, he used the example of social courtesies in India, pointing out how the different actions of a guest can carry specific subtexts that vary depending on the cultural setting — the idea being that what is polite in India may be disrespectful in another country. Applying this argument to the field of AI ethics, Jibu said, “Culture plays a critical role in how we perceive the world around us and definitely when it comes to AI, our ethical thoughts are built on top of culture as well.”

Shifting gears slightly, Jibu went on to discuss Eastern approaches to ethical AI. Although there is a tendency to approach Eastern cultures as one homogenous entity, Jibu has found that each country’s culture and their approach to AI varies greatly, and they cannot easily be clubbed together as one. In explaining these differences, Jibu said:

“There is this study that pointed out how you approach AI from a tool point of view or a partner point of view…and interestingly while Korea sits on the extreme left of the spectrum, Japan sits on the other extreme end. And this is reflected not just in the culture, it is reflected in the policy space, even in the regulations, even in the frameworks they are deploying. Now that’s a very simple example of why we need a cultural approach.”

To conclude, Jibu shared some strategies for broadening the current discourse around AI ethics. The first step, he believes, is in accepting that there are other ideas out there that could hold value in these discussions. He also felt that the discourse should expand to look at the diverse applications of AI and their respective challenges and stakeholders more from a social science perspective than a technological one. Finally, he pointed out that regional diversities are important to consider and that a ‘one size fits all’ approach to the discourse will not be sustainable in the long run.

Ethics in a post-AI, post-automation world

Last but not least, Julia turned the stage over to Tannya Jajal to talk about her work in building future fluencies and how she sees the role of ethics evolving in a post-automation world.

Tannya began by defining future fluencies in the education space. Her work is centered around building educational reforms to empower students with the skills and mindsets they will need to thrive in the future in light of the rise of technology. She explained that future fluencies are not siloed; rather, they emphasize multidisciplinary thinking and approaches to problem-solving. To elaborate further on this approach, Tannya said:

“For most of human history, we have focused on Newtonian, Cartesian ways of life, mind-body dualism, largely materialistic, largely focused on what are typically ‘masculine’ traits, which are very important and have a huge role to play in society and in technological innovation. But now we need to kind of bring back, we need to balance the weighing scale so to say, which means that we need to bring in some concepts around Eastern philosophy, specifically around the holistic aspect of Eastern philosophy when it comes to solving global grand challenges.”

At the center of building these future fluencies are principles of diversity and inclusivity. The hope, Tannya shared, is that by laying these principles down as a foundation, approaches to future technological innovation will consider them ‘no-brainers’, thereby increasing accessibility to technology for all. She pointed out that companies like Google and IBM are already beginning to invest in accessibility technologies, but mainstream investment has been slow to catch up. She predicts that we may see the widespread adoption of these technologies in the next 30 to 50 years, as new generations of innovators enter the market.

Like the principles of diversity and inclusivity, ethics will also play a key role in a post-AI, post-automation world according to Tannya. For her, a post-automation world is one where we are on an exponential trajectory towards the adoption of Artificial General Intelligence (AGI), and we are beginning to outsource cognitive functions to AI systems. In such a world, she finds that ethics will have an important role to play in ensuring that these systems do not pose a threat to humanity. In order to broaden the discourse and bring in voices from around the world, especially from emerging economies, Tannya argues that we must first level the playing field by addressing basic necessities. It is only once we are all in a position of relative stability that we can expand our focus to begin to tackle issues of ethics.

Next steps: mindfulness, inclusivity and values as key factors in building global ethical AI discourses

The discussion concluded with an engaging question and answer session, with audience members posing thought-provoking questions to our panelists on topics ranging from holding corporations accountable to moral relativism to values of sustainability in the ecosystem. Each panelist shared key takeaways during the session.

In response to a question about ethical norms that transcend cultures, Jibu echoed Favour’s sentiments from earlier in the discussion, saying that it is important to be mindful of what you take and what you apply from different philosophical frameworks. While there are transcendent norms, we must not ignore the risk of inadvertently perpetuating injustices through technology. The idea of AI ethics should be to use technology for good, with the goal of decreasing pain and suffering.

In answering the question “Which principles would you like to see more discussed to widen the approaches and outlooks towards AI ethics?”, Favour highlighted the importance of bringing marginalized voices into the center of the discourse to bring about lasting positive change. She also pointed out the value of moral relativism, cultural sensitivity and open-mindedness in conversations around different approaches to ethical AI, as we work to broaden the discourse and build bridges between diverse cultures and communities around the world.

Tannya had the last word. She focused on the question of how we can teach values sustainably at a young age, offering two important lessons from her work in the education space. The first is that we must move away from rote memorization as a teaching method and instead move towards project-based teaching and experiential learning, which have proven much more effective in the long run. Her second insight was to engage different philosophical worldviews, especially certain Eastern philosophies and mysticisms, for their relaxed sense of selfhood, thereby encouraging children to widen their sense of self to incorporate the environment and other people. Going forward, she felt that these values would be crucial in a post-AI world.

To watch this webinar and others, visit our course site here. We post new modules and other exciting content every month. To receive updates on the course, please sign up at http://bit.ly/ai-ethics-course.
