The Many Aspects of Applying Ethics to AI

Sampriti Saxena
Data Stewards Network
9 min read · Nov 22, 2022

Global Perspectives on AI Ethics Panel #11

Group photo of the panelists and moderators

AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics. The course brings together diverse perspectives from the field of ethical AI to raise awareness and help institutions work towards the more responsible use of AI.

The eleventh installment of Global Perspectives on AI Ethics was held on Monday, November 14th. This month, we were joined by the following distinguished faculty for an engaging conversation:

  • Adrian Gonzalez Sanchez, Senior Cloud, Data and AI Specialist for Public Sector at Microsoft (Montreal, QC)
  • Branka Panic, Founder and Executive Director of AI for Peace (San Francisco, CA)
  • Carl Mörch, Co-Manager at FARI — AI Institute for the Common Good (Brussels, Belgium)
  • Emmanuel R. Goffi, Co-Founder and Co-Director of the Global AI Ethics Institute (Paris, France)

The discussion was co-moderated by two of our course leads: Julia Stoyanovich, Director of the Center for Responsible AI at NYU, and Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The Governance Lab. Over the course of the conversation, the panelists explored a wide range of topics, from the different narratives surrounding AI ethics, to the interplay between law and ethics, to the applications of AI in healthcare and peacebuilding work.

The Intersection of Ethical and Legal Approaches

To kick off the conversation, Stefaan invited Adrian Gonzalez Sanchez to talk about the intersection of law and ethics when it comes to regulating data and AI technologies.

Adrian began by pointing out the value of taking a combined ethical and legal approach to regulating technology. He argued that ethics are a key tool for identifying the potential risks of a new technology, while legal tools can effectively address those risks through actionable interventions. This approach, he noted, can also help embed critical accountability measures within the process. Finally, Adrian highlighted the role of turning points in motivating regulatory action around technology, citing the example of facial recognition applications, saying:

“All of the companies–and this is interesting because it’s mostly the organizations rather than the government–the organizations are saying ‘Hey, we cannot use facial recognition for justice or police purposes in the United States until we have proper regulation.’ And then the regulation comes. And so there is always the notion of a turning point to educate, to alert the stakeholders to the need for regulation.”

Drawing on this idea of a turning point, Stefaan asked Adrian where ethical and legal responses fit in the balance between reactive and proactive policy interventions. Adrian responded that he sees a sequential relationship between the two: ethics define principles and practices, which in turn lead to legal regulations. And while regulations are usually reactive in nature, ethics can help start conversations that lead to more proactive approaches.

Before turning the floor over to Julia, Stefaan asked Adrian about the link between regulatory models and diverse geographies. Using the example of the EU and Canada’s data and AI policies, Adrian explained how different regulatory approaches influence one another. In talking about diverse regulatory approaches, he said, “While I’m sure we would all like to do the right thing, what is right may vary from context to context.”

Building Complex Narratives to Embrace Diversity

Following this conversation, Julia asked Emmanuel Goffi to elaborate on his module, “Ethics Applied to AI: A Matter of Culture and Words”, looking at the difference between the framings of ‘AI ethics’ and ‘ethics applied to AI’ (EA2AI), as well as the ways in which the ecosystem can foster more ethical and inclusive discourses and approaches.

In talking about the difference between AI ethics and EA2AI, Emmanuel focused on the importance of words in shaping perceptions around a topic. The framing of ‘AI ethics’, he argued, creates a false sense that there exists a specific form of ethics either developed by or concerning solely AI technologies, discounting the human role in AI. Julia seconded this idea, explaining that such a framing deliberately places the burden of ethical behavior on the machine, absolving the human stakeholders of their responsibility. EA2AI, on the other hand, Emmanuel argued, allows for the complexity of ethics, philosophy and technology, driving a more nuanced dialogue.

Moving away from the discussion of different framings, Julia asked Emmanuel to talk about the ways in which we can foster more ethical behavior in the ecosystem. Emmanuel suggested four key interventions: first, building public literacy through training courses; second, distancing the AI ethics discourse from public relations and marketing initiatives around technology; third, thinking critically about the difference between compliance and ethics; and fourth, remaining open-minded and thinking for oneself. Emmanuel said:

“Look, there are different perspectives. You can see that from a different angle. Do not buy things, do not take them for granted…Be open-minded. Try to find information elsewhere, not only in the documents provided by the European Union or UNESCO or big tech companies. Think by yourself…Think about your knowledge.”

As Emmanuel had to leave early, Julia paused here for a question from the audience about the ways in which we can capture diverse cultural perceptions of AI ethics in the discourse. Pointing to the growing interest in the field and the work of scholars like Amana Raquib, Arthur Gwagwa and Soraj Hongladarom, Emmanuel explained that we need to dive in and embrace the complexity of ethical discourses to challenge the current status quo. Doing so will enable the industry to expand its understanding of what is and is not acceptable, and to become more inclusive.

Designing Ethical AI for Niche Applications

Next, Stefaan turned the stage over to Carl Mörch to discuss the role of AI ethics in healthcare, and more specifically in the field of dentistry.

Drawing on his background working on data and technology initiatives in dentistry with Maxime Ducret, Carl shared that there are high hopes for AI and the improvements it could bring to the field; however, there is less awareness of the risks of these technologies and a lack of guiding ethics to govern their adoption. He shared that the ecosystem today tends to focus predominantly on the adoption of principles such as prudence, privacy and responsibility, and highlighted the leadership role of the private sector in driving these efforts. When it comes to approaching solutions to this challenge, Carl described the kinds of tools that could be helpful to developers across the lifecycle, saying:

“[The tools] need to help [developers] better grasp and understand what could be the specific ethical challenges of the people working in healthcare, even related to the use of these technologies. And for the people working in healthcare, what are also the specific ethical challenges that I should have in mind when it comes to using AI and data?”

To illustrate the importance of ethical considerations around AI technologies in dentistry, Carl unpacked the example of Digital Smile Design applications. These applications visualize the outcomes of dental treatments on a patient’s smile, generally promising transformative results. In answer to Stefaan’s question about some of the ethical considerations in such a system, Carl pointed out the risk of inflated promises made by these visualizations, which can transform AI into a marketing tool and raise further questions around equity and sustainability in the use of these technologies.

Building on the themes of equity and sustainability, Carl advocated for the ecosystem to consider the use of AI in the context of its moral responsibility to the broader social, political, economic and cultural environments. While this raises complex questions that can affect the innovation lifecycle, it is important work. Carl also cited the need for more tools and standards of practice targeting non-ethicist stakeholders, like developers working in applied fields of AI.

Going Beyond Military Use: AI for Peace

Last but not least, Julia invited Branka Panic to talk about AI for Peace and the key role of ethics in this space.

The first question Julia posed to Branka concerned the diverse military uses of AI and how these affect the ethical application of AI. In response, Branka explained that while the military uses of AI are continuously growing and evolving, AI for Peace, that is, AI technologies used to avoid conflict and sustain peace, is lagging behind. In order to move forward, she said:

“We must search for answers in both directions, so what is the ethical application of AI, both in military AI and in AI for Peace? And why is military AI taking so much space in this conversation? Because of course AI has this potential, even with over-promise…there are still so many applications of AI we see that can change war and the way wars are fought in the future.”

Branka also emphasized the importance of taking conversations around the military use of AI beyond the question of “killer robots” to explore how AI technologies can be effectively used to build and sustain peace in the long run while keeping ethics at the center. She made the case for more interdisciplinary conversations and opportunities for shared learning in the ecosystem, drawing connections between her work in AI for Peace, Carl’s work in healthcare and Adrian’s work in law as a case in point.

Shifting slightly away from this conversation, Julia asked Branka how the ecosystem can engage actors on the fringes of the AI ethics discourse to bring their views to the table, especially in situations where these actors could benefit from ethical peacebuilding technologies. Branka began by drawing attention to the importance of understanding the specific context of conflict or fragile situations, which bring a unique set of vulnerabilities to the table. She shared that, rather than being respected as equal stakeholders in the ecosystem, these contexts are more often used as “test beds for new AI applications” due to a lack of regulatory oversight to protect individuals. Furthermore, actors from these contexts are engaged neither in the design and development of AI technologies nor in the governance of these systems, impairing the implementation of those systems in practice.

To effectively address this challenge of exclusion and discrimination, Branka advocated for the adoption of more participatory design approaches, not only in the design and development of technologies but also in the policy lifecycle. This would provide much-needed localized context to ensure the ethical adoption and use of new technologies. Finally, she highlighted the value of ‘Do no harm’ principles, advocating for their use across the diverse sectors using AI technologies today.

Moving Forward: The Value of Inclusivity and Education

The panel concluded with an engaging question and answer session, in which audience members posed a number of thought-provoking questions and insights, ranging from the burden of responsibility in governance, to the inclusion of spirituality in discussions of ethics, to the challenge of information silos in the ecosystem. Each panelist shared important takeaways during the session.

In response to a question about ownership of AI regulations, all three panelists stressed the value of a multi-layered, multi-stakeholder approach to governance. Adrian pointed out that different actors have a huge opportunity to learn from one another, especially in a fast-evolving ecosystem like AI. Carl and Branka made the case for strengthening AI literacies to ensure that stakeholders at all levels are aware of the implications of AI technologies for their lives and their work.

Later on, in talking about the need for a multi-dimensional, multi-stakeholder approach to breaking down knowledge silos in the ecosystem, the panelists underlined the value of inclusivity: the need not only to bring underrepresented voices to the table through stronger participatory approaches, but also to bring diverse fields together to promote an exchange of knowledge and expertise across industries. Similar arguments emerged when looking at the ways in which spirituality could be brought into conversations around AI ethics. As the panel drew to a close, Adrian, Branka and Carl all emphasized the indispensable value of learning: from our work, from our mistakes, and from each other.

To watch this webinar and others, visit our course site. We post new modules and other exciting content every month. To receive updates on the course, please sign up at bit.ly/ai-ethics-course.
