Understanding Ethical AI Practices across Diverse Contexts

Global Perspectives on AI Ethics Panel #8

Clockwise from the top left corner: Stefaan Verhulst, Piet Naudé, Ashley Casovan and Christoph Lütge.

AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics. The course brings together diverse perspectives from the field of ethical AI to raise awareness and help institutions work towards more responsible use.

The eighth installment of Global Perspectives on AI Ethics was held on Thursday, April 14. This month, we were joined by the following distinguished faculty for an insightful discussion centered on diverse ethical AI practices from around the world:

  • Ashley Casovan, Executive Director, Responsible AI Institute, Montreal, Canada
  • Christoph Lütge, Director, TUM Institute for Ethics in Artificial Intelligence (IEAI), Munich, Germany
  • Piet Naudé, Professor of Ethics and TUM Ambassador, University of Stellenbosch Business School, Bellville, South Africa

AI Ethics: Global Perspectives course lead Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab, moderated the discussion. Over the hour-long conversation, panelists unpacked the importance of responsible AI and the many challenges in implementing responsible AI practices across different fields and contexts. They also explored the idea of auditing technology and the key role non-governmental organizations could play in this process going forward.

Implementing responsible AI at scale

To open the panel, Stefaan asked Ashley Casovan to speak about the role of context in different approaches to responsible AI practices.

Ashley began by highlighting the critical role of context in determining different approaches to implementing responsible AI practices in the field. She explained that, given the diverse use cases of AI technology, it would be nearly impossible to have one system of rules that governs the industry as a whole. Government guidelines and similar policy tools, she felt, could serve as important guardrails for responsible AI and act as the foundation for further governance measures, but would not necessarily form strong governance frameworks on their own. Instead, a tool like an algorithmic impact assessment, which can be tailored to meet a specific context’s standards of practice, would be required for more effective contextual governance. In further describing the role of context in implementing responsible AI practices, Ashley said:

“We recognise that having multiple, different stakeholders with different types of perspectives both in academia, civil society but in industry itself, it’s really important to actually test out again what does ‘good’ look like and what is acceptable within different cultures as well.”

Building on the role of context in driving responsible AI, the conversation shifted to the actors involved in promoting responsible AI practices and responsible AI certifications. Ashley began by pointing out the role of auditors in evaluating responsible AI, before diving into the different levels of evaluation. At present, she shared, most auditing is done at the level of the organization, missing the nuance of use-case and product-level analyses. She made the case for system-level evaluations and conformity assessments as potential solutions to bridge this gap in current practice. She offered another potential approach to auditing technology in the form of independent entities, similar to those that govern cybersecurity. Part of the solution, she suggested, is to train auditors through independent programs like the Responsible AI Institute’s newly launched RAII Certification program. Ashley concluded the discussion by sharing current examples of auditing networks, such as an ongoing project working to calculate the carbon footprint of streaming services.

The case of autonomous vehicles

Following this discussion, Stefaan invited Christoph Lütge to elaborate on his module on the ethics of autonomous vehicles. Christoph began by addressing the question of how applicable the Trolley Problem thought experiment remains in the context of autonomous vehicles. In response, he briefly summarized the Trolley Problem before drawing parallels between its evolution and that of John Rawls’ veil of ignorance. He argued that even though Rawls’ model was impractical to use, other models like James Buchanan’s veil of uncertainty were unable to replace it, in part because concepts of uncertainty and risk were not as exciting to the public. Similarly, though the Trolley Problem is no longer as relevant as it once was in analyzing risk, it has not yet been replaced by newer thought experiments.

Later the discussion shifted away from the philosophical lens to look at the regulatory environment around the use of AI technology in autonomous vehicles. In talking about regulating AI to promote the ethical use of these technologies, Christoph said:

“Within the range of AI applications, the autonomous driving field is one that really is desperately in need of regulation. The companies are also asking for it. They need to have more certainty about what can be done, what should be done and what is not to be done. And while we have seen some efforts, in the US for example, on the state level, but not so much on the national level in other countries as well… the move to regulation is still one that leaves a lot of questions open because some questions cannot be solved in the abstract, especially about this technology.”

In practice, Christoph explained, effective regulation will have to take a multi-level, multi-stakeholder approach. While general principles are important, he emphasized that a successful regulatory framework will also largely depend on the details:

“It’s all about the details. It’s about the developing process. It’s about getting additional competencies into the development and design stage. And there are a lot of questions still to be asked, which are not purely technical ones, but are ethical one in the general sense, so we need this collaboration again between ethics and technology.”

To conclude, Christoph shared a major takeaway from his time at the World AI Forum, where he was happy to find that ethics is very much at the center of conversations around AI technologies. Principles, such as responsibility, trustworthiness, transparency and explainability to name a few, formed the basis of much of the work presented. To sum up his perspective, Christoph shared a favorite saying: “AI will not fly without ethics.”

Ethical ambiguities in approaching AI technologies

Last but not least, Stefaan turned the stage over to Piet Naudé to talk about his experience with different approaches to ethical AI in an emerging markets context and the implications such an ecosystem may have for the implementation and regulation of AI technologies.

Drawing on his work in South Africa, Piet opened with two important philosophical points building upon Christoph’s comments before him:

“The purpose of technology is not a technological question…You cannot move on without ethics — it’s embedded and there’s no way you could. And secondly, I think as ethicists, we should be careful not to fall into either of two traps — it’s either to fully embrace artificial intelligence as a magical solution for everything and just endorse it ethically, and the other is to just vilify it and to say this is bad and the world is going down the drain… I think we need a very very reasoned and nuanced approach to this. Like all human systems, AI and other technologies are always also good and always also potentially misused.”

Returning to the context of emerging markets, Piet focused on the questions of employment, automation and a post-work society. In the current context, he argued, a post-work society is an intangible idea and would only really be feasible with government infrastructure that supports systems like a universal basic income. In the so-called developing world, such a system looks near impossible given the scale of countries like China and India, for example. He also made the case for stable democracies and guaranteed freedoms in a post-work transition to mitigate the risk of abuses of power. Piet pointed to independent oversight through non-governmental organizations as a potential solution to such a risk, but also weighed it against the challenge of weak institutions in emerging economies.

The final challenge he highlighted was the massive digital divide, both within and between countries, which creates barriers to the ethical implementation of AI technologies. In the face of these challenges, however, Piet remained cautiously optimistic, pointing to recent technological innovations in the banking, education and healthcare sectors, which have brought tremendous benefits to society but are also accompanied by increasing inequality and unethical practices.

Following this discussion, Stefaan asked Piet whether it is ethically conscionable to introduce technologies where they may be abused. In his reply, Piet drew out the value of asking this question itself, as it brings out our uncertainty surrounding technology and the importance of ethics. From his perspective, non-governmental organizations will be invaluable in overseeing the ethical implementation of new technologies going forward.

Next steps: the value of nuance, diversity and equality in implementing and regulating AI technologies

After an engaging question-and-answer session, with audience members posing thought-provoking questions to our panelists, Stefaan opened the stage for final takeaways. He asked each panelist: if they had a magic wand, what one thing would they prioritize in the context of AI ethics and responsible AI?

Ashley responded first, calling for a more nuanced approach to questions around ethical and responsible AI. She argued that without getting down to the details of what ‘good’ looks like in the context of regulation, or what the safe stopping distance between two autonomous vehicles ought to be, for example, conversations around ethical and responsible AI would remain just conversations without leading to any tangible change in the field. She also underscored the importance of regulation in the industry.

Next, Christoph shared that he believed in the importance of bringing different actors together in conversation to promote effective action around ethical AI. He pointed to past examples of weaker regulation and felt that by bringing diverse perspectives to the center of policy discussion, we would be able to achieve more sustainable change in bringing ethics further into the field of AI.

Lastly, Piet wished to see ethics focused on equality of access to, and distribution of, the benefits of AI. With technology at the center of our transforming society, access to technology and its benefits is no longer optional; it is quickly becoming a requirement for social participation and inclusion.

To watch this webinar and others, visit our course site here. We post new modules and other exciting content every month. To receive updates on the course, please sign up at http://bit.ly/ai-ethics-course.

Sampriti Saxena