Driving Systemic and Sustainable Change through AI and Data Ethics
Global Perspectives on AI Ethics Panel #10
AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics. The course brings together diverse perspectives from the field of ethical AI to raise awareness and help institutions work toward more responsible use.
The tenth installment of Global Perspectives on AI Ethics was held on Friday, September 30th. This month, we were joined by the following distinguished faculty for an insightful discussion:
- Cansu Canca, Founder and Director of the AI Ethics Lab, Cambridge, MA
- Javier Camacho Ibáñez, Teacher and Researcher at ICADE, Comillas Pontifical University and Director, Sostenibilidad Ética, Madrid, Spain
- José-Luis Fernández-Fernández, Professor at ICADE, Comillas Pontifical University, Madrid, Spain
- Nuria Oliver, Scientific Director and Co-Founder of the ELLIS Alicante Foundation, Alicante, Spain
The discussion was moderated by Stefaan Verhulst, Co-Founder and Chief R&D Officer of The GovLab and one of our AI Ethics: Global Perspectives course leads. In an hour-long discussion, interspersed with questions from the moderator and the audience, the panelists delved into a range of topics, from different approaches to ethical AI in the field, to the business case for ethics, to the role of data ethics in global crises.
The PiE Model in Practice
To open the panel, Stefaan invited Cansu Canca to talk about her module, “An Ethics Model for Innovation: The PiE Model”, focusing especially on how to implement the PiE model in practice.
Cansu began by sharing a bit of background on the Puzzle-solving in Ethics, or PiE, model with the audience. The model works to bring ethics into the innovation process in a collaborative, dynamic, and inclusive manner. It consists of four major components, all rooted in AI ethics: (1) Analysis; (2) Roadmaps; (3) Training; and (4) Strategy. The end goal of the model is to drive systemic change, so that with each new project, the organization is armed with all the necessary tools and processes to implement ethical AI. In talking about the goals of the PiE model, Cansu said:
“We want to make sure that we can put in place systems, guidelines, tools, organizational structures, so that [responsible AI] becomes a common practice. You don’t discover the wheel from the start every time you get a new project, but you have guidelines and tools to help you, and you know who to reach out to when there’s a question.”
Building on the idea of driving systemic change and working across the AI ethics lifecycle, Stefaan asked Cansu to share some of the key trigger points that call for ethical assessment and how these landmarks factor into the roadmap. Cansu began her answer by highlighting what shouldn't be done: engaging in an ethical assessment only at the end of a project, right before a product launches, or only in the early days of design, after which the team forgets about ethics. Ideally, according to Cansu, ethics would factor into the development process from start to finish, taking shape alongside the product itself.
Following this, the conversation shifted to the different sectors where the PiE model may be applied. Cansu shared that since its official launch in 2018, the PiE model has been applied across a number of industries, from finance to telecommunications to security services, in both the public and private sectors. She also pointed out the importance of working with actors at all levels within organizations, both to balance the responsibilities of ethical AI and to ensure the adoption of ethical practices in a bottom-up manner.
Fostering Moral Agency and Responsibility in the Ecosystem
Following this discussion, Stefaan asked Javier Camacho Ibáñez and José-Luis Fernández-Fernández about the manifestations of moral agency in AI technologies and why businesses should be concerned with moral agency.
José-Luis responded first, drawing connections between Cansu's work on the PiE model and their work on moral agency. He pointed out that there is an important distinction to be made between moral good practice and moral philosophy. In talking about the focus of his and Javier's work, José-Luis said, "We are putting the focus on moral philosophy, and not just on moral good practice. And we are looking for good practices of course, but perhaps the best practices — it starts with a good theory."
In terms of moral agency, José-Luis explained that in traditional philosophical theories, the moral agent is a human person, due to their freedom to make choices and their natural intelligence. Interestingly, when compared to the capabilities of AI, human intelligence can appear limited. While one might argue that this should extend moral agency to AI, José-Luis contended that since AI is a product of human intelligence, moral agency remains a human responsibility. In terms of how this moral responsibility manifests in the field, he made the case for industry actors at all levels to adopt moral responsibility while keeping humans at the center. He concluded his remarks by underlining the role of global administration in driving collective, systems-level change.
Moving away slightly from the question of moral agency, Stefaan addressed the second question to Javier, asking him about the importance of building trust between actors in the AI ecosystem.
Drawing on his background in the business world, Javier first pointed out that AI ethics can learn a lot from the advances of business ethics and the many ethical theories founded in that field. He highlighted the role of AI as a mediator for moral agency between human and corporate agents, echoing José-Luis' point that moral responsibility remains solely with human agents. What is important, he argued, is to add an ethical dimension to all of the work being done. To further explain this concept, Javier shared the analogy of an iceberg with the audience:
“AI ethics is really the tip of the iceberg, but everything below, so the real important thing is the rest of the issues companies have been working with for a certain period of time: data governance, culture, procedures, etc. So we should not be blinded by the hype on the tip of the iceberg, but really reflect about what is transforming how we adapt our existing procedures, culture, etc. And I think that’s the best way of building trust.”
Data and AI Ethics in Fighting the COVID-19 Pandemic
Last but not least, Stefaan turned the stage over to Nuria Oliver to talk about the role of data and AI ethics in her work leveraging data science to fight the COVID-19 pandemic.
Nuria began by elaborating on the intersection of data and AI ethics and the pandemic. The COVID-19 pandemic, she pointed out, was the first pandemic for which we had such large amounts of data, and assuming the data is an accurate representation of reality, the aspiration is to use data to support policymaking. This is where ethical challenges emerged: What data is being used? Are there privacy implications surrounding the use of this data? How has the data been captured? Are there any underlying biases?
In order to address these challenges, Nuria shared that she had participated in ethical councils (like the one organized by the Valencian government at the start of the pandemic) and championed the use of anonymized mobile data. She also described the importance of multidisciplinary teams and the role of a chief data protection officer in ensuring that ethics remained central to their work. Moreover, Nuria found transparency and data sharing to be a key ethical dimension in their work, saying:
“We also had perhaps an ethical dimension in the very large scale citizen survey that we launched called the COVID19ImpactSurvey. In this case, it was a completely anonymous survey–it is impossible to do any inference of any personal attributes. But it was more an ethical obligation that we felt we had to share the answers from this large scale survey that became very popular with everybody.”
Nuria also briefly discussed the additional ethical dimensions of the pandemic when it came to contact tracing technologies, before turning to some of the lessons from her work that could be applied to future crises, focusing on how we can bring ethics into the use of data and AI in the field.
Nuria shared two major lessons with the audience. The first was the need to accelerate the digitization of public administration services. In her work, she found that public services often lagged behind the private sector in terms of AI and data technologies, which impaired their ability to respond efficiently to the pandemic. The pandemic turned into a catalyst for this much-needed digitization. The second lesson was the need for effective data and AI ethics frameworks. While a number of data protection measures exist, like the GDPR in Europe, Nuria made the case for more effective ethical frameworks as well. While the AI Act begins to address this challenge, Nuria felt that more could be done to promote ethics in the ecosystem.
Next Steps: Building Lasting Ethical AI Solutions
The discussion concluded with an engaging question and answer session, with audience members posing thought-provoking questions to our panelists on topics ranging from the costs of ethical AI, to the business case for ethics, to the social impacts of AI technologies. Each panelist shared key takeaways during the session.
In response to a question about the perceived high costs of ethical AI, Cansu shared her approach of treating ethical frameworks in AI as an investment rather than a burden. While she was clear that there are costs associated with building ethical systems, she felt that framing these costs as investments is a better approach. From a business perspective, the return on investment emerges in regulatory compliance and public perception. As awareness around ethical AI grows, the push from consumers for ethical technologies will only strengthen. This, she believes, will help more organizations take ethics into account in the development of their technologies.
Building off this question, Javier was asked about the business case for AI ethics. In his work, he has found that a major motivating factor behind the business case for ethics is regulatory compliance. In Europe, he pointed out, the AI Act has been a key driver pushing businesses to adopt ethics. He also shared the value of a risk assessment model, where ethical considerations can help businesses effectively evaluate and mitigate risks. In the case of startups, ethics often comes second to questions of survival. However, as ethics becomes more central to conversations in the business ecosystem, Javier was optimistic that this trend will push more businesses, both large and small, to adopt ethical approaches.
Nuria had the last word. She focused on the unique charter of her organization, the ELLIS Alicante Foundation, a unit of the larger ELLIS Network of 35 organizations spread across Europe and Israel. The goal of the ELLIS Network is to contribute to Europe's sovereignty in AI technologies by supporting research across the continent, attracting top talent, and positioning Europe as a global competitor in the AI ecosystem. The ELLIS Alicante Foundation, for its part, is unique in its singular focus on human-centric AI and on promoting the positive social impacts of AI technologies. This work will be central to developing ethical AI applications with humans at the center.
To watch this webinar and others, visit our course site. We post new modules and other exciting content every month. To receive updates on the course, please sign up at http://bit.ly/ai-ethics-course.