Global Perspectives on AI Ethics Panel #6: Unpacking the challenges of ethics in AI, the intersection of AI and data activism, and the philosophy of AI in smart cities

Sampriti Saxena
Jan 6

AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics, bringing together diverse perspectives from the field of ethical AI to raise awareness and help institutions work towards more responsible use.

The sixth instalment of Global Perspectives on AI Ethics was held on Tuesday, December 14, and featured insightful reflections from:

  • Renée Cummings, Data Activist in Residence at the University of Virginia, USA
  • Ricardo Baeza-Yates, Director of Research at the Institute for Experiential AI of Northeastern University, USA
  • Viviana Polisena, Professor-Researcher at the Catholic University of Córdoba, Argentina

AI Ethics: Global Perspectives course leads Julia Stoyanovich (Associate Professor of Computer Science and Engineering and of Data Science, and Director of the Center for Responsible AI at NYU) and Stefaan Verhulst (Co-Founder and Chief Research and Development Officer of The GovLab) moderated the discussion. The panelists explored the intersection of data activism and bias in AI, the rapid development of AI and the risks associated with its implementation, and the need for a more robust understanding of the ethical implications of AI in the field.

Understanding data activism in the field of AI

To open the panel, Julia asked Renée Cummings to speak about the intersection of AI ethics and data activism and the ways in which we can foster data activism in the field.

Renée outlined the adoption of data activism in the context of AI and shared numerous examples of organisations and individuals playing a key role in this space, such as Data for Black Lives, AI for Good, and the work of Joy Buolamwini, Cathy O’Neil and Ruha Benjamin. When asked to describe data activism, she replied:

“Data activism to me is the lifeblood of data ethics and really it is an extraordinary way for us to rethink the ways in which we are doing data ethics. Because for many people at the moment, it seems to be something that is very passive and people want something that is active and something that is proactive. I think now that we are talking about data protection and data privacy and all those great things that spiral data activism into action is us understanding that data is beyond an economic model. It has a social life. It has a cultural life. It has a political life. And we are using data at the moment to create, or to reinvent, many of the systems that we have used in a society. We really have to think about data activism as a way to bring the requisite social and political change that is required to ensure that we really have responsible and trustworthy AI that deals with issues of accountability and transparency and explainability.”

Renée went on to unpack strategies for fostering data activism in the field of AI. She pointed to a strong, interdisciplinary understanding of the power of data as the foundation for data activism, and explained how applications of AI and data can drive intergenerational trauma, using the example of facial recognition technology in criminal justice systems. She concluded her remarks with a call to action: to use data activism to rethink and reimagine data and AI ethics through a risk- and rights-based approach as we redesign our societies through data.

The impacts of bias on AI ethics

Following this discussion, Stefaan invited Ricardo Baeza-Yates to give a brief overview of his lecture on the challenges of ethics in AI, focusing on the issue of bias. Ricardo has worked on understanding and mitigating bias in technology for over ten years, beginning at a time when bias was not yet a well-defined problem in the field. He explained that awareness is key to mitigating bias, especially since it is impossible to be truly rid of it:

“Part of the problem is that many times we are not aware [of bias] and the first step to solve it is to become aware of it. Some people don’t want to become aware of it when we go to social and cognitive biases. So for that reason, because we are not always aware, we can only mitigate the problem. I don’t think we can get rid of the problem, so at the beginning I used the word ‘de-bias’ but I think it is better not to use it because that is also biased to what we can do and that’s not true most of the time especially for the [biases] we don’t know.”
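Ricardo’s point that awareness precedes mitigation can be made concrete with a simple measurement. The sketch below is a purely illustrative example, not something presented in the panel: the metric (disparate impact), the toy data and the 0.8 rule of thumb are all our assumptions. It computes per-group rates of favourable predictions for a hypothetical classifier and flags a disparity worth investigating:

```python
# A minimal, illustrative sketch of surfacing group-level disparities in a
# classifier's outputs. The data, metric and threshold below are assumptions
# for illustration, not material from the panel.

from collections import defaultdict

def positive_rates(predictions, groups):
    """Rate of positive (favourable) predictions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group positive rate.

    The 'four-fifths' rule of thumb flags ratios below 0.8, though no
    single number can certify a system as unbiased."""
    return min(rates.values()) / max(rates.values())

# Hypothetical predictions (1 = favourable outcome) and group labels.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = positive_rates(preds, groups)
print(rates)                    # {'a': 0.8, 'b': 0.4}
print(disparate_impact(rates))  # 0.5 -- a disparity worth investigating
```

As Ricardo cautions, no single number can “de-bias” a system; a measurement like this only makes one particular disparity visible so that people can act on it.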

The discussion later shifted from bias to the similarities between privacy and ethics, and how privacy can be incorporated into conversations around AI ethics. Ricardo pointed to three points of intersection between privacy and ethics: first, questions of privacy and data protection are critical to understanding the ethical implications of AI; second, both privacy and ethics are informed by networks and communities and are grounded in shared values and trusted systems; and third, ethics and privacy are collective processes that require the involvement of actors at every level of a system, from a company’s providers to its clients.

Ricardo finished his explanation of privacy and ethics by unpacking the idea that you cannot have AI ethics without first having ethics. To illustrate this idea, he looked to the example of Big Tech:

“In Big Tech, we are seeing that basically part of the problem is that they have an ethical blindspot, where their goals really don’t care about what will happen with the rest of the world to achieve those goals. And that is a problem because we need not only regulation at the legal level but maybe more awareness inside companies and inside people that they need to balance their ethics with the ethics of their companies, which is a very tough problem […] because it comes down to your job versus your belief in what’s right.”

AI ethics in smart cities

Next, Julia turned the stage over to Viviana Polisena to elaborate on her lecture about the philosophy of AI in the case of smart cities.

Viviana began her remarks with an overview of some of the challenges associated with the implementation of AI in smart cities and how philosophy can play a role in understanding and mitigating these challenges. She shared that the key to achieving the ethical implementation of AI in smart cities is:

“The responsibility for each decision should always return to the human being and not to the machine. The use of AI should strive to decrease pain and suffering, promote well being and compassion for life. And for any of these to be possible, companies and government agencies that develop or use AI should be accountable to their citizens.”

She highlighted the importance of inclusivity and equality in approaching ethical implementations of AI, as well as the need to develop a common ethos among the different actors engaged in the process. Later, the discussion moved towards the philosophy of AI and specifically the fundamental principle that “all artificial artefacts must support natural life and human life and non-human life”. When talking about this principle in relation to rights, smart cities and AI, Viviana said:

“If artificial intelligence ever succeeds in having rights, these should never overlap or contradict human rights. The care and protection of human and non-human life must be the centre of innovation. All artificial or virtual artefacts should be programmed with algorithms that collaborate with respect and defence of human and non-human life […] From these notions an ethical body can be built suitable for the new era, to which could be added the principles of bioethics: autonomy, justice and beneficence. No artefact or algorithm should be allowed to cause harm to life.”

Next steps: Diverse consciences in AI, proactive approaches to AI ethics and addressing the delay between ethics and technology

To conclude, Julia and Stefaan opened the panel to discussions among the panelists to unpack the various points of intersection between their different areas of focus.

Renée responded first, emphasising the important role of data activism in criminal justice using smart cities as an example. She also highlighted the centrality of civil rights and human factors in discussions of criminal justice, data activism and AI ethics. In drawing the three together, she said, “Criminal justice should be the conscience of artificial intelligence, and data activism should be the conscience of data scientists.”

Next, Ricardo talked about the delay between ethics and technology, building on earlier remarks from Viviana and Renée. He said, “If you look at history, ethics is always running behind technology, and this is not the first time.” He cited numerous examples from history, such as the development of chemical and nuclear weapons. Ricardo also agreed with Renée, saying that justice is like the conscience of AI, answering key questions such as whether we should apply AI and in which situations it can be considered ethical to implement AI technologies.

Viviana’s remarks, translated by Ricardo, focused on the rapid development of AI technologies and the need to slow these processes down so that ethical and philosophical considerations can be taken into account. She called for deeper reflection and understanding, as well as increased collaboration, as ways to address the ethical implications of new AI technologies.

Lastly, Stefaan raised the idea of the oil spill model in response to threats and dangers, wherein we need a tragedy or emergency before anyone acts to address existing concerns. All three panelists shared their reservations around this approach to AI ethics and advocated for the use of ethical AI, activism and awareness to avoid such a calamity. Renée captured the need for AI ethics when she said, “We don’t ever want to get to the oil spill in AI, because given its power and its pervasiveness, an AI oil spill I don’t think is something the world ever wants to experience. So this is why we need to take that proactive approach to what we’re doing.”

To watch this webinar and others, visit our course site. We post new modules and other exciting content every month. To receive updates on the course, please sign up at http://bit.ly/ai-ethics-course.