Global Perspectives on AI Ethics Panel #4: data equity, perspectives from the Global South, high-risk profiling systems, and the role of universities
AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics. The course brings together diverse perspectives from the field of ethical AI, to raise awareness and help institutions work towards more responsible use.
On Wednesday, May 11, the following AI experts participated in the fourth installment of Global Perspectives on AI Ethics:
- Celina Bottino, Project Director at the Institute for Technology & Society of Rio de Janeiro (ITS Rio);
- Fabro Steibel, Executive Director at ITS Rio;
- Yves Poullet, Co-Chairman of the Namur Digital Institute (NADI) and Emeritus Professor at the University of Namur; and
- Julia Stoyanovich, Assistant Professor of Computer Science and Engineering at Tandon School of Engineering, Assistant Professor of Data Science at Center for Data Science, and Director at Center for Responsible AI, New York University.
AI Ethics: Global Perspectives course leads Christoph Luetge, Director of the TUM Institute for Ethics in Artificial Intelligence (IEAI), and Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab, moderated the discussion. This event was hosted as a part of AI Week 2021, a week-long tech festival dedicated to artificial intelligence.
During the panel, participants discussed the differences between AI developments in the Global North and Global South, the responsibilities technologists and engineers have when designing new technologies, and some notable regulation initiatives in Brazil. They also spoke about the importance of universities as a space for multidisciplinary conversations and inter-sectoral collaboration.
Global South vs. Global North: perspectives and opportunities
To open the panel, Stefaan Verhulst asked Celina Bottino and Fabro Steibel what distinguishes the approaches to AI ethics in the Global North and Global South. Fabro noted key differences in priorities around the use and development of AI systems. In the Global North, he said, the focus lies on principles, ethics, and use. In the Global South, he argued, the focus is on access, inclusion, and education.
Celina offered a different perspective. She noted, “The Global South is generally consumers, rather than producers of AI technology. This leads to a higher risk of Global South countries being at a disadvantage because their perspective is not taken into consideration.”
Fabro added that countries in the Global South are using and generating significant amounts of data. This data can support the development of new AI technologies. He noted, however, that these countries still lack the necessary data protection regulation or enforcement to make use of this data appropriately.
“The EU, for example, has strong frameworks for data interoperability, data portability, data sharing and so on. In the Global South, while we have some key principles for data protection, we do not have well developed instruments for data trusts, data sharing, or even a competitive approach to the data ecosystem.”
Where does the responsibility lie? The role of technologists and engineers
Following this discussion, Christoph Luetge invited Julia Stoyanovich to give a brief overview of her lecture Building Data Equity Systems. Julia began by describing what she envisions as an equitable data system:
“We characterize data equity systems as a response that the socio-legal-technical community should have to the issues that we see, where resources are being allocated in ways that are inequitable, and this is done with the help of algorithmic tools.”
Julia stressed the need to make AI systems socially sustainable, providing equality of outcomes rather than equality of treatment. She also highlighted two gaps that society, including technologists, policymakers, and companies, still needs to address in order to reach this goal. First, we need to understand how to make algorithmic systems equity-aware, embedding both legal and policy compliance into these systems. Second, the general public should be more aware of what algorithmic systems can and cannot do.
Most importantly, technologists should be held responsible for ensuring the social sustainability of the kinds of systems they design. As Julia described, “Technologists who are building systems, I think, should be held responsible to ask [themselves] whether they should be building systems of this kind. And if they are, they should make sure that the goal of supporting equity is reflected in the way we ask questions of these systems, and are embedded in the objectives of computation.”
When asked if there is a fundamental role for ethics in regulation, Julia responded by stressing that it is not just about regulation. Ethics involves multiple stakeholders; as a society, we need not only legal interventions but also societal involvement and a broader conversation on the ethical implications of AI.
Assessing high-risk profiling systems
Next, Stefaan invited Yves Poullet to elaborate on his lecture, Profiling in the Age of AI.
AI, Yves said, creates new individual and collective risks through profiling activities, which threaten democracy, social justice, and society at large. These systems can drive surveillance activities or discriminate against home-buyers. Platforms and large companies hold vast amounts of consumer data, and the more data one has, the more powerful one's algorithms will be.
Stefaan asked Yves about the use of AI for security purposes. Yves discussed the use of facial recognition technologies by law enforcement agencies and the threat this use poses to democracy. In the United States, for example, facial recognition has led to wrongful convictions and accusations against citizens.
Then, Yves discussed the EU’s new regulation on AI and its implications for high-risk systems such as facial recognition technologies. Yves said, “Not all AI systems should be treated in the same manner. It is quite clear that each has different risks. And if we have a high risk system, like facial recognition, [the regulation] says, we need to have what we call a priori and continuous assessment by a third party — that’s quite new.”
Looking ahead: conversational AI, the role of universities, data equity in the Global South
The panel closed with questions from the audience. One audience member asked about the ethical implications of conversational AI, including messaging and speech technologies that offer human-like interactions. Julia responded by discussing some issues related to the use of conversational voice technologies.
“There are plenty of discriminatory signals that may seep through, that have to do with people’s cultural background, or disabilities. For example, individuals who are not looking at the camera, may be blind, right? So I think that before we jump at trying to pick up some arbitrary correlations from the data, we need to be asking very, very carefully, what reason we have to believe that these features are actually relevant for the particular task at hand.”
Then, Celina was asked a question about regulation initiatives in Brazil. She noted the need to address an asymmetry of information as both policymakers and the public sector lack an understanding of the technology, making it difficult to form concrete policies on the regulation of data and AI.
Stefaan then posed a question to Fabro and Celina, asking them to relate the idea of data equity systems to the context of the Global South. Fabro said, “In the Global South, 30–40% of people are unconnected, and they do not produce data the way we know how. So, we must connect them, protect them, and make sure the data they provide is returned to them as assets. Also, we need to consider how to provide a good society for those who are connected.”
Stefaan concluded the panel by resurfacing a point Celina made earlier, about the role of universities in opening discussions on the ethical use of AI, asking Yves and Julia to comment on this idea. Yves suggested that universities can play an important role in facilitating multi-disciplinary and multi-stakeholder conversations on AI, bringing together computer scientists, engineers, philosophers, and others. Julia also noted, “any university has a huge role to play, to step also into policy impact, into public engagement and into educating a broader public beyond just students — to include the current and the future practitioners.”