Global Perspectives on AI Ethics Panel #3: EU’s new AI regulation, the use of social robots in care settings, evaluating training datasets, and the need for more independent research on AI Ethics

Aditi Ramesh
Data Stewards Network
Apr 29, 2021

AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, The Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics. The course brings together diverse perspectives from the field of ethical AI to raise awareness and help institutions work toward more responsible use of the technology.

On Wednesday, April 21st, the coordinators of AI Ethics: Global Perspectives hosted a live panel to discuss and debate topics presented in this month’s course modules. Panelists answered questions such as: What is the role of social media platforms in moderating content online? How can organizations implement ethical AI practices across their work? What are some ethical considerations that must guide the design, deployment and use of social robots in care settings? How can we learn from music and physics to investigate the fundamental “relativity” of AI? Participating AI experts included:

  • Ken Ito, Associate Professor in the Graduate School of Interdisciplinary Information Studies at the University of Tokyo;
  • Richard Benjamins, Chief AI and Data Strategist at Telefónica;
  • Serge Abiteboul, Senior Researcher at the French Institute for Research in Computer Science and Automation;
  • Shannon Vallor, the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh.

Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab, and Julia Stoyanovich, Director of the Center for Responsible AI at NYU, moderated the discussion. During the event, panelists discussed the European Union’s new proposal to regulate artificial intelligence and its implications for high-risk AI applications. They also spoke about the responsibility of social media companies in moderating content on their platforms, as well as the need for more independent research on the ethical implications of AI and technology.

Risks, limitations, and advantages in deploying social robots in care settings

To open the panel, Julia Stoyanovich asked Shannon Vallor about the risks, limitations, and advantages in using social robots in care settings. Shannon highlighted two primary challenges to the development and use of ‘carebots’. First, care facilities are unpredictable and AI is still not sophisticated enough to respond to this unpredictability. Second, and more importantly, social robots today lack an understanding of human relationships and emotions, and therefore may not be able to provide the same level of response to a patient that a human provider would.

Shannon then described the potential of ‘carebots’ to augment caregiving, enable greater independence for recipients of care, and provide relief for human caregivers who may be overstretched and undersupported. They represent another example of how AI can provide more value as an extension of human capacity rather than as a replacement for it.

With these points in mind, Shannon reviewed some ethical considerations that she believes should guide the design and deployment of ‘carebots’: “The design of computational systems that humans interact with should be participatory and driven by the people who are going to be affected by these systems,” she said. “We also need to understand cultural differences globally, and how people want to relate to robots and machines. And so we need to avoid a one-size-fits-all approach.”

Do Carebots Care? The Ethics of Social Robots in Care Settings by Shannon Vallor

The need for greater scrutiny and evaluation of training datasets

Next, Ken Ito reflected on Stefaan Verhulst’s questions around algorithmic bias and relativity in AI. Ken emphasized how developments in history, science, and music can inform our understanding of algorithmic bias. Just as every precise musical fluctuation contributes to a final composition, every data point fed into an algorithmic system affects the ultimate product and, in turn, its impact on people.

Ken concluded by noting that, in order to develop unbiased algorithms, we need to carefully monitor and evaluate the datasets being fed into these systems.

“Fundamentally, [algorithmic bias] is a data bias problem. Every statistic in the mother data set should be precisely measured, announced, and overcome. […] A neutral evaluation of the mother data set, by a profit-neutral third party, is key.”

Algorithmic Bias and Relativity in AI by Ken Ito

The role of AI in assisting with online content moderation

Following this, Julia prompted Serge Abiteboul to speak about the importance and role of AI in online content moderation. Serge first provided some background, noting that interpretations of freedom of speech vary internationally. As a result, there are no standardized procedures or requirements for content moderation, and social media companies are rarely held responsible for content posted to their platforms, especially in countries such as the United States.

Serge described how ambiguities in online language (including slang and varying senses of humor) and the difficulty of relying on human labor to parse thousands of pieces of online content a day present challenges for content moderation. Even if platforms train algorithms to identify and remove harmful content, Serge noted, there are no clear definitions or boundaries of what such content consists of.

Serge then spoke about the potential role of algorithms in assisting with content moderation. He noted, “It is difficult to get precise measurements [of harmful speech], and [social media] companies are keeping these processes very opaque. Algorithms are getting better, but they are not perfect.” That said, more robust applications of AI could still aid with online content moderation as we continue to evolve and iterate on AI systems.

Content Moderation in Social Media and AI by Serge Abiteboul

EU’s regulation on Artificial Intelligence and defining modern-day AI

In his lecture, Richard Benjamins proposed guidelines to support organizations seeking to use AI responsibly. On the day of the panel, the EU released a proposal to regulate the use of AI. Richard commented on the importance of this regulation and its implications for organizations in the EU:

“With the new regulation, there is actually a specified process companies and governments must follow if they want to deploy a high-risk AI system. And indeed, you also have to [follow this process] before you announce [your product] to the market. So this is going to have quite a significant impact for high-risk AI applications.”

In order to appropriately regulate technology, we must first define its terms and purpose.

Later in the panel, Julia asked participants to take a step back and clarify what we mean when we talk about AI. While most panelists agreed that defining AI is less important than addressing its ethical concerns and risks, Richard noted some particular qualities of modern-day AI systems.

“First, most AI systems today are completely data driven, which means that the machine learns patterns and makes mistakes. If the system is learning once it is in operation, and it continues to learn without any supervision, then the system might end up doing something that nobody had foreseen. […] Some algorithms are difficult to understand, even for programmers. And therefore, there are all kinds of ethical questions within these kinds of systems.”

The responsible use of AI in organizations by Richard Benjamins

Looking ahead: Building trust within socio-technical systems and the need for more research on AI Ethics

Richard and Serge both discussed the need for more directed funding and research on ethics, not only for AI, but for technology more broadly. Such research, they argued, ought to be independent of commercial and private-sector interests in order to generate more public-facing knowledge on the benefits and risks of AI.

In her concluding remarks, Shannon spoke broadly about the need to build trust within socio-technical systems, particularly in care settings. She said, “Trust means putting something of ours that we care about in the hands of someone who can care for it, who we trust to care for it. And because machines today can’t care, we must never make the mistake of thinking that we trust or ought to trust machines, as opposed to socio-technical systems, of which machines can be one part.”

A full video of this panel can be found here.

We will be releasing new modules in early May at https://aiethicscourse.org/. To receive monthly updates on the course, please register using the form here.
