Global Perspectives on AI Ethics Panel #5: Confronting the realities of data subjects, AI governance at the local level, and building an ethically conscious business culture

Aditi Ramesh
Data Stewards Network
5 min read · Jul 1, 2021

AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics, bringing together diverse perspectives from the field to raise awareness and help institutions work toward the more responsible use of AI.

The fifth installment of Global Perspectives on AI Ethics was held on Tuesday, June 22 and featured expert reflections from Susan Scott-Parker, Stefaan Verhulst, and Christoph Luetge.

AI Ethics: Global Perspectives course lead Julia Stoyanovich, Director of the Center for Responsible AI at NYU, moderated the discussion. During the event, panelists reflected on the need to build a deeper, collective understanding of the realities of data subjects, the importance of citizen assemblies and AI literacy programs at the local level, and tech ethics skill development and training within businesses.

Confronting human realities in data

To open the panel, Julia asked Susan Scott-Parker to speak about the growing use of AI-powered human resource technologies and the threat they pose to people with disabilities.

Susan described the current state of AI development and its failure to consider people with disabilities:

“If we look at the actual AI-powered assessment tools, neither their content, their usability nor their accessibility have been validated or tested on job seekers or employees with disabilities. This unacknowledged bias then combines with the unfair, indeed, inherently discriminatory treatment, triggered by how the automated processes actually work in practice.”

Susan then described the four-dimensional nature of this problem: 1) data fed into AI systems are often biased against people with disabilities; 2) standardized AI development processes trigger unfair treatment; 3) HR buyers, diversity managers, AI developers, and other stakeholders are not well-versed in disability discrimination issues; and 4) developers too often ignore the impacts of AI tools on people with disabilities.

She then concluded by describing potential solutions to remedy this situation:

“We need to confront human-reality data. The AI industry needs to understand the facts and be honest about the safety of their products. One in five women have a disability, at least one in three people aged 50 to 64 will have a disability, regardless of ethnicity. […] Start to look at what you do factor into the machine learning process — if what you’re doing works for the outliers, it works for everyone in the middle.”

The importance of localized AI governance

Following this discussion, Julia invited Stefaan Verhulst to give a brief overview of his lecture on AI Localism. AI Localism, a term coined by Stefaan Verhulst and Mona Sloane, refers to the actions taken by local decision-makers to address the use of AI within a city or community. Stefaan described how cities around the world are increasingly taking a leadership role in designing and implementing governance mechanisms to ensure the responsible use of AI:

“Cities these days are seen as the lab of experimentation in the AI governance space that can inform what actually happens at the national level. […] We have seen that cities have stepped up and are developing their own principles; for instance, Montreal has their own kind of AI principles, as well as Amsterdam.”

Stefaan then discussed important ways in which cities are re-imagining the governance of AI. Citizen (or data) assemblies, for example, engage citizens and residents in decisions about the use of their data and the development of new technologies. AI literacy programs can demystify AI and educate residents about its responsible use. And some cities, Stefaan said, have designated committees or task forces to convene broad interest groups around the use and development of these technologies.

However, Stefaan also noted that AI Localism does not come without certain challenges and risks:

“If every city is developing their own policies, then we may see massive fragmentation on how technologies are being developed. We may also see capture by vendors. And that’s related to the other challenge that cities have, which is capacity. Do they have the capacity to really steer this beyond listening to vendors on what should happen, and what’s the best way forward.”

Ethical AI and skill development in business settings

Next, Julia invited Christoph Luetge to elaborate on his lecture, “Will the Market Deliver? A Business Ethics Perspective on AI.”

Businesses, Christoph said, need to consider the ethical risks associated with the use of AI; if these risks are not addressed early on, they can have far larger consequences down the line.

“It will not be enough to have an AI ethics philosophy, and put it on a poster on your wall. Or simply just appointing someone as the Chief AI officer, which might well be a good thing to do, is not sufficient.”

Christoph stressed the need to integrate tech ethics skill development and training within teams, and designate roles or functions that can speak to the ethical and legal implications of using AI and other data-driven technologies within business settings.

“There might also not be just one global answer to these questions,” Christoph cautioned. “We need to draw perspectives from the global community, and might end up with a pluralistic set of answers.”

Next steps: Interpersonal training, industry-wide perspectives, and building top-down, ethically conscious business cultures

To conclude, Julia reframed a few questions from the audience. She invited Susan, Stefaan, and Christoph to answer: What is the scope of this problem? How broadly or narrowly do we want to interpret AI?

Susan responded first, emphasizing that we need to focus on how much agency we give to AI-powered technologies and on the level of human oversight they receive. There is also a need for more personal conversations on these issues, Susan said:

“I argue that many of the developers, many of the influencers, the kings and queens of the biggest companies out there, I don’t think they’ve ever met a person with a disability, they just don’t have the personal confidence to position this as a professional, and economic, and ethical priority.”

Next, Stefaan commented on the need for broad industry-wide perspectives on AI. “It is important to actually have industry associations be part of this,” he said. “If you leave it up to just a company, and not have an industry-wide kind of perspective, then again, you might have a limited kind of intervention that might not be as successful as one would hope across the industry.” Stefaan also emphasized the need to proactively, instead of reactively, develop AI policies and practices that benefit society at large.

Lastly, Christoph spoke about the need to institute a top-down, ethically conscious culture within businesses:

“There have been a lot of changes in corporate culture recently; including managers’ training. However, top managers are signaling that they are still behind this approach. There is a lot of communication to be done, and the role of the whistleblowers needs to be strengthened.”
