Global Perspectives on AI Ethics Panel #2: Moving beyond data regimes, collective approaches to data, and deriving contextual standards and policies for AI

Aditi Ramesh
Mar 29

AI Ethics: Global Perspectives is a free, online course jointly offered by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering, The Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and the TUM Institute for Ethics in Artificial Intelligence (IEAI). It conveys the breadth and depth of the ongoing interdisciplinary conversation around AI ethics. The course brings together diverse perspectives from the field of ethical AI to raise awareness and help institutions work towards more responsible use of AI.

On Monday, March 22, 2021, the Global Perspectives on AI Ethics panel series featured a discussion with two AI experts: Professor Jeannie Paterson and Professor Maui Hudson.

Moderated by Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab, and Julia Stoyanovich, Director of the Center for Responsible AI at NYU, Professors Paterson and Hudson discussed collective approaches to data and data rights, as well as the need to move beyond data protection regimes and assume collective responsibility for harms caused to consumers. They also considered how education systems might foster new conceptions of technology.

Collective approaches to data use and access

Maui Hudson opened the panel by describing his work on Indigenous data sovereignty with the Maori population in New Zealand. Founded on the idea that data should be subject to the laws of the nation in which it is collected, Indigenous data sovereignty seeks to restore data rights to Indigenous communities and re-balance power dynamics in the citizen-state relationship.

“[In New Zealand] data sets were being shared by the government with groups outside of the government itself — commercial users, multinational corporations. They were starting to provide services that could make use of big data. And that raised concerns not only for the public at large, but also indigenous communities — about how [corporations] might then make decisions or provide services based on how they analyzed and interpreted information that could have a material effect on the sorts of resources that indigenous communities might be able to get access to.”

Later in the panel, Maui presented one potential remedy to the issues mentioned above — a collective approach to data and data rights. This approach involves enshrining data rights for a community or group of people, relieving individuals of the burden of making informed decisions about the use of and access to their data.

Moving beyond data protection regimes

In her opening remarks, Jeannie Paterson described how poor data collection practices — specifically, the use of consumer data for targeted advertising — restrict consumers' choices and their ability to decide how others use their data.

“If I am only [shown] certain types of products and things, then my worldview is somehow narrowing,” said Jeannie. “My capacity to exercise choices is narrowing and there’s a danger […] that each of us just becomes some sort of replica of the data that’s been created about us.”

Stefaan then reflected on the use of predictive analytics to target or exclude certain groups from participation in the marketplace.

Jeannie echoed this statement, adding that in many neoliberal democracies, where essential services such as housing and medical care are provided through the market, the irresponsible use of AI and other technologies can give rise to discrimination:

“If we’re using predictive analytics to decide who can repay a loan, who can pay their electricity bill, who is a good tenant, then we’re opening the possibility to widespread discrimination.”

Jeannie wrapped up her opening remarks by describing how current data protection regimes, such as the European Union's General Data Protection Regulation, don't account for the issues mentioned above. Many current data protection laws take a consent-based approach, in which the burden is placed on users to make decisions about their data — data that is regularly used for AI development. However, as Jeannie noted, we don't fully understand as a society the implications of giving away our information, making it difficult for users to make conscious and informed decisions about their data. Moreover, a regulatory focus on individual choice distracts attention from the potential responsibilities of data processors to use data fairly, in a manner consistent with reasonable social expectations.

Deriving contextual standards and policies for AI

After these introductions, Julia raised the contextual nature of ethics and the lack of consistent norms across regions, cultures, and jurisdictions. She then asked the panelists how they would unify standards and policies for AI across varying contexts.

Jeannie spoke first about the lack of consistent interpretation in ethical frameworks and policies for technology. She raised the importance of initiatives such as AI Localism, which seeks to guide local decision-makers in addressing the use of AI within their city or community. Projects such as these move away from one-size-fits-all frameworks and give individuals the opportunity to contribute to conversations about technology and the roles it should play in our lives.

“The more we have education and conversation de-romanticizing the power of AI, the more likely we are to develop the complex networks of understanding and ethics” that are needed to guide safe, fair and equitable uses of these technologies.

Looking ahead: moving towards the ethical use of AI

The panel concluded with a question directed toward Maui on governmental and non-governmental initiatives to democratize AI within Maori communities. A participant from the audience asked whether AI is perceived as an opportunity for greater inclusivity.

Maui stressed the importance of capacity building in communities. “Impact is going to depend on whether or not you can build the capacity in those communities to make these [technologies] useful for their purposes,” he said.

Jeannie echoed these remarks. She also emphasized the need for education around the implications of AI.

“I think it’s really important that kids from the ground up understand how technologies work, that there are tools that can be used for good, but also can be used to discriminate and harm people, so that [children] are confident enough to ask questions and challenge the outputs of technology. If we think it’s magic, we probably don’t challenge it.”

Maui concluded by arguing that while education is a necessary long-term goal, people collectively need to assume more responsibility for the harmful implications of data and AI on society.


We will be releasing new modules in early April. To receive monthly updates to the course, please register using the form here.
