Interview with Damini Satija

Published in EAAMO · 6 min read · Nov 10, 2023

Interview conducted by Mackenzie Jorgensen and the Conversations with Practitioners Working Group

Damini Satija is a human rights and public policy professional working on data and Artificial Intelligence (AI). She is Head of the Algorithmic Accountability Lab and a Deputy Director at Amnesty Tech, where she and her team research the increasing use of algorithmic systems and AI in the provision of welfare and essential services, and the harms these systems inflict on marginalized communities worldwide.

Our Mechanism Design for Social Good (MD4SG) working group, Conversations with Practitioners, had the pleasure of interviewing Damini. In this blog, we discuss the main insights from the interview for our community of researchers.

What does Amnesty Tech’s Algorithmic Accountability Lab work on?

Amnesty International logo

The multidisciplinary lab that Damini leads at Amnesty was founded in response to the increasing use of algorithmic systems in welfare provision (e.g., housing, education, and healthcare). The provision of key public services is already flawed: by default, it does not support those most in need or most reliant on these services, but rather operationalizes policy and political goals rooted in policing the poor and enacting austerity measures. The lab focuses on the following questions:

  1. What does the digitization and increasing automation of the social safety net mean for the most marginalized groups?
  2. How does that digitization and automation entrench and scale existing societal inequity and discrimination?

The lab does an assortment of work, ranging from research and investigations to advocacy, campaigning, and litigation. Its team of seven brings skill sets from policy, legal, data science, campaigning, and journalism backgrounds.

The lab’s work is separated into two phases over at least three years. Phase 1 focuses on the EU, and more specifically on advocacy around the EU’s AI Act, the world’s first comprehensive legislation to regulate AI, which is expected to be finalized at the end of this fall. Just as the EU’s GDPR set the bar for data protection across the globe, the lab expects the EU AI Act to set the benchmark for global AI regulation. However, the team acknowledges that their advocacy approach will need to be adaptive and responsive to developments in AI regulation beyond the EU, while also putting forward a rights-centered agenda for what AI regulation should look like. They have priority areas (e.g., biometric surveillance and social scoring) where they are calling for certain technology uses to be banned in the EU AI Act; another priority area in the Act is migrant and refugee rights. There is much contention among those deciding which AI systems should be partially or fully prohibited, and what transparency-related AI standards should look like.

Damini and her team are also very aware that the EU AI Act is by design Eurocentric, and that if it is exported to other countries it could impose a “Euro-dominance that won’t address the unique human rights concerns of every country”, as Damini argues. The lab published an explainer video for the EU AI Act (which came out after this interview was held), and Damini recommended European Digital Rights (EDRi) as a helpful resource for keeping track of the AI Act’s progress. On the investigative side, they are researching case studies of government use of automation in EU member states.

Biometric surveillance. Image taken from: EU: AI Act must protect all people, regardless of migration status — European Institutions Office (https://www.amnesty.eu/news/eu-ai-act-must-protect-all-people-regardless-of-migration-status/)

The lab’s second phase will have a larger, global scope. They will conduct case study investigations and research similar to Phase 1, but take a more globally critical and decolonial approach, accounting for how algorithmic welfare systems reproduce global power dynamics. In general, the Amnesty model orients research around advocacy and campaign calls so that a clear action plan is connected to the research.

How do they work with other groups to inform their research and advocacy?

Damini highlighted that her team cannot do their work in isolation. While they have expertise in human rights and algorithmic harms, they acknowledge there is much they are not experts in, such as the lived experience of people impacted by these algorithmic systems. As Damini argues, lived experience is crucial to hear because affected people can speak to the regional and community-level social contexts, as well as the political and economic contexts, in which these systems are deployed. The research methodologies in their case studies help them listen to impacted people. Their goal is to bring their expertise to collaborations with those who have the regional and context-specific expertise around economic and social justice issues. To that end, partnerships are a crucial part of their research and their broader work. They are currently planning sessions and forums where they can listen to what people are experiencing from the digitization of welfare services and identify which levers of change they can use. This listening helps them build more impactful strategies, advocacy, and research, rather than grounding that work in their own assumptions.

What has been the hardest challenge so far in the work and as a leader of the recently formed lab?

Damini has been the head of the lab for a year and a few months (as of February 2023). The biggest challenge is grappling with the lab’s big global mandate and figuring out where to focus: there is no shortage of problems within their scope, so framing the work is tricky. She emphasizes that the lab’s placement within Amnesty is unique because it can make a real impact and elevate the voices of those affected by algorithmic welfare systems. Along those lines, she is aware that her team does not want to be extractive when telling the stories of impacted communities. They aim to establish themselves as a center of resources and expertise in this space by sharing their learnings and approaches publicly, and by working together with civil society groups, community organizations, and academics.

Another challenge from the very beginning has been figuring out how the team can all speak the same language. Since many of them come from different disciplines, they might have different words for the same thing. Team members often have different methodological processes and priorities, so it is crucial to bring these into conversation with one another to achieve the vision of an interdisciplinary team.

What words of advice does Damini have for researchers working on algorithmic systems?

When it comes to regulating algorithmic systems and AI, Damini would like to work with academics, or to see more research, on articulating red lines for which uses should be prohibited, taking into account legal standards, human rights, social justice, and the technologies themselves. Further, researchers in this sphere can take a more critical approach to the field’s Global North skew. Damini said that researchers studying or developing algorithmic systems should remember that

“algorithmic systems are not completely divorced from what has happened previously in technology development,” and that these systems are “a continuation of the same sort of incentive structures and business models that have led to technological development in the past.”

Damini sometimes finds that researchers see algorithmic systems as something entirely new, which results in:

“divorcing [those systems] from the history which has produced technology in the past that has disproportionately impacted marginalized groups.”

Damini highlighted that the

“institutional societal problems reproduced in tech…are not new and [are] a continuation from the past.”

Damini encourages researchers to build out research agendas with that lens.

We would like to thank Damini again for her nuggets of knowledge, as well as the Conversations with Practitioners members for their engagement and thoughtful questions.

The February 1, 2023 interview with Damini was led by Mackenzie Jorgensen, and this blog post was edited by Mackenzie, Wendy, Kristen, Sandro, Ana, and Jessie.

EAAMO is a multi-institutional, interdisciplinary initiative working to improve global access to opportunity. Learn more at eaamo.org