Building Knowledge about Generative AI with Mobile Populations

Petra Molnar
Berkman Klein Center Collection
Nov 2, 2023


Border wall along El Camino Del Diablo, the Devil's Highway. Photo by Petra Molnar, February 2022.

Like a wound in the landscape, the rusty border wall cuts along Arizona's Camino Del Diablo, the Devil's Highway. Once the pride and joy of the Trump Administration, this wall is once again the epicenter of a growing political row. President Biden's May 2023 lifting of the Trump Administration's Covid-era Title 42 policy came with the introduction of new hardline policies preventing people from claiming asylum in the United States, undergirded by a growing commitment to a virtual "smart border" extending far beyond the country's physical frontier.

Racism, technology, and borders create a cruel intersection. From drones used to prevent people from reaching the safety of European shores, to artificial intelligence (AI) lie detectors at airports worldwide, to planned robodogs patrolling the US-Mexico border, people on the move are caught in the crosshairs of an unregulated and harmful set of technologies. These projects, touted as ways to control migration, bolster a lucrative multi-billion-dollar border industrial complex. Coupled with increasing international environmental destabilization, more and more people are ensnared in a growing global surveillance dragnet. Thousands have already died. The rest experience old and new traumas provoked and compounded by omnipresent surveillance and automation.

What do new tools like generative AI mean for this regime of border control?

I have spent the last five years tracking how new technologies of border management — surveillance, automated decision-making, and various experimental projects — are playing out in migration control. Through years of travel, from Palestine to Ukraine to Kenya to the US-Mexico border, the power of comparison has shown me time and again how these spaces allow frontier mentalities to take over, creating environments of silence and violence.

In this era of generative technologies, this work is underpinned by broader questions: Whose perspectives matter when talking about innovation, and whose priorities take precedence? What do critical representation and meaningful participation look like — representation that foregrounds people's agency and does not contribute to the "poverty porn" so common in representations coming from spaces of forced migration? And who gets to create the narratives and generate the stories that underpin the foundations of tools like GPT-4 and whatever else is coming next?

Clockwise from top left: High-tech refugee camp on Kos Island in Greece; surveillance tower in Arizona; two women cross the Ukraine-Poland border; memorial site in the Sonora desert; protest against new refugee camp on Samos; Calvin, a medical doctor, holds keys from his apartment in Ukraine after escaping across the Hungary border. Photos by Petra Molnar, 2021–2022.

Tools like generative AI are socially constructed by and with particular perspectives and value systems. They are a reflection of the so-called Global North and can encode and perpetuate biases and discrimination. In August of this year, to test where generative AI systems currently stand, I ran a simple prompt through the Canva and Craiyon image generation tools: "What does a refugee look like?"

Grid of Craiyon-generated images of "refugees," dominated by forlorn and emaciated faces of Black children and women, some wearing headscarves.

Grid of Canva-generated images of "refugees," dominated by vaguely Middle Eastern people smiling in expectation of being rescued.

What stories do these images tell? What perspectives do they hide?

It is telling that for generative AI, the concept of a “refugee” elicits either forlorn and emaciated faces of Black children or else portraits of doe-eyed and vaguely Middle Eastern people waiting to be rescued. When I sent these depictions to a colleague who is currently in a situation of displacement and identifies as a refugee, she laughed and said, “I sure as hell hope I don’t look like this.”
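
This kind of audit is easy to repeat and extend. The sketch below shows one way to do so programmatically; it uses the open-source Stable Diffusion model via the Hugging Face diffusers library as a stand-in, since Canva and Craiyon were queried through their web interfaces, and the model checkpoint, sample count, and file names are illustrative assumptions rather than the exact setup behind the images above.

# A minimal sketch of a prompt-based bias audit, assuming the open-source
# Stable Diffusion v1.5 checkpoint via Hugging Face diffusers as a stand-in
# for the web-based Canva and Craiyon tools used above.
import torch
from diffusers import StableDiffusionPipeline

# Load the model; float16 keeps GPU memory use modest.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # switch to "cpu" (and drop float16) if no GPU is available

prompt = "What does a refugee look like?"

# Generate a small batch of samples so recurring visual tropes become visible.
for i in range(6):
    image = pipe(prompt).images[0]
    image.save(f"refugee_sample_{i}.png")

Saving a fixed batch of samples per prompt makes the recurring tropes easy to compare side by side, across different models and over time.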

Generative AI is also inherently exploitative. Its training data are scraped and extracted, often without the knowledge or consent of the people who created the data or who appear in it. The menial tasks that allow the models to function fall on underpaid labor outside of North America and Europe. The benefits of this technology do not accrue equally, and generative AI stands to replicate the vast power differentials between those who benefit and those who are the subjects of high-risk technological experiments.

How can we think more intentionally about who will be impacted by generative AI and work collaboratively, and rapidly, with affected populations to build knowledge?

The production of any kind of knowledge is always a political act, especially since researchers often build entire careers on documenting the trauma of others, “stealing stories” as they go along. Being entrusted with other people’s stories is a deep privilege. Generating any type of knowledge is not without its pitfalls, and academia is in danger of falling into the same trap with generative AI research: creating knowledge in isolation from communities, failing to consider the expertise of those we’re purporting to learn from. How can researchers and storytellers limit the extractive nature of research and story collection? Given the power differentials involved, research and storytelling can and should be uncomfortable, and we must pay particular attention to why certain perspectives in the so-called Global North are given precedence while the rest of the world continues to be silenced. This is particularly pertinent when we are talking about a vast system of increasingly autonomous knowledge generation through AI.

The concept of story and knowledge stewardship may be helpful here. Drawn from Indigenous learnings, it recognizes that the storyteller is not exempt from critical analysis of their own power and privilege over other people's narratives and should instead hold space for stories to tell themselves. This type of framing continually places responsibility at the center (see, for example, the work of Canada's First Nations Information Governance Centre). Storytelling and sharing is also a profound act of resistance to the simplified and homogenized narratives that are common when there is a power differential between the researcher and their topic. Established methods of knowledge production are predicated on an outside expert parachuting in, extracting data, findings, and stories, and using their westernized credentials to further their career as the expert.

True commitment to participatory approaches requires ceding space, meaningfully redistributing resources, and supporting affected communities in telling their own stories. And real engagement with decolonial methodologies requires an iterative understanding of these framings, a re-framing process that is never complete. By decentering so-called Global North narratives and not tokenizing people with lived experience as research subjects or afterthoughts, researchers can create opportunities that recognize their privilege and access to resources — and then redistribute those resources through meaningful participation, creating an environment for people to tell their own stories. It is this commitment to participatory approaches that we need in generative AI research, especially as it intersects with border control technologies.

Headshots of Veronica Martinez, Nery Sataella, Wael Qarssifi, Simon Drotti, and Rajendra Paudel, captioned "Meet Our 2022–2023 MTM Fellows."

One small example is the Migration and Technology Monitor project at York University's Refugee Law Lab, where I am Associate Director. The Migration and Technology Monitor is a platform and an archive focused on migration, technology, and human rights. Our recently launched fellowship program aims to create opportunities for people with lived experience to meaningfully contribute to research, storytelling, policy, and advocacy conversations from the start, not as an afterthought. Among our aims is to build a collaborative intellectual and advocacy community committed to border justice. We prioritize opportunities for participatory work, including the ability for affected communities themselves to pitch unique and relevant projects. Veronica Martinez, Nery Sataella, Simon Drotti, Rajendra Paudel, and Wael Qarssifi are part of our first cohort of fellows from mobile communities from Venezuela to Mexico to Uganda to Nepal to Malaysia. Our hope is that the fellowship creates a community that provides spaces of collaboration, care, and co-creation of knowledge. We are specifically sharing resources with people on the move who may not be able to benefit from funding and resources readily available in the EU and North America. People with lived experiences of migration must be in the driver's seat when interrogating both the negative impacts of technology and the creative solutions that innovation can bring to the complex stories of human movement, such as using generative AI to compile resources for mobile communities.

Participatory methodologies that foreground lived experience as the starting place for generating knowledge inherently destabilize established power hierarchies of knowledge production. These approaches encourage researchers and tech designers to critically interrogate their own positionality and how much space their own so-called expertise takes up in the generation of knowledge at the expense of other realities. These framings and commitments are paramount, especially in contexts with fraught histories and vast power differentials: for example, where mobile populations are cast as abject and feared others and where generative AI models learn from these realities. This is especially pertinent for scholars, technologists, and researchers who are themselves part of the so-called Rest of World: a re-imagination of expertise and knowledge must come from the ground up, and any tools that are created must recognize and fight against these power differentials.

It is through participatory methodologies that we may come a step closer to seeing "a world in which many worlds fit," a phrase which, as my BKC colleague Ashley Lee reminds us, comes from the Zapatista Indigenous resistance movement — a world where "nothing about us without us" moves beyond an old community organizer motto towards a real commitment to participation, story stewardship, and public scholarship that honors and foregrounds lived experience.

Thank you to Madeline McGee for her suggestions which greatly improved this piece and to Sam Hinds for her careful edits.

This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.


Petra Molnar is a lawyer and anthropologist specializing in migration, technology, and human rights at York University and Harvard's Berkman Klein Center.