How to ‘Do Good’ with AI and Avoid Doing Harm… 7 Actions for Big Tech, Governments and Civil Society

The Machine Race by Suzy Madigan
11 min read · Aug 13, 2023


Often performative, the colourful drama surrounding the AI debate is more evocative of Mexican lucha libre wrestling than computer science. “In the pink and gold corner, we have AI as An Existential Threat to Humanity!” Riotous boos. “And in the green and silver corner, we have AI as The Panacea to Global Challenges from Climate Change to Conflict!” Deafening cheers.

I examined the furore about existential risk in the last TMR blog. Now, in the first of two articles looking at AI and ‘doing good’, I’ll explore its crossover with emergency relief, poverty reduction, and human rights work — my own wheelhouse — and propose a fresh model for artificial intelligence.

Lucha libre. Holy Freedom merchandise. Photo credit: Suzy Madigan

As technologists race to build AI systems, a new world architecture is rapidly being constructed. AI systems are tools significant enough that they should become common goods, which means tech companies, policymakers, and Global South civil society must urgently work together. We need a new model for AI design, deployment and governance — one which actively promotes the leadership of people in the Global South in this societal reordering.

A model that is radical in its simplicity, because it focuses on sustained human-to-human conversations, where technologists are exposed to the lived experiences of people with very different backgrounds to their own. A model that celebrates the exchange of diverse knowledge and skills — a fundamental process if AI is to meet its idealistic ambitions. A model which prioritises a redirection of funding, tech skills and dialogue platforms so that people in the Global South can determine what their AI future looks like.

Civil society’s critical contribution to AI development

Because it is self-evident, it is already a cliché to say that AI systems are transforming the way many of us communicate, work, learn, access social goods, conduct war and perceive information. AI is entering every area of human life and each domain civil society works across, from climate action to health, education, agriculture, social protection, and even humanitarian response itself.

Civil society organisations (CSOs) from the Global South should be enabled to play a critical leadership role in shaping the direction of emerging technologies. AI can feel overwhelming, but most people within civil society do not need to be experts in complex AI technology — they only need to understand AI’s potential social impacts. The value CSOs offer is their own expertise from working with communities.

7 reasons why a new AI ecosystem model is needed, and how to get there

1. ‘Doing Good’ isn’t as easy as it sounds

For AI optimists such as the CEO of OpenAI, the company behind ChatGPT, an AI-dominated world will reduce human need “if we as a society manage it responsibly.” Meta’s goal for AI is to create products “that benefit all people”, while Google DeepMind wants AI to address some of the world’s most pressing problems.

If this sounds familiar, look no further than the vision statements of many international NGOs (INGOs) working in disaster relief and poverty reduction. These are commendable ambitions. And if you’re an aid worker from the Global North (like me) and you’re shifting in your chair, it might also remind you of an uncomfortable truth that INGOs are now accepting: in our eagerness to pour benevolence on the Global South, often to address problems to which our countries have contributed, we’ve been operating a colonial model for decades.

The international humanitarian sector is finally realising (hearing?) that we need to cede power and funding to affected communities so they can define their own priorities and locally-led solutions. Initiatives like Pledge For Change are trying to accelerate this shift. But it’s not only an issue of rights, fairness and respect — ‘solutions’ dreamed up elsewhere just won’t be as effective, however well-meaning.

So too with AI design and deployment.

What to do #1: To avoid replicating the mistakes of the colonial past and present, we must urgently build bridges connecting Global South communities to the tech companies designing and deploying AI, and the policymakers framing its governance. Civil society, the private sector and governments from the Global South must hold leadership positions across the multi-layered settings where the governance, design, deployment, and use of AI systems are being decided. That means the private, political, academic and civil domains.

International NGOs and the UN, remaining conscious of their own colonial history, can help catalyse these relationships, amplify Global South voices (by getting them in the room), and advocate for Global South leadership.

Time to build bridges. Photo credit: Suzy Madigan

2. Contexts are different outside San Francisco, Silicon Valley, London and Europe…

Right now, AI systems, and plans to govern them, are being overwhelmingly created in the Global North without meaningful, ongoing participation of people in the Global South. This raises multiple risks that civil society can help tech companies address if they truly aim to build inclusive, ethical, safe AI that benefits all.

Effective aid workers know that real social impact has to be locally-led, informed by an analysis of root causes, and the political, economic, and social dynamics moulding a context. Responders also need to analyse individuals’ varying experiences within that context, influenced by attitudes to their age, gender, ethnic background and more.

Smart engineers working in Global North AI development labs are skilled at what they do. Yet, understanding the challenges of persecuted minorities in authoritarian states or the diverse experiences of the world’s 108 million forcibly displaced people isn’t their expertise. AI designed out of context, by people lacking insight and first-hand experience of a situation, risks being irrelevant at best, harmful at worst — just like a one-size-fits-all humanitarian programme transplanted from a Yemen refugee camp to a Ukrainian city.

AI systems need to be contextually and culturally informed, trained on local datasets, and respond to localised priorities.

Skies over Odessa, Ukraine. April 2023. Photo credit: Suzy Madigan

What to do #2: Global investment must flow to the Global South, fostering talent and enterprise, so that those countries can develop their own technologies rooted in local realities.

Simultaneously, tech companies must include funding within AI development budgets to establish equitable partnerships with civil society actors. CSOs can share local insights and feedback, alongside proven approaches: analyses that uncover the gendered impacts of poverty and disasters, political economy analyses, and conflict prevention methodologies.

An intersectional gender analysis of the 2020 Beirut Port Explosion by local CSOs, CARE International and the UN helped design a tailored humanitarian response. Photo credit: Suzy Madigan

3. Overcoming bias — Building systems that work for different communities

It is widely acknowledged that AI systems contain bias. This must be addressed if AI applications are to be fair and appropriate for diverse and marginalised communities.

Whether we’re conscious of it or not, we all have biases and a world view shaped by our own experiences. That’s reflected in our work and social interactions, and it’s mirrored in what appears online. AI can be trained on biased and unrepresentative data, such as text scraped from the internet (think generative AI like ChatGPT).

Machine learning models might rely on biased past decisions; data scientists can embed their own world views into systems. Proxies used to determine risk (e.g. of defaulting on a loan) may be socially unjust, while unrepresentative or skewed data can be both dangerous and discriminatory for predictions on health, recidivism, a right to welfare or other social goods. (See Gender Shades for trailblazing work by Timnit Gebru and Joy Buolamwini).
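To make this concrete, here is a minimal sketch (in Python, using pandas) of the kind of audit a civil society partner might request: comparing a lending model’s approval rates and false rejection rates across demographic groups. The dataset, column names and groups are invented placeholders, not any real system.

```python
import pandas as pd

# Hypothetical audit data: each row is one loan decision made by a model,
# with the applicant's demographic group, the model's decision, and the
# actual repayment outcome. All values are invented for illustration.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],   # model decision
    "repaid":   [1, 0, 1, 1, 1, 0, 1, 1],   # ground truth
})

# Demographic parity check: how often is each group approved?
print(df.groupby("group")["approved"].mean())
# A: 0.75, B: 0.25; a large gap that demands explanation

# Error-rate check: among applicants who did repay, how often
# does the model wrongly reject each group?
would_repay = df[df["repaid"] == 1]
print(1 - would_repay.groupby("group")["approved"].mean())
# A: 0.33, B: 0.67; qualified B applicants are rejected twice as often
```

Even this toy check surfaces the pattern the research above documents: similar outcomes in the ground truth, very different treatment by group.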

Bias can also come from an absence of data. Lower-income countries that are often worst affected by climate disasters can lack the data collection infrastructure needed to feed ‘AI-for-climate’ applications. This risks inequalities in how climate-relevant AI is designed and distributed.

For disaster risk management, which aims to help societies prepare for, mitigate or recover from the impacts of climate change or natural disasters, bias in geospatial data can skew models, particularly when well-meaning algorithm designers sit far from affected communities.

What to do #3: Before AI systems are relied upon for life-and-death decisions on the strength of mathematically persuasive predictions, input data must be interrogated (did it capture gendered nuances? Data from hard-to-reach groups?). The people who will be affected by those decisions must be involved in design processes, whether through steering committees, focus groups, or other feedback mechanisms. Human-centred AI institutes and university departments also have a key role linking with Global South universities. And again — Global South communities must be empowered to build their own technologies.
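As one hedged illustration of what ‘interrogating input data’ can mean in practice, the sketch below compares who appears in a training dataset against a population baseline such as a census. All group labels and shares are invented for illustration.

```python
import pandas as pd

# Hypothetical shares of each group in the training data versus in the
# population the AI system will serve (e.g. from a census).
# All figures below are invented for illustration only.
data_share = pd.Series({
    "urban men": 0.45, "urban women": 0.35,
    "rural men": 0.12, "rural women": 0.08,
})
population_share = pd.Series({
    "urban men": 0.25, "urban women": 0.25,
    "rural men": 0.25, "rural women": 0.25,
})

# Representation ratio: 1.0 means a group appears in the data in
# proportion to the population; values well below 1.0 flag the
# hard-to-reach groups the text warns about.
ratio = (data_share / population_share).sort_values()
print(ratio.round(2))
# rural women    0.32  <- most under-represented
# rural men      0.48
# urban women    1.40
# urban men      1.80
```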

Humanitarian response to Cyclone Kenneth, Mozambique. Photo credit: Suzy Madigan

4. Navigating ethical dilemmas

Artificial intelligence poses vital ethical and safety questions which CSOs are well-placed to help navigate. Humanitarian and development actors face frequent moral dilemmas in trying to ‘do good’ in complex settings (Hugo Slim’s Doing The Right Thing remains a classic primer), from selecting recipients of aid when resources are limited to providing assistance under authoritarian regimes. Interventions carry risk, so informed risk analysis is a fundamental element of humanitarian project planning. Where programmes target vulnerable populations, particularly within fragile or conflict contexts, it’s even more critical.

What to do #4: Just as civil servants get seconded to CSOs to learn ‘in the field’, so too should technologists spend time inside CSOs to understand the real-world impact of emerging technologies and their risks in different settings (convenient biometric payment systems? Not great for human rights defenders living under surveillance). Simultaneously, Global South civil society should be invited to advise on social impact within AI companies while receiving training and skills from company technologists.

Contexts are more complicated outside the research lab. Jerusalem cityscape. Photo credit: Suzy Madigan

5. The world’s unequal — the gap mustn’t widen

World inequality is at a record high — the richest 1% now enjoy more wealth than the rest of the globe combined. To prevent inequality widening further, tech companies must work with global CSOs to understand AI’s nuanced social implications.

The global digital divide is a chasm — almost 3 billion people (37% of the world) have never used the Internet, and 96% of them live in developing countries. Global investment should flow to the Global South to increase citizens’ digital literacy and train homegrown technologists. The economic and social benefits of AI will continue to accrue in the Global North unless the structural issues that leave the Global South lagging behind in ‘AI readiness’ are addressed.

The potential for AI to drive global GDP growth is often touted as a primary measure of success. This requires caution. We know GDP tells a limited story — it doesn’t capture inequalities (unlike the Gini index), nor other key indicators of wellbeing, like health, opportunities, safety from violence, or autonomy over one’s life and choices (see the Human Development Index and Gender Development Index).
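To see why a headline total can mask inequality, here is a small worked sketch: two invented five-person economies with identical total income (their ‘GDP’) but very different Gini coefficients.

```python
# Two invented five-person economies with identical total income (100),
# so a GDP-style total cannot tell them apart.
equal_economy   = [20, 20, 20, 20, 20]
unequal_economy = [2, 3, 5, 10, 80]

def gini(incomes):
    """Gini coefficient via mean absolute difference:
    0 = perfect equality, values near 1 = extreme concentration."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

print(gini(equal_economy))    # 0.0
print(gini(unequal_economy))  # ~0.65, despite the same total income
```

Same total, radically different distribution: exactly the story GDP alone cannot tell.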

What to do #5: Civil society can ensure the promise of AI is rooted in reality and measured against meaningful indicators. NGOs like CARE International host independent advisory boards composed of civil society actors from Global South nations who shape the agenda. Tech companies could adopt this model, opening dialogue with local human rights defenders, aid workers and service providers to help them design inclusive, appropriate AI systems.

6. Mitigating electoral violence

In 2024, more than 2 billion voters in over 70 countries will go to the polls. The spread of dis/misinformation from rapidly accelerating generative AI could lead to violence if populations don’t accept election results. In already fragile contexts, the risk increases. Where there is existing conflict or inter-community tension, generative AI could escalate violence just as social media platforms have been used to incite genocide (the next AK47?). In many of these contexts, conflict mediation CSOs are working hard to reduce community violence.

What to do #6: Aside from the clear threat to life, instability helps no one, including the private sector operating in affected countries. Tech and other companies, alongside donor governments, should fund projects and provide advisory support to Global South CSOs to counter dis/misinformation around elections.

“Forward for a new Haiti without violence”. Peace messaging in Haiti. Photo credit: Suzy Madigan

7. A role for humanitarian and development actors

In a just world, technologists based in the Global South would be driving (beneficial) AI development at the same rate as the Global North, and the structural barriers preventing that need to be removed. Meanwhile, Global North tech companies should be working directly in partnership with community-based organisations in the Global South.

This is where international NGOs (INGOs) can play a critical role — as brokers and conveners, connecting tech companies and policymakers to partners and communities where they work across the globe to catalyse vital, ongoing conversations. Decolonising aid shouldn’t be a case of INGOs shutting the doors overnight on a global network of partners. It’s about solidarity, facilitating equitable partnerships, influencing wider change and amplifying marginalised voices.

What to do #7: INGOs, the UN and governments should promote dialogue platforms convening representatives from Global South civil society, policymakers and tech companies to determine concrete solutions for inclusive AI. This requires not only meaningful participation of Global South CSOs, but their leadership to determine the agenda and proposed outcomes. At this summer’s UN-hosted AI for Good Summit, where was the wide array of Global South civil society speakers? That has to change. They should be leading debate on topics that directly affect their communities.

Tech companies, governments and international organisations can support CSOs, with funding and logistics, to host and attend regular events and roundtables that establish meaningful dialogue with people from the grassroots of social change.

The way forward

Safe, inclusive AI should be regarded as a common good, notwithstanding the investment in its creation by private companies. And while governments wrangle with regulation of a constantly evolving technology, the responsibility for avoiding harm will always rest with the actors creating and deploying AI systems.

Consulting with civil society is crucial to limit harms and share benefits, but consultation alone risks being performative, much like a lucha libre bout. A more radical power shift is needed, with the Global South in the driving seat.

Currently, the elite nature of AI development means that the promise of ‘doing good’ is too often limited to lofty, generalised ambitions cited by Global North actors at high-level meetings and in the media. It’s time to get down to the detailed work, cross-cultural dialogue, and relationship building that is the only real way for AI to meaningfully benefit all.

Hit ‘Follow’ above to be alerted to new articles from The Machine Race blog. Share your comments, corrections and suggestions here, on LinkedIn, or on Twitter @TheMachineRace. Listen to Suzy Madigan talking about AI and society on the Standard Issue podcast and see the ‘About’ page for the author’s biography. Thanks for reading.
