Data and AI Development in the Global South:
Ethical Practices, Regulations, and Challenges in India, Sri Lanka, and Latin America
By Vinay Narayan (Aapti Institute), Merl Chandana (LIRNEasia), and Gaston Wright (Civic Compass). Edited by Cat Cortes (Open Data Charter).
This blog was inspired by the session “Data and AI in development challenges”, held at RightsCon in Taipei on 26 February 2025. The conversation highlighted the writers’ projects on artificial intelligence and data, set in different countries and regions, each with human rights and ethical practices at its center. Our panelists were asked to extend that discussion in this blog.
📖 First, we asked each panelist to share the background and key outcomes of their work.
AI in India — Vinay Narayan (AAPTI Institute)
Aapti Institute has been using an intersectional systems approach, coupled with value chain analysis, to understand AI development at large. In the Indian context, we have explored the impact of AI deployment by businesses on the human rights of consumers and workers. This work, focused on finance, healthcare, retail and gig work, highlights the human rights impacts of specific AI deployments and algorithmic mediation. We’ve also developed toolkits that businesses in these sectors can use to mitigate and address these harms.
Key Outcomes
Our work emphasises the value of building AI systems with inclusivity, integrity, fairness and trust at the forefront. A specific area of focus has been skilled work, where we have looked at the impact of Generative AI systems on video developers, setting out both the gains these systems have brought to those in the industry and the ways they have detrimentally impacted them.
We are deeply engaged in understanding the role of data work and the data communities that support such AI systems. We hope to highlight the key role these workers play in the AI supply chain, the notoriously poor working conditions they face, and the measures that can be taken to make their involvement more equitable, given the massive value they add.
In exploring the governance of AI from a macro lens, we applied a critical digital infrastructure framework to the AI value chain to better examine gaps in how AI is governed. This helped surface issues around misplaced priorities in governing AI, especially around environmental impact. Crucially, our work has shown the need for bottom-up initiatives of governance and solidarity in the context of AI. We have also sought to instil collective action in AI governance at a geopolitical level.
AI Regulation in Latin America — Gaston Wright (Civic Compass)
Civic Compass’ research* employed a mixed-methods approach to analyse the perceptions of diverse stakeholders — including policymakers, industry leaders, civil society representatives, and academics — regarding AI regulation in Latin America.
Key Outcomes
We conducted in-depth interviews with 70 stakeholders across Mexico, Argentina, Chile, Colombia, and Brazil, supplemented by targeted surveys, to capture regional perspectives on AI governance’s ethical, legal, and socio-economic challenges. This methodology enabled us to explore how national contexts shaped attitudes toward regulation while revealing shared priorities and divergent concerns among key regional actors.
Through thematic and statistical analysis of the stakeholder data, we identified emerging trends, regulatory gaps, and policy opportunities tailored to Latin America’s unique landscape. By collaborating with local experts and institutions, we grounded our findings in regional realities while drawing comparative insights from global AI governance frameworks.
The study’s outcomes provided evidence-based recommendations for adaptive, inclusive AI policies that balanced innovation with societal needs. By centering perspectives from five major economies, our research contributed to the discourse on equitable and context-aware AI regulation in Latin America.
*This study is currently available in Spanish only.
AI Governance in Sri Lanka — Merl Chandana (LIRNEasia)
As part of the Data, Algorithms & Policy team at LIRNEasia, I’ve been working at the intersection of AI governance, public interest technology, and policy implementation.
Key Outcomes
A key part of this work has been contributing to the Global Index on Responsible AI, a multi-country effort led by the Global Center on AI Governance to evaluate how effectively countries are translating principles of responsible AI into practice through regulation and other action. The Index draws from nearly 140 countries and gives us a rich, comparative understanding of not just what regulations exist in different countries, but how they’re being applied — and where gaps in capacity, participation, and meaningful accountability remain, particularly in the Global South.
In Sri Lanka, we’ve also been involved in shaping the country’s emerging AI ecosystem by supporting the development of the National AI Strategy. Our emphasis has been on ensuring that policy frameworks are grounded in local realities — not just replicating global blueprints, but responding to specific development needs and institutional contexts. In our view, premature, sweeping AI regulation can be as damaging as no regulation at all. Regulation alone doesn’t guarantee responsible AI; it must be part of a broader rights-based framework that considers the societal implications of AI across its entire lifecycle — from design to deployment to long-term impact.
🧩 Then, we asked them to expand on the question posed by our moderator, Natalia Carfi: “What can we do to advocate for the same set of ideas and principles to ensure ethical practices, if regulations are not yet in place?”
Vinay: AI development and deployment primarily happen in the Global North, on data that is more reflective of the Global North than of the Global South / Global Majority. The subsequent deployment of these systems in the Global South then raises issues around their accuracy, their ability to understand local context and nuance, and whether they actually work meaningfully for populations in the global majority. A primary way to overcome this challenge is to train these systems on data from the global majority.
This, however, is easier said than done. For starters, the availability of data, and of good-quality data especially, can be an issue. Furthermore, the data will tend to reflect systemic issues in global majority contexts that have only intensified in the digital age, such as income disparity and gender-based issues, although these are not specific to the global majority, as marginalised groups the world over face them. On top of all this, the lack of strong data protection regulations means that data collection and use for training AI systems is not necessarily a good thing for many communities, especially the most vulnerable, who can be targeted with these systems.
There is, in addition to the above, a strong notion of techno-colonialism with AI. These are systems trained on data produced by communities in the global majority and then deployed in their contexts for the enormous benefit of corporations situated in the Global North. The benefits that communities receive for having their data hoovered up to create these systems are, in many cases, negligible or non-existent. One of the key aspects of techno-colonialism in the context of AI is the role of human labour from the global majority. Data labelling for AI is a massive industry that relies on human labour from the global majority in a very extractive way. Often, this work is underpaid, done in difficult working conditions, in a very atomised manner, and with few social protections.
Without a doubt, AI can provide massive benefits to communities in the global majority. However, the current approach of the AI economy is rife with concerns and issues, and there is a strong need for action that recognises and protects the interests of the global majority and does not treat its people as cheap labour or a testing ground for AI systems.
Gaston: The study on AI regulation in Latin America revealed diverse stakeholder perceptions across five key countries (Argentina, Brazil, Chile, Colombia, and Mexico). Interviews with 70 participants from sectors like civil society, government, tech platforms, and entrepreneurship highlighted a cautious optimism toward AI’s potential to drive efficiency and innovation in areas like healthcare and education. However, significant concerns emerged about ethical risks, including algorithmic bias, labour displacement, and privacy violations. Stakeholders emphasised the need for adaptive regulatory frameworks that balance innovation with human rights protections, cautioning against copying foreign models like the EU’s GDPR without local adaptations. The research also identified tensions between advocates of strict regulation (e.g., civil society) and proponents of flexible, innovation-friendly approaches (e.g., tech entrepreneurs).
A key finding was the region’s institutional unpreparedness to regulate AI effectively, compounded by disparities in digital access and technical capacity. Stakeholders agreed on the urgency of multisectoral collaboration to address challenges like misinformation and algorithmic transparency. While platforms like X (Twitter) were flagged as hotspots for hate speech and disinformation, participants noted AI’s dual role in exacerbating and mitigating these issues. Our study underscores the importance of context-sensitive policies prioritising equity, democratic governance, and inclusive participation, positioning AI as a tool for social progress rather than exclusion in Latin America.
Merl: In many countries in the Global South, we’re witnessing a growing interest in AI strategies and governance, but this often coexists with limited institutional readiness to regulate AI meaningfully. At LIRNEasia, our work has shown that before rushing into large-scale regulatory reforms or major AI investments, there’s real value in starting small — piloting responsible AI practices through real-world use cases. These can serve as learning opportunities to uncover institutional gaps, test governance frameworks, and identify what kind of regulatory interventions are actually needed.
One of the key lessons we’ve learned is that building capacity for responsible AI is not just about legal frameworks, but about cultivating the right ecosystem conditions. That includes strengthening public sector capacity to ask the right questions, equipping developers with tools to embed rights-based considerations in design, and creating feedback loops where civil society can flag when things go wrong. Responsible AI emerges through practice — when different actors across the AI lifecycle are empowered, accountable, and working in sync.
📝 Finally: 3 Takeaways — One from Each Speaker.
1. A Push for Digital Sovereignty and Community
Vinay: A growing response to the nature of AI development and deployment has been to push for a certain vision of digital sovereignty or AI sovereignty. The problem with this thinking is that it largely ignores the vastly interconnected supply chains that underpin the AI value chain. It is very difficult to exert sovereignty to the benefit of any one nation in such a context. A better-suited approach is to be more mindful about how these systems are deployed in local contexts, identifying what can be done to spur innovation in ways that work for those contexts, and building resilience.
One of the things we believe is crucial, in the frame of building resilience and solidarity, is focusing on bottom-up action and empowering communities. To this end, finding frameworks that can provide communities with greater agency and control over their data and their ability to use technologies for their benefit, while also ensuring that technologies are not extractive of them, will be critical. Glaze and Nightshade are great examples of bottom-up technological action that can help communities, artists in this instance, protect their interests. Similarly, the Kaitiakitanga License is a great example of legal action that communities have taken to honour their culture and history, while allowing data to be used to build tools that serve them in a manner aligned with their interests. Intentional and informed top-down governance that enables and works in tandem with meaningful bottom-up action is the only way to build an AI economy that works for all of us.
2. Know How to Engage Different Stakeholders
Gaston: To effectively engage diverse stakeholders in the regulation of artificial intelligence (AI) in Latin America, it is essential to acknowledge and address a widespread gap in understanding, particularly among regulators, about what AI is and, more importantly, what exactly is being regulated. As highlighted in the study, many actors, including policymakers, often lack the technical expertise to grasp the implications of AI systems, leading to confusion and inertia. A successful approach must therefore begin with clear, accessible explanations tailored to each group’s background, using sector-specific examples to illustrate how AI impacts health, education, justice, and labour. Explaining the “why” behind regulation, not as a barrier to innovation but as a safeguard for rights, equity, and democratic accountability, is fundamental to building buy-in across sectors.
At the same time, engagement strategies must reflect the distinct concerns, motivations, and levels of familiarity that each stakeholder group brings. Entrepreneurs and tech platforms prioritise flexibility and innovation; civil society and philanthropic actors focus on ethics, social justice, and inclusion; journalists worry about misinformation; and policymakers are torn between optimism and institutional limitations. A one-size-fits-all message will fail. Instead, the approach should combine sector-specific dialogues with multisectoral forums, supported by a “translation layer” capable of bridging the technical-regulatory divide. Framing regulation not as a constraint but as an enabler of safe, equitable, and context-sensitive AI is key to mobilising collaborative action in a fragmented ecosystem.
3. Grounding Responsible AI in Practice Through Small-Scale Cases and Inclusive Communication
Merl: Regulation alone doesn’t ensure responsible AI — especially in the Global South, where institutional capacity and data ecosystems are still developing. Premature or poorly contextualized regulation risks entrenching harms or missing the mark entirely. Instead, grounding responsible AI in practice — through small-scale, implementation-focused use cases — offers a more sustainable path forward. These real-world experiments help expose policy gaps, build local capacity, and demonstrate what responsible AI looks like in action.
But to turn principles into practice, how we communicate about AI matters just as much as what we regulate. The current discourse is often fragmented by jargon and abstraction, making it inaccessible to many of the people AI is supposed to benefit. Effective, inclusive communication becomes the glue that links regulation, practice, and accountability. Responsible AI is not a single policy fix — it’s an ongoing, ecosystem-wide commitment to learning, iteration, and shared understanding.
The original event that inspired this blog, “Data and AI in development challenges”, was held in Taipei during RightsCon. It was organised by ODC’s Research Director, Renato Berrino Malaccorto, and moderated by our Executive Director, Natalia Carfi. We were joined by speakers Merl Chandana (LIRNEasia), Vinay Narayan (Aapti Institute), and Gaston Wright (Civic Compass).