
Building the Future Now: Why AI in the Public Sector Must Be Shaped, Not Just Deployed

7 min read · May 8, 2025

By UNDP Asia-Pacific’s Regional Innovation & Digital Team (Sriganesh Lokanathan, Aafreen Siddiqui, Alex Oprunenco)

The Paradox of AI in 2025

Across Asia-Pacific, governments face a high-stakes dilemma: they are being urged to harness the transformative potential of artificial intelligence, while still building the foundational capacity to govern it. The pressure to “do something” is real. But without confident public institutions, coherent digital ecosystems, and contextual governance, that “something” risks deepening exclusion, accelerating opacity, or outsourcing public value altogether. These tensions are echoed in UNDP’s newly released 2025 Human Development Report, which explores how AI is reshaping human development and underscores the need for institutions to deliberately govern its trajectory.

At the 2025 Global AI Action Summit in France, a clear message echoed across borders: artificial intelligence must serve the public interest. Global leaders from governments, multilateral bodies, and civil society called for AI systems that are trustworthy, inclusive, and democratically governed. But beneath that consensus is a quieter truth: there is no single blueprint for how to get there. Especially in developing and emerging economies, the pathways to responsible AI are fragmented, and the risks of exclusion, dependence, or harm are real.

This is not a hypothetical concern. In many countries, AI policy is being written faster than public institutions can adapt. Tools are being piloted without plans for stewardship. Procurement is moving faster than public frameworks, and institutional readiness cannot keep up. And institutions are being asked to regulate what they barely have the bandwidth to understand.

The Need for a Different Approach

AI’s potential is vast, but it also risks amplifying inequality and centralizing control. Around the world, algorithmic systems have shown patterns of bias, exclusion, and opacity. Left unchecked, they reinforce existing power asymmetries.

Without strong public institutions, AI will be shaped by commercial and geopolitical interests, not public values. The COVID-19 pandemic exposed how many governments have hollowed out their digital capacity — outsourcing core functions, lacking cross-agency coordination, and struggling to challenge vendors.

Global frameworks like the EU AI Act and OECD principles provide guidance, but assume capabilities that don’t exist everywhere. In Asia-Pacific, where digital maturity is uneven, static templates will fall short. We need adaptive, practice-informed approaches that evolve with local realities.

And crucially, governance and implementation cannot be sequenced. Calls for strong AI regulation are well-intentioned. But insisting on perfect governance before experimentation risks stalling public sector engagement altogether. On the Regional Innovation and Digital (RID) team at UNDP’s Asia-Pacific office, we believe governance and innovation must co-evolve: institutions must experiment not despite limited oversight, but to help shape it. Embedding safeguards into experimentation allows systems to be designed, tested, and refined in context — so that governance emerges from learning, not only from abstract principles.

Public purpose won’t emerge by default. It must be designed, defended, and governed intentionally. This is the foundation of RID’s approach: not simply deploying AI, but building the institutional capability to shape it.

As Mike Bracken puts it, the real issue is not whether AI is an existential threat; it is whether we treat it as a public asset. That means deliberately stewarded, transparently governed, and embedded where it matters most: in the day-to-day of public delivery.

A Glimpse of What Could Be

In Batticaloa, a diverse and historically underserved district on Sri Lanka’s eastern coast, a frontline caseworker opens her dashboard and sees a quiet flag: a household that may be falling through the cracks of the social safety net. The AI hasn’t made a decision. It has surfaced a discrepancy in eligibility logic based on recent changes in household composition. The caseworker investigates, confirms the issue, and intervenes in time to ensure the family receives the support they’re entitled to.

This story begins not with the algorithm, but two decades earlier, after COVID-19 and the 2022 economic crisis exposed deep cracks in Sri Lanka’s social protection: fragmented data, rigid rules, and citizens lost in bureaucracy during moments of need.

Sri Lanka’s initial reforms focused on reducing fragmentation and improving access. As regional collaboration evolved, the country later drew on emerging lessons from Malaysia’s 2025 efforts in transparent, frontline-informed AI to guide pilots focused on governance and responsible tech integration.

Algorithms were introduced only after co-design with caseworkers and auditors. Each pilot was reviewed through structured feedback loops involving frontline staff, auditors, and community input — ensuring systems evolved not just for speed, but with fairness and accountability built in. Data-sharing protocols were reshaped to help agencies identify and reach excluded households, prioritizing inclusion over administrative convenience. Most critically, AI supported — not replaced — public decision-making.

By 2040, the system is faster, fairer, and more effective. Public servants remain central, aided — not undermined — by AI tools that surface insights without overriding judgment. Oversight is embedded, not bolted on. Trust is stronger because transparency and care were design principles from the start, not afterthoughts.

RID’s Response: Responsible AI as Public Capability

Having worked closely with policymakers at RID and understanding their challenges, we don’t see responsible AI as a checklist. We see it as a public capability, built over time, in context, and through application. It can’t be done by frameworks alone or outsourced entirely. It must be practiced, questioned, owned, and continually adapted by the institutions that use it.

To guide our work, we draw on four key principles. These serve as a compass, not a prescription. We recognize that each country starts from a different place, shaped by distinct political, institutional, and resource realities. Our role is to meet governments where they are — whether through broad strategies or narrowly scoped pilots — and help nudge systems toward more inclusive, transparent, and adaptive AI.

1. Start with the problem, not the tech

We prioritize real-world problems that matter to institutions and citizens — not pre-baked AI solutions. This often means mapping institutional pain points, co-facilitating dialogues, and slowing down to ask: what are we solving, and why?

This principle applies upstream. In Nepal, we are working with the UNDP Country Office and government partners to co-develop a National AI Strategy rooted in national priorities and ground realities, drawing on our experience supporting strategy development in Sri Lanka.

It also shapes our implementation work. In Sri Lanka, in collaboration with the UNDP Country Office, we’re putting this approach into practice by co-developing two AI use cases with national stakeholders. These are designed as testbeds for institutional and regulatory learning — grounded in problem-first framing and adaptive experimentation.

2. Design with the users, not just for them

User-centered design is essential for inclusive, effective systems. Testing ideas with frontline workers and affected communities helps build trust, relevance, and legitimacy.

In Indonesia, we helped the UNDP Country Office catalyze funding for STRIVE, an AI-enabled platform co-developed with the Ministry of Villages. It supports participatory village planning by surfacing community insights and behaviors, embedding inclusive design in real-world service delivery. In a time of eroding public trust, approaches like STRIVE can help rebuild democratic processes and institutional resilience.

3. Test, learn, and adapt — with speed and transparency

No AI system works perfectly out of the box. Iteration must be institutional, not just technical. We engage in pilot testing, shadow deployments, living labs, and regulatory sandboxes to help governments learn and adjust in real time. Higher-risk applications warrant stronger safeguards — because not all deployments are equal. Crucially, public sector AI must evolve in the open. Sharing what works and what doesn’t not only accelerates learning but also builds trust and accountability. We embrace fast, low-risk experimentation that enables quick feedback, adaptation, and scalability.

We are adapting proven practices from the Data Science for Social Good (DSSG) program at Carnegie Mellon University, which has helped public agencies embed feedback loops into AI initiatives across the US and beyond. We are now applying these methods in Asia-Pacific — starting with our work in Sri Lanka. In Malaysia, in collaboration with our Country Office, we are exploring how AI can improve the inclusivity and responsiveness of social protection systems, particularly for underserved populations.

4. Build institutional muscle, not just models

The hardest part of AI is not the technology — it’s the stewardship. Strong internal capabilities, governance frameworks, and peer learning matter as much as (or perhaps even more than) technical tools. Across our work, we are helping governments build not only solutions, but also the skills and systems to sustain them.

To this end, we are also growing our Digital Stewardship Community, a peer-learning network of public officials across the region, focused on building confidence and competence to ask the right questions, govern responsibly, and learn collectively. Our aim is to nurture a cohort of digital stewards who don’t just learn or delegate, but actively shape scalable, sustainable digital policies and public sector innovations.

A Shared Task, Not a Solo Act

Supporting responsible AI in Asia-Pacific requires collective effort. RID serves as a platform to align governments, development partners, technical experts, and financiers around a shared vision. We help shape pipelines of AI initiatives rooted in public value, supported when needed by international financial institutions.

Foundational layers — such as open data, compute infrastructure, connectivity, and digital public goods — require sustained, long-term investment. Many countries are still defining their pathways, and international financial institutions like the ADB, AIIB, IsDB, and World Bank are critical partners in financing and enabling this digital foundation. UNDP’s role is to help shape these efforts from the ground up — ensuring that as AI systems are introduced, they advance inclusion, responsiveness, and trust.

From Possibility to Practice

Responsible AI in the public sector isn’t a one-off deliverable for us — it’s a process of co-evolution. The principles we use are directional, not doctrinal. They help us ask better questions, spot what’s missing, and avoid repeating mistakes made elsewhere.

At RID, we’re excited and evolving as we walk this journey alongside our Country Offices in Asia Pacific and their respective government partners. And as we continue to learn, design, and deliver, we invite others to walk it with us.

Written by UNDP Strategic Innovation

We are pioneering new ways of doing development that build countries’ capacity to deliver change at scale.
