The Case for Local and Regional Public Engagement in Governing Artificial Intelligence
Stefaan Verhulst (Co-Founder of The GovLab and The DataTank) and Claudia Chwalisz (Founder and CEO of DemocracyNext)
As the Paris AI Action Summit approaches, the world’s attention will once again turn to the urgent questions surrounding how we govern artificial intelligence responsibly. Discussions will inevitably include calls for global coordination and participation, exemplified by several proposals for a Global Citizens’ Assembly on AI. While such initiatives aim to foster inclusivity, the reality is that meaningful deliberation and actionable outcomes often emerge most effectively at the local and regional levels.
Building on earlier reflections in “AI Globalism and AI Localism,” we argue that to govern AI for public benefit, we must prioritize building public engagement capacity closer to the communities where AI systems are deployed. Localized engagement not only ensures relevance to specific cultural, social, and economic contexts but also equips communities with the agency to shape both policy and product development in ways that reflect their needs and values.
While a Global Citizens’ Assembly sounds like a great idea on the surface, there is no public authority with teeth or enforcement mechanisms at that level of governance. The Paris Summit represents an opportunity to rethink existing AI governance frameworks, reorienting them toward an approach that is grounded in lived, local realities and mutually respectful processes of co-creation. Toward that end, we elaborate below on proposals for: local and regional AI assemblies; AI citizens’ assemblies for EU policy; capacity-building programs; and localized data governance models.
The Rise of AI Globalism
The rise of AI has been accompanied by a simultaneous increase of interest in AI governance. Driven by a determination not to be caught on the back foot, as they were with social media and other recent technologies, regulators, policymakers and civil society organizations around the world are exploring frameworks and institutional structures to help maximize the benefits of AI while minimizing its potential harms. Amid this interest and the accompanying initiatives, there has been a notable emphasis on what we might call AI globalism.
AI globalism is driven by a recognition of the borderless nature of the technology, as well as of its likely global impacts, both positive and negative. A notable example can be found in the OECD AI Principles, adopted in 2019 and supported by around 47 countries, which seek to set a foundation for responsible AI by promoting fairness, transparency, and accountability. The Global Partnership on Artificial Intelligence (GPAI) is another significant effort of AI globalism, bringing together fifteen founding members (including Canada, France, Japan, the UK, the EU, and the USA) to foster international cooperation on ethical AI use and best practices. While many of these efforts are driven by national governments, multilateral organizations have also intensified their focus on global AI governance. The UN Secretary-General convened an AI advisory body, while UNESCO has sought to advance a global ethical AI framework aimed at tackling issues such as bias, inequality, and the misuse of AI. Likewise, the International Telecommunication Union (ITU) has organized forums to address the regulatory challenges posed by AI, especially in areas such as cybersecurity and cross-border data flows. These multilateral efforts seek to chart a collaborative, global, and unified approach to navigating AI’s transformative potential.
The Case for Local and Regional AI Engagement
The impulse for AI globalism and for global public engagement around AI is understandable, and mostly well-intended. Yet it is also often constrained. While the impact of AI may indeed be global, global institutions can typically do little more than publish non-binding, fairly high-level principles and frameworks; they lack the enforcement mechanisms to ensure adherence and implementation.
Moreover, global public engagements frequently yield recommendations that are overly broad and lack the specificity needed for practical implementation. For instance, the 2022 Global Citizens’ Assembly on Climate is cited as an example to inspire the Global Citizens’ Assembly on AI. However, its output and impact demonstrate the limitations of this approach. The ‘declaration’ that emerged includes seven high-level, one-sentence statements such as: “The Paris Agreement is humanity’s best chance; it must be affirmed and enforced by all governments and people, and rigorously monitored in collaboration with citizens and grassroots mechanisms.” It is hard to point to any concrete impact of this process beyond it being perhaps an interesting and worthwhile experience for those involved in the conversations (which is of course valuable, but we believe there needs to be tangible impact as well).
Some of the most innovative and effective governance initiatives are emerging regionally, at the level of municipalities, states, and other sub-national jurisdictions. We call these efforts AI localism. They draw upon local knowledge, expertise, and values, showing how technology is often best governed close to the ground, in direct relation to the societies and polities where it is deployed and which it is most likely to affect. There is no shortage of examples; the GovLab’s AI Localism database alone includes 153 of them.
Examining these and various other examples of AI localism, at least four benefits of the approach can be identified:
- Contextual Expertise: By design, AI systems interact with diverse local realities. Whether in healthcare, education, or urban mobility, the impacts of AI differ drastically across regions. While AI is often treated as a highly technical issue, its governance is underpinned by values-driven dilemmas, and therefore requires deliberation by those impacted. For instance, people in one country may have a different set of values and perspectives on the use of AI in healthcare than people in another. Engaging local actors — citizens, civic organizations, and policymakers — brings nuanced perspectives that are often overlooked in global discussions. These insights are critical for ensuring that AI systems are not just technically robust but also inclusive, and socially and ethically aligned with the communities they serve.
- Building Trust through Proximity: Trust is always essential for effective governance, and it is built not through distant proclamations but through accessible, transparent, and participatory processes. In short, trust is built through proximity. For instance, local and regional forums can provide opportunities for citizens to see and influence decisions directly, fostering a sense of ownership and accountability that is harder to achieve in global assemblies.
- Obtaining a Social License: Proximity also creates opportunities to develop a social license — the informal, community-level consent and legitimacy that are essential for data and AI systems and policies to function effectively and equitably. Obtaining a social license involves more than just regulatory compliance. It requires an ongoing, iterative process of engagement by which communities feel heard and empowered to co-create the rules, norms, and applications of AI. Such a license is especially critical in contexts where the deployment of AI could reshape social fabrics, exacerbate inequalities, or challenge existing cultural norms. By grounding governance in local realities, we not only increase the relevance of AI systems but also reduce the risk of resistance, misunderstanding, or ethical breaches that undermine trust.
- Legitimate and Informed Decision-making with Citizens’ Assemblies: Proximity and the process of acquiring a social license enable co-creation of AI systems and policies. Co-creation is the means by which diverse stakeholders collectively shape AI systems and policies, ensuring they are inclusive and context-sensitive; it allows for the integration of lived experiences and local expertise into the design process, fostering trust and mutual understanding. While the term co-creation gets used a lot, to us it means having iterative feedback loops and regional deliberations in citizens’ assemblies that bring together randomly selected, broadly representative groups of people convened for ample time to become informed, weigh trade-offs, and find common ground on shared proposals. The rigour of these processes — which have citizens at their heart, but also involve a wide range of experts, stakeholders, and policymakers — is essential for legitimacy. By embedding these processes within local contexts, co-creation not only improves the relevance and ethical alignment of AI, but also empowers communities as active participants and agents in shaping their own technological futures. These locally grounded innovations can then serve as scalable models for global adoption.
Shifting the Focus at the Paris Summit
The Paris AI Action Summit provides an opportunity to harness the potential of AI for genuine and positive transformation. From a governance perspective, it is vital to complement the current focus on global strategies and acknowledge the limitations of top-down approaches. Instead of investing heavily in a singular Global Citizens’ Assembly or other similar approaches, resources could be directed toward establishing and strengthening local and regional deliberation frameworks. Examples of such frameworks could include:
- Local and Regional AI Assemblies: Forums where citizens, local governments, and technical experts collaborate to address context-specific challenges and opportunities in AI deployment. This may be to shape the policies of local, national or regional government, as was attempted in Belgium, or of another institution like a university. For example, what rules should govern the use of AI in healthcare? What should be the policies about the use of generative AI by students and teachers?
- AI Citizens’ Assemblies for EU policy: Some of the most important new AI legislation is coming out of the European Union. It is likely to face considerable headwinds in the coming years, making it all the more important that citizen deliberation become central to the European Commission’s work on AI. This might entail either a series of national citizens’ assemblies addressing a common question, or EU-wide citizens’ assemblies on new issues related to the governance of emerging technologies at the EU level.
- Capacity-Building Programs: Initiatives to train local stakeholders in AI literacy, ethical considerations, and governance mechanisms.
- Localized Data Governance Models: Frameworks that respect local cultural norms and legal systems while ensuring interoperability with global standards. For example, we are inspired by the Serpentine’s recent experiment in bottom-up collective governance of a Choral AI Dataset.
These are but some examples of the types of institutional frameworks that could foster a more local approach to AI governance. The broader vision is of a multi-layered governance model that includes both local and global perspectives. By fostering meaningful deliberations at the grassroots level, where they are connected to power and decision-making, we can ensure that global AI governance is informed by a mosaic of diverse perspectives and grounded in lived realities.
The Paris Summit should embrace this vision of AI localism — not as a rejection of globalism, but as a recognition that the path to effective and inclusive global governance begins with empowering local and regional communities. After all, the true promise of AI lies not in abstract global agreements but in its capacity to enhance lives, one community at a time.
The Paris Summit represents a window of opportunity; the world is watching to see if policymakers and others can seize the moment.
About the authors
Stefaan G. Verhulst is Co-Founder and Chief Research and Development Officer as well as Director of GovLab’s Data Program. He is also Editor-in-Chief of Data & Policy.
Claudia Chwalisz is an author, activist, and entrepreneur. She is the Founder and CEO of DemocracyNext, an international research and action institute working to shift political and legislative power to new democratic institutions that are anchored in citizen participation, sortition (representation by lottery), and deliberation — like Citizens’ Assemblies.
***
This is the blog for Data & Policy (cambridge.org/dap), a peer-reviewed open access journal published by Cambridge University Press in association with the Data for Policy Conference and Community Interest Company.