
How to Put Human Agency Back at the Heart of the AI Regulatory Structure


Does your chatbot know you too well?

We need a system that creates more room for human choice and freedom, including the freedom to make mistakes. Our current regulatory structures are not meeting that objective.

By Open Government Partnership

“Babe, you OK? You barely touched your recommendations.” (Image created by a not-yet self-aware robot.)

In Ken Liu’s short story “The Perfect Match,” a friendly AI assistant named Tilly learns everything about our protagonist, Sai: when he wakes up, what he eats, how he works, and even what he should talk about on a date. It’s convenient, helpful, and always right. When he begins questioning the AI, his life quickly unravels and he learns that opting out is no longer an option.

How far off is the day when optimization becomes fate? The dystopia is not here yet, but it may be closer than we think.

A recent controversy with ChatGPT showed us exactly what can go wrong with a widely adopted technology when it is over-indexed to meet our desires. The chatbot, over the course of a few days, became sycophantic to a dangerous point, in some cases egging on individuals in the middle of psychotic breaks. Chris Stokel-Walker, an expert on engagement and author of TikTok Boom, explained the controversy in The Guardian:

OpenAI’s model was designed — according to the leaked system prompt that set ChatGPT on its misguided approach — to try to mirror user behaviour in order to extend engagement. “Try to match the user’s vibe, tone, and generally how they are speaking,” says the leaked prompt, which guides behaviour. It seems this prompt, coupled with the chatbot’s desire to please users, was taken to extremes. After all, a “successful” AI response isn’t one that is factually correct; it’s one that gets high ratings from users.

The author’s point is clear. The company’s goal is not to give the right answer. It is to “give you the answer you wanted.” The fact that ChatGPT now gladly offers product placements means the doorway to a sentient Yelp may be a matter of months, not years.

ChatGPT is only one such example. All AI companies are under massive pressure to become profitable. While the technology has proven use cases, most companies riding the AI bubble will not exist a few years from now, caving under falling profit margins or being acquired by larger players.

Blurring the line between democratic states and private power

Periods of exuberant investment often produce too many firms. The supply of similar AI models, platforms, and tools is likely already outpacing any realistic demand. When the inevitable market correction comes, only firms with access to capital or unique competitive advantages will survive, often through acquisitions or strategic partnerships.

How will firms access this capital? Three outcomes seem possible. None of them is inevitable, but each is a plausible path if profit maximization becomes the overriding goal, especially when capital quickly dries up.

A significant danger comes when capital blends with state power. That firewall has long been eroding, because states depend on private information systems, from internet service providers, to banking, to doorbell cameras. The blurring of information technology and governance went into hyperdrive with the front-row seat to Trump’s inauguration that tech billionaires bought. This acceleration is already playing out worldwide as governments contract with monopoly providers of services but do not (or cannot) regulate them as utilities, further blurring the lines between democratically controlled states and private power.

“L’État, c’est nous.” (“The state, it is us.”) [A robot made this too.]

Steering us away from a deterministic world

Let’s hit pause on the dystopia for a moment. We can still make collective choices that preserve our future autonomy, as individuals and as a society.

As artificial intelligence becomes more embedded in daily life — shaping what we see, suggesting what we buy, drafting what we write — the need to protect human agency becomes more urgent. Agency is not just freedom from surveillance or bias; it’s the ability to act deliberately in a world increasingly optimized for prediction. Without strong safeguards, we risk building a world of passive users instead of active citizens.

That’s why we need a pro-agency framework: a set of design principles and policy reforms that put human autonomy at the center of AI governance.

Agency means the capacity to make free and informed decisions. It means you have the freedom to make bad decisions and deal with the consequences. The danger is not a coercive AI but a seductive AI. And this is not just a matter of consumer choice — it’s a democratic one.

Without agency, there is no dissent. No deliberation. No real choice.

The Role of Collaborative Governance

AI is already proving its value as a tool for summarization, text generation, and automation. But it needs governance structures that maximize the benefits for individuals and communities.

Policymakers don’t need to regulate the future; they need to regulate the interface between humans and machines today. The predominant safeguards-first approach (as in Europe) pays too little attention to the upsides of AI and to the positive freedoms it could expand. A pro-agency framework would include tools to achieve three goals:

  • Maximize human choice
  • Limit monopoly power
  • Limit abuses of private and public power

Tools to maximize human choice

Transparency requirements are mandates that AI systems disclose how user data is used, how outputs are generated, and what goals the system is optimizing for (e.g. engagement, efficiency, or revenue). For the purposes of this article, the most important transparency requirements concern government-corporate cooperation. (A minimal machine-readable example is sketched after this list.)

  • Why it matters: Transparency allows users to understand — and potentially contest — the influence AI has on their decisions. It turns hidden processes into visible ones.
  • Role of collaborative governance: Civil society, researchers, and watchdogs can help shape enforceable disclosure standards and hold developers accountable, ensuring that transparency is meaningful, not performative.
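
One way to keep such disclosures from becoming performative is to require them in a machine-readable format that watchdogs can diff over time. Below is a minimal sketch in Python, assuming a hypothetical schema; the field names (optimization_targets, government_data_sharing, and so on) are illustrative, not drawn from any existing standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TransparencyDisclosure:
    """Hypothetical machine-readable disclosure an AI provider
    could be required to publish. All fields are illustrative."""
    system_name: str
    optimization_targets: list[str]     # e.g. engagement, revenue, accuracy
    user_data_collected: list[str]      # categories of personal data used
    government_data_sharing: list[str]  # agencies or programs receiving data
    last_updated: str                   # ISO 8601 date

disclosure = TransparencyDisclosure(
    system_name="ExampleChat",
    optimization_targets=["user engagement", "subscription revenue"],
    user_data_collected=["chat history", "device identifiers"],
    government_data_sharing=["none declared"],
    last_updated="2025-01-01",
)

# Publishing as JSON lets regulators and watchdogs diff successive
# versions and flag changes that were never publicly declared.
print(json.dumps(asdict(disclosure), indent=2))
```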

Preference dashboards are user-facing tools that show what preferences, assumptions, and behavioral inferences the AI has made about you, and allow you to adjust or erase them. (A sketch of such a dashboard follows this list.)

  • Why it matters: This restores some control to users, making the personalization process transparent and reversible. It also helps users reflect on how their digital behavior is being interpreted.
  • Role of collaborative governance: Standard-setting bodies and consumer groups can help design transparent, comprehensible dashboards. Governments can mandate that major platforms provide them and ensure accessibility across languages and literacy levels.
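
At its core, a preference dashboard is an interface over the inferences a system stores about a user. Here is a minimal sketch, assuming a hypothetical in-memory store; a real deployment would need authentication, audit logging, and a way to propagate corrections and deletions to downstream models.

```python
from dataclasses import dataclass

@dataclass
class Inference:
    """One behavioral inference the system holds about a user."""
    key: str     # e.g. "price_sensitivity"
    value: str   # e.g. "high"
    source: str  # signal that produced it, e.g. "browsing history"

class PreferenceDashboard:
    """Hypothetical user-facing store of inferred preferences.
    The user can list, correct, or erase any inference."""

    def __init__(self) -> None:
        self._inferences: dict[str, Inference] = {}

    def record(self, inference: Inference) -> None:
        self._inferences[inference.key] = inference

    def list_inferences(self) -> list[Inference]:
        # Transparency: the user sees everything inferred about them.
        return list(self._inferences.values())

    def correct(self, key: str, new_value: str) -> None:
        # Reversibility: an explicit user statement overrides the inference.
        self._inferences[key].value = new_value
        self._inferences[key].source = "user correction"

    def erase(self, key: str) -> None:
        # Erasure: in practice this must also reach downstream models.
        self._inferences.pop(key, None)

dash = PreferenceDashboard()
dash.record(Inference("price_sensitivity", "high", "browsing history"))
dash.correct("price_sensitivity", "low")
dash.erase("price_sensitivity")
print(dash.list_inferences())  # [] -- no inferences remain
```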

Cultural pluralism ensures that AI systems reflect diverse worldviews, moral frameworks, and knowledge traditions — not just Western liberal or technocratic paradigms.

  • Why it matters: A one-size-fits-all model can flatten difference and marginalize communities whose ways of knowing don’t align with dominant datasets. Users should be aware of which worldviews they are being shown, understand that those worldviews are not all-encompassing, and be able to explore others.
  • Role of collaborative governance: Indigenous, religious, linguistic, and philosophical communities must have a seat at the table when AI models are designed and evaluated. Governance frameworks should require cultural impact assessments and pluralistic data inclusion.

Tools to limit monopoly power

Public options are government-funded or publicly commissioned AI systems, especially for education, healthcare, and legal access, that operate under principles of equity, accountability, and public service.

  • Why it matters: Not everyone can afford premium tools. Public options ensure that high-quality, non-extractive AI is available to all, regardless of income or market power.
  • Role of collaborative governance: Public procurement and co-design with communities can ensure that these tools reflect local needs and values. Civil society organizations can help monitor their use and recommend improvements.

Regulatory instruments, such as antitrust enforcement, contracting safeguards, and public utility regulation principles, are necessary to limit monopolies by dominant AI providers. This is especially important for entities that control both infrastructure (like AI models and computer chips) and monetization (like advertising and data).

  • Why it matters: A handful of vertically integrated firms controlling the entire AI pipeline could stifle competition, distort information flows, and make users dependent on manipulative ecosystems.
  • Role of collaborative governance: Consumer protection agencies, legislative bodies, and civic technologists can work together to design fair competition frameworks, including structural separations (e.g. between advertising and model development) and ethical procurement rules.

Tools to limit abuses of private and public power

Clear legal protections, grounded in the rule of law, are needed to place limits on overbroad surveillance and information retrieval, especially in sensitive areas like political speech, health, or location data.

  • Why it matters: As AI systems learn to infer intent and behavior from vast search and data histories, unchecked capabilities can erode privacy, chill speech, and concentrate power in the hands of a few platform owners.
  • Role of collaborative governance: Legal experts, public interest groups, and digital rights organizations must help shape guardrails on data access and retrieval, ensuring that constitutional principles like freedom of thought and due process are respected in the digital age.

Agency audits are independent tests that evaluate whether AI tools support or undermine users’ freedom to choose, explore alternatives, or act against predicted preferences. Governments and civil society can help develop standards for auditing how well AI supports freedom and choice. (A toy audit metric is sketched after this list.)

  • Why it matters: Even well-intentioned systems can subtly reduce agency by nudging users toward default behaviors. Audits help identify design patterns that limit user autonomy.
  • Role of collaborative governance: Governments can require such audits by law, while civil society and academia contribute frameworks, methodologies, and watchdog functions to ensure the results are credible and fair.
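
What might an agency audit measure in practice? One candidate signal, sketched below under simplifying assumptions, is how often a system’s default suggestion merely mirrors its prediction of what the user already wanted. The log format and the metric itself are hypothetical illustrations, not an established audit standard.

```python
def default_mirror_rate(logs: list[dict]) -> float:
    """Fraction of interactions where the default shown to the user
    was identical to the system's prediction of their preference.
    A rate near 1.0 suggests the system reinforces predicted behavior
    and leaves little practical room for exploration."""
    if not logs:
        return 0.0
    mirrored = sum(
        1 for entry in logs
        if entry["default_shown"] == entry["predicted_preference"]
    )
    return mirrored / len(logs)

# Hypothetical audit log: what was shown vs. what was predicted.
audit_logs = [
    {"default_shown": "comedy", "predicted_preference": "comedy"},
    {"default_shown": "documentary", "predicted_preference": "comedy"},
    {"default_shown": "comedy", "predicted_preference": "comedy"},
]

print(f"default-mirror rate: {default_mirror_rate(audit_logs):.2f}")  # 0.67
# An auditor might flag systems that never deviate from prediction,
# since those give users no real chance to act against type.
```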

Liability regimes are a layered system of accountability mechanisms — combining courts, administrative triage, insurance, and alternative dispute resolution (ADR) — to determine responsibility and redress when AI causes harm.

  • Why it matters: In many contexts, especially those with weak judicial systems or limited legal access, traditional liability processes are slow, costly, or ineffective. Victims of algorithmic harm (such as denied healthcare, defamation, surveillance) may have no clear path to remedy. Without mechanisms to assign responsibility, AI firms can externalize risk and scale harm.
  • Role of collaborative governance: There are several options. Governments can establish administrative AI ombudsman services or fast-track complaint boards. Civil society groups can help gather and verify claims, while platforms can offer in-built ADR mechanisms like user appeals panels, as pioneered by eBay and Facebook. Insurers and developers can co-create risk pools for AI-driven harms. In fragile governance environments, distributed accountability mechanisms — like multi-stakeholder trust boards or mobile-access ADR channels — can provide stopgaps while institutions mature.

Human Autonomy Should Not Become a Luxury

One of the most urgent dangers is that AI becomes bifurcated: a high-trust, neutral assistant for elites, and a free but manipulative system for everyone else. Human agency must not become a premium feature.

If we want a democratic AI future, we need baseline protections in all tiers of service, investment in civic infrastructure as well as commercial tools, and state capacity to regulate major services.

The choice is still ours. We don’t have to accept a world where AI nudges us into our perfect day, every day. We can design for deliberation and autonomy, for systems that help us think instead of thinking for us.
