Blind Spots: Three Ways to Think about AI and Corruption Risks

Artificial Intelligence (AI) will change the nature of corruption: how it occurs, where it occurs, and how fast it happens.

Open Government Partnership
OGP Horizons
May 21, 2024


By Joseph Foti

Thus far, much speculation on these changes has emphasized the risk of “malign use” of technology. While this emphasis is correct, it misses two important problems: the possibility of “market failure” and the “manipulation” of AI regulators.

AI and corruption will be a recurring theme on OGP Horizons, so consider this entry an overview of emerging risks and a selection of open government responses. In coming months, this blog will look at the positives that such technologies may bring as well.

Malign use

The risk

Perhaps the most frequently cited concern is the use of new technologies for corrupt purposes. There are applications of AI that can be used to break the law or capture public goods for private gain. Here is an incomplete list of possibilities that AI may accelerate, in the absence of countermeasures:

  • Money laundering: Money laundering relies on creating complex paths to move ill-gotten gains into legitimate spaces. AI can accelerate both the movement of money and the complexity of the paths it takes. Specific money laundering uses of AI might include falsifying financial records, obscuring the true ownership of stolen money, or hiding bribe payments.
  • Disinformation: Disinformation is already part of the “Spin Dictator’s” toolkit, and AI will almost certainly expand this portfolio. Domestic and international opponents of political actors may face personal attacks, and the public may increasingly be subjected to false documents and fabricated news stories.
  • Disenfranchisement: In the United States and elsewhere, the use of algorithms for drawing voting district maps has been under decades of scrutiny, with the accusation that such efforts undermine democratic representation. Without adequate controls, this situation could go from bad to worse.

The open government response

In each of these cases, there is a need to establish clear jurisdiction: which agencies are responsible for each of these issues. At a minimum, where the government is the primary user of AI, there needs to be transparency around the aims, decision-making authority, and evaluation of such applications. In some cases, co-regulatory approaches that allow government actors to work with non-state actors may be essential. In many cases, there will also need to be limits on the extent of regulation, especially around disinformation, to ensure that free political speech is not inhibited. Finally, government and civil society can play a role in encouraging standards for authentication and supporting fact-checking.

Market failure

The risk

“Market failures” include situations of monopoly, the privatization of public goods, and situations of asymmetric information. This is a long way of saying that sometimes, as citizens, we cannot simply choose another product or service; the market “fails” in these cases. We cannot just go buy another electricity utility, a new train system, or a new government.

New technologies are subject to market failures in two ways.

  1. Natural monopolies. Governments must invest public money in “natural monopolies,” often utilities: goods and services we could not have in the quantity we need without additional subsidy. Examples range from sewage systems to low Earth orbit satellite arrays. These are subject to waste, fraud, and abuse if they are not closely overseen.
  2. Regular monopoly. With new technologies in particular, leading companies tend to attain monopoly or oligopoly status quickly in order to sustain profits. Indeed, this is notoriously the stated objective of numerous technologists. Economists will tell you that this reduces public welfare: monopoly profits go to the seller, while benefits to the public diminish.

When such monopolies provide public goods or utilities, they can become unaccountable private tyrannies. In these cases, they may move what once was a public, democratically controlled space into a privately controlled space. In other cases, governments may contract these services, increasing risk to public goods and encouraging monopolies. A few concrete examples:

Online public forums: In many countries, there are strong rules on the use of public spaces for free speech and concrete rules governing how officials conduct themselves in those public spaces (for the sake of record-keeping and beyond). When official speech moves onto privately owned platforms, those rules can break down. The consequences range from the banal (officials banning replies on X) to the genocidal, when officials use public forums to commit crimes and escape accountability.

Contradicting public values: Recent examples of Elon Musk’s behavior highlight where technology contracts run counter to public values.

  • Intelligence satellites: The US government cooperated with Musk’s Starlink to provide communications and imaging services, which were then shared with Ukrainian allies following Russia’s full-scale invasion. Concerns persist that communications to Ukrainian forces may have been cut off, especially in occupied territories. (The concern is not only North American: more than 60 countries depend on Starlink for communications and imaging.)
  • Superchargers: Further concerns about the future of climate policy emerged after Musk vacillated on expanding electric vehicle supercharging stations across the United States, after the US government had committed US$5 billion to support the deployment of charging infrastructure.

These problems are not new to AI; indeed, they exist in any rapidly evolving market as competitors race to grab profits or enclose common pool resources. However, in the absence of public oversight, these dynamics put democratic values at risk. They worsen where state capacity to regulate is diminished (raising concerns often framed as “digital sovereignty”) and where contracts are not public.

The open government response

Where monopolies emerge and public goods are privatized, information is unevenly spread — and transparency, civic participation, and public accountability have a role to play in correcting this imbalance, such as by:

  • Maintaining established norms on transparency and public oversight when officials use privately owned technology for public purposes;
  • Ensuring public tenders are aligned with strategic goals and public values and, especially in cases of monopoly, providing safeguards where providers contradict those values;
  • Ensuring open, democratic oversight of contracts, especially as relates to national security, climate, or other democratically stated values; and
  • Strengthening anti-monopoly controls, including the right of the public to submit complaints and evidence to regulatory authorities.

Manipulation of regulators

The final concern with AI is almost entirely analog: AI proponents may actively seek to corrupt regulators. Our own research with AltAdvisory, a public interest privacy law firm based in South Africa, found regular intimidation of privacy regulators, whether through whispers, speeches, or budgetary processes. AccessNow’s recent report confirmed that data protection authorities across Africa are under pressure, whether through budgets or outright intimidation. Beyond data protection, we have ample evidence that cryptocurrency promoters may have bribed officials (and that officials may have demanded bribes) to gain a larger market share and avoid certain types of regulation.

The open government response

AI regulators — whether competition authorities, data protection authorities, financial intelligence units, or consumer safety bodies — can become more resilient when they are independent and they cooperate with civil society and other non-government actors. Ensuring that they are mandated and able to act with independence, integrity, and transparency is essential to their success. They should be able to take evidence from the public, clearly justify their actions in the law, and create opportunities for feedback on how well they are meeting their mandate.


AI presents tremendous opportunities. Those of us concerned with democracy and open government need to be aware of all the risks to our systems. But our experience of the last several decades should alert us not only to the promises and perils, but also to the solutions. We know that we will need strong public institutions, working together with civil society and journalists, if we are to ensure that AI strengthens rather than diminishes our democracies.


