Analysis | What the tech industry gets wrong about the risks of AI

Zed Tarar
The Diplomatic Pouch
8 min read · Oct 25, 2023


And what policy experts should focus on instead

This post is part two of three pieces on the practical relationship between advances in artificial intelligence and diplomacy.

It is part of ISD’s ongoing blog series, “A better diplomacy,” which highlights innovators and their big ideas for how to make diplomacy more effective, resilient, and adaptive in the twenty-first century.

An illustration of an army of robots marching down a city street.
All images generated using MidJourney

Artificial intelligence will destroy humanity. Or enslave it. Or make humans redundant and lead to the collapse of society as all but a few of us face unemployment. These are only a few of the far-fetched doomsday scenarios that tech industry insiders use to justify calls for government intervention in the sector. Big tech incumbents are using these scenarios to push for rules that would keep out new market entrants, regulatory capture in all but name.

These arguments make it impossible to hold a rational conversation about building regulatory frameworks. AI does carry risks, but they are not existential. Government officials should recognize that restrictions on AI impose costs of their own, and any new policies should avoid stifling research and development.

Instead of allowing incumbents to craft restrictive rules that keep out new market entrants, policy professionals should work with a broad coalition of partners to create a self-policing system grounded in sound game theory: a self-balancing system in which risks are minimized without entrenching the firms that already dominate the market.

Beware of overhyped doomsday scenarios; they’re hardly better than blind guesses

Everyone seems to have an “expert” judgment on the future of AI and humanity. For example, proponents of AI doomsday scenarios argue that a rogue AI programmed with a simple task, such as making paper clips, could optimize that task beyond its creators’ intent. The AI could decide that humans are an obstacle to obtaining more resources for making paper clips, then use its immense intelligence to kill every human on earth, converting the entire planet, and eventually the solar system, into a paper-clip-producing machine. As absurd as this sounds, those who argue that AI poses an existential risk to humanity frequently veer into science fiction.

Yet, many who espouse these doomsday beliefs are leading experts in the field, not enthusiastic amateurs. Should we heed their warnings? Not exactly.

Philip Tetlock and colleagues’ research shows a clear pattern of experts underperforming generalists in forecasting competitions, with those who have the most detailed knowledge often overestimating the odds of catastrophe by several orders of magnitude. Whether we ask nuclear physicists about the odds of apocalyptic atomic war or AI researchers about the threat of extinction, the pattern holds: domain experts overestimate risk compared to non-experts and are less accurate in their forecasts. We still need experts, of course; we would all prefer a skilled professional piloting our commercial flight over a hobbyist.

Instead, we should acknowledge the dynamism and rapid change within AI research. Geoffrey Hinton, a pioneer of neural networks and a driving force behind today’s generative AI renaissance, said recently, “My main message is there’s enormous uncertainty about what’s going to happen next.”

With that uncertainty in mind, he points to three risks (which I have written about previously): disinformation, mass unemployment, and autonomous warfare. As Hinton suggests, the latter two are hypothetical and difficult to forecast.

Focusing our efforts on the societal risks and negative externalities of AI today and in the near future lets us sidestep the forecasting debate altogether.

The present-day risks from generative AI are limited

The conversation turns almost banal when we move from the hypothetical risks of intelligent software to the present-day, real-world negative externalities. Fabricated images, videos, and text bearing convincing likenesses of celebrities are certainly a nuisance. However, they are merely an iterative improvement on earlier technology such as Photoshop.

An illustration of a woman holding a phone against the backdrop of a city.

New tools can create images and videos at the touch of a button, lowering the barrier to entry and allowing anyone, including young people, to produce harmful content. Cyberbullying and online harassment are already a burden on society, with over half of teen girls reporting that they have been targets of abuse. It takes little imagination to see how a technology that produces convincing fake photos in 30 seconds on a smartphone would turbocharge such abuse. Still, as alarming as this change is, it represents another step in an ongoing trend, not a new paradigm.

Similarly, the threat of state-sponsored disinformation campaigns seems to be a matter of scale, not of new capabilities. Sophisticated actors have had the means to produce doctored photos, videos, and text for decades. The real harm lies in the failure to address the conditions that make people susceptible to disinformation in the first place. Dr. Erik Nisbet and colleagues, leading researchers on public diplomacy, write:

The scientific evidence about disinformation strongly suggests that belief in false or misleading information is driven more so by individual emotional and cognitive biases, sometimes amplified by macro social, political, and cultural trends, than specific information technologies.

In other words, the foundation that allows a state actor to promulgate a disinformation campaign successfully rests on pre-existing grievances among a minority population, disenfranchisement, or perceived threats to dominance.

Many non-state actors resort to simple technologies to spread propaganda, without the need for complex synthetic media. For example, X, formerly known as Twitter, is rife with false or misleading information about the ongoing conflict in Israel and Gaza. One viral video purporting to show Hamas militants shooting down an Israeli helicopter is in fact footage from the PC game Arma 3. Why resort to more complex synthetic media when simply relabeling old footage or splicing together video game capture is enough to spread propaganda?

When we weigh this muted marginal risk to society against the opportunity cost of curtailing research into new AI techniques, the tradeoff seems clear: we should be more concerned about technology companies erecting barriers to entry than about synthetic media.

When incumbents capture regulation, everyone loses

The risk of regulatory capture is real. It allows established players to skew rules to their advantage, stifling innovation and competition. The concept, popularized by Nobel laureate economist George Stigler in the 1970s, posits that incumbents will leverage their influence over the administrative state to erect barriers to entry.

Incumbents have an incentive to keep new competitors, especially disruptive and innovative ones, from entering the market. Modern examples of regulatory capture and rent-seeking behavior abound, from the pharmaceutical industry to energy. Economist Thomas Sowell writes, “Although a free market economic system is sometimes called a profit system, it is in reality a profit-and-loss system — and the losses are equally important for the efficiency of the economy.” In other words, creative destruction is a fundamental part of a well-functioning free market economy. Without incumbents being displaced by innovative new rivals, the cycle of continuing improvements breaks, and stagnation sets in.

Given a firm’s incentive to prevent rivals from entering a market, it comes as little surprise that OpenAI’s CEO Sam Altman calls for more regulation, or that disgraced FTX founder Sam Bankman-Fried lobbied for crypto regulation before his allegedly fraudulent scheme came apart. The latter appears, according to insiders, to have been a blatant attempt to force competitors into a costly regulatory framework.

No regulatory intervention in AI is without trade-offs. Alex Chalmers, a London-based venture capitalist, and his co-author Nathan Benaich make this point well: greater regulatory burdens on an industry, even when warranted, inevitably slow capital investment and innovation. The nuclear power industry is a prime example. The technology saw steady improvements through the 1970s until a thicket of complex regulation slowed the flow of capital and talent into the sector, according to technologist and author J. Storrs Hall. Chalmers and Benaich note that the European Union’s draft AI regulations could entrench incumbents while smothering open-source initiatives in their infancy.

Prescriptive regulations are like pruning shears, snipping off branches of potential futures and depriving society of untold benefits. The lost productivity and progress might be invisible to us now, but they are real.

A better way to mitigate risk using game theory

While accurately estimating the harm from generative AI may be difficult, the public sector can still engineer systems that encourage positive externalities (such as sharing knowledge through open-source methods) while mitigating negative ones (such as large language models that produce harmful content). Diplomats will have a central role, bringing together multinational businesses, governments, and international organizations and aligning them around a common line of effort.

If rigid regulatory structures tend to stifle innovation, what framework can the public sector employ to cap the downsides of generative AI? I turned to London Business School professor and game theory expert JP Benoit to ask how the discipline can inform policymakers.

First, why would AI companies produce models that create harmful content? The reputational fallout from a chatbot that delivers hateful or extremist output is well documented, so creating harmful tools would seem to go against a firm’s interest.

Professor Benoit urges caution with this line of reasoning.

“You may be in a race-to-the-bottom situation,” he notes, “for example, looking at professional sports, you might ask, why would an athlete take steroids?”

Professional competition creates a prisoner’s dilemma: every athlete would be better off avoiding steroids, yet each is compelled to take them or risk being the only one who abstains and underperforms. One could imagine a similar dynamic among large language model providers, where smaller players loosen restrictions on use to attract more customers, dragging the rest of the industry down with them.

Designing a system that encourages cooperation and avoids the prisoner’s dilemma may instead be possible if there is enough transparency. For example, having generative AI companies voluntarily submit to an ethics code could work if defectors who breach the agreement are easily identified. Under the auspices of central governments and international organizations, AI companies could implement monitoring and punitive measures, adding a cost to defection and creating a deterrent. Firms whose LLMs spout hateful propaganda could be delisted from search rankings and app stores, limiting their reach. Cutting these tools off from advertising revenue and the broader financial system would constrain them further.
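To make the logic concrete, here is a minimal sketch of the dynamic Benoit describes. The payoff numbers, the detection probability, and the penalty are purely illustrative assumptions of mine, not figures from the interview; the point is only to show how a credible, observable penalty can flip the dominant strategy from defection to cooperation.

```python
# A toy two-firm game, with made-up payoffs, illustrating the race to the bottom
# and how monitoring plus penalties can undo it. All numbers are assumptions.

# Each firm chooses to "restrict" (keep safety guardrails) or "loosen" them.
# payoffs[(mine, theirs)] is the first firm's payoff.
payoffs = {
    ("restrict", "restrict"): 3,  # both keep guardrails: healthy, trusted market
    ("restrict", "loosen"):   1,  # the rival loosens and steals market share
    ("loosen",   "restrict"): 4,  # loosening alone wins extra customers
    ("loosen",   "loosen"):   2,  # everyone races to the bottom
}

def best_reply(theirs, detection_prob=0.0, penalty=0.0):
    """Payoff-maximizing choice against a rival's choice, given the expected
    cost of being caught defecting (delisting, demonetization, and so on)."""
    def value(mine):
        expected_penalty = detection_prob * penalty if mine == "loosen" else 0.0
        return payoffs[(mine, theirs)] - expected_penalty
    return max(("restrict", "loosen"), key=value)

# Without enforcement, loosening is the dominant strategy: the dilemma.
print([best_reply(t) for t in ("restrict", "loosen")])
# ['loosen', 'loosen']

# With transparent monitoring and a stiff penalty, restricting dominates.
print([best_reply(t, detection_prob=0.8, penalty=3) for t in ("restrict", "loosen")])
# ['restrict', 'restrict']
```

In this toy setup the deterrent works only because defection is detectable and the expected penalty outweighs the commercial gain from loosening guardrails, which is exactly why transparency and credible enforcement, rather than the code of ethics itself, carry the weight.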

Detractors may be quick to lampoon “self-regulation” in the tech industry. Yet a system modeled on sound game theory, in which firms are incentivized to abide by a set of principles and face stiff penalties for defecting, is far from industry-led self-policing. The public sector and a wide array of stakeholders must lead the design of any framework to de-risk generative AI, not industry giants alone.

Instead of fearing the harms of generative AI, we should fear what we might lose if we squander the opportunity to put generalizable intelligence in everyone’s pocket. Discussions on this topic are asymmetrical: the costs of state-led intervention are often portrayed as zero, while the dangers of AI are painted as existential. Public policy practitioners should swear their own Hippocratic oath, pledging to do no harm before proposing rules that would burden society while entrenching incumbents.

Zed Tarar advises startups and is an MBA candidate at London Business School, where he specializes in the intersection of technology and policy. He has worked in five countries as a U.S. diplomat.

Disclaimer: While Zed Tarar is a U.S. diplomat, the views expressed here are his own and do not necessarily reflect those of the Department of State or the U.S. government.

For more on AI and diplomacy, check out some of Tarar’s previous articles.
