Can governments control the future of AI? Looks like they’re going to try

Taylor Armerding
Published in Nerd For Tech
Apr 1, 2024

For coming on 18 months now, since the release of OpenAI’s ChatGPT and a slew of competing chatbots, there have been nonstop dueling predictions about whether artificial intelligence (AI) will create utopia or dystopia, cultural and community bliss or destruction, nirvana or hell on earth.

Or, as Gordon LaForge, senior policy analyst at New America, put it in a recent roundtable discussion, “it is far from clear what AI will mean for society — or even what it is and how to think about it now. The metaphors out there are polarizing and at times extreme.”

“Depending on who one talks to, generative AI (GenAI) might be a solicitous personal assistant or a cutthroat management consultant; a form of social collaboration or a sentient being; ‘Moore’s Law for everything’ or a nuclear weapon,” he wrote.

Which should not be a surprise. AI is a tool. Like any tool, it can and will be used for good and evil. It’s just that AI has more power and reach than most tools. “Evil on steroids” vastly understates the ominous possibilities when something has global reach.

Which is also why an increasing amount of the intense discussion, debate, and maneuvering about AI is focused on how to prevent, or at least limit, what bad people can do with it.

In just the past few weeks:

  • New America convened the roundtable noted above, titled Power and Governance in the Age of AI, which invited “experts in international relations, computer science, and technology policy [to] share their thinking on how governments and institutions should navigate AI to harness its strengths and mitigate its risks.”
  • The Biden White House announced new rules for government use of AI. Starting Dec. 1, agencies will be required to verify that the AI tools they use don’t endanger the rights and safety of Americans through things like biased results. Agencies will also be required to publish a list of the AI systems they use, an assessment of the potential risks from those systems, and a plan for managing those risks. The rules are a response to President Biden’s Oct. 30, 2023, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
  • Reuters reported that the United Nations General Assembly unanimously adopted the first global resolution on AI, which “encourages countries to safeguard human rights, protect personal data, and monitor AI for risks.” All worthy goals. Brad Smith, vice chair and president of Microsoft, wrote on X that the resolution “marks a critical step toward establishing international guardrails for the ethical and sustainable development of AI, ensuring this technology serves the needs of everyone.” It’s just that, as Ars Technica noted, “Being a nonbinding agreement [it is] thus effectively toothless.”
  • Bruce Schneier, author, blogger, chief of security architecture at Inrupt, Inc. and self-described public interest technologist, in an essay for the New America roundtable, called for a public AI option, “not to replace corporate AI but to serve as a counterbalance — as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete,” he wrote.
  • The EU passed the AI Act, the world’s first comprehensive regulation of AI systems. According to the EU, the Act assigns applications of AI to three risk categories. “Applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Last, applications not explicitly banned or listed as high-risk are largely left unregulated.”

All these proposals and initiatives are well intended, but they also raise an obvious question: Is any of this possible? When it comes to AI, you can pick your metaphor — the horse is out of the barn, the train has left the station, Pandora’s box is wide open, the genie is out of the bottle, etc. Plus, when it comes to any tool or technological advance, criminals have never felt constrained by resolutions, regulations, or guardrails.

Even the call for equity in access to AI tools seems a bit late. Allison Stanger, a professor at Middlebury College and a participant in the New America roundtable, wrote that she told her students they were welcome to use the free version of ChatGPT, but not the paid subscription model, “for purposes of equity” — it wouldn’t be fair for only the “privileged” to be able to use the better tool.

It’s just that ChatGPT isn’t the only GenAI tool out there — any student who could afford it could easily get access to various others.

Exponential change

Yes, as a disruptive technological force AI is still in its infancy — LaForge described it as “just the next phase of the decades-long disruptions of the digital revolution” that includes the internet, social media, big data, and autonomous machinery, including weapons. But then he added, “AI is also different. The pace of change is not just rapid but exponential.”

And as everybody knows, nobody has ever described any governmental pace of change as “exponential.” So would any government, or group of governments, really be able to move quickly enough to control both the inequitable and malicious uses of AI? They don’t have a stellar track record controlling cybercrime or the surveillance power of big data so far.

There are mixed views on that.

Beth Linker, director of product management with the Synopsys Software Integrity Group, agrees that “government generally moves more slowly than tech, and that’s particularly true when it comes to AI.” But Linker adds that “government regulation is not supposed to get deep into the weeds. It is supposed to provide a broad framework for interpretation.”

Curtis Wilson, staff data scientist with the Synopsys Software Integrity Group, agrees, saying that “well-written regulation should be abstract enough to cover a wide range of cases and anticipate future changes.”

Indeed, the goal of government regulators is to order a result but not prescribe how to achieve it — that’s left up to the private sector.

Schneier’s view, as noted earlier, is that government can offset what he calls the “ominous” centralized control of GenAI by Big Tech with a public model that would bring numerous benefits, among them resolving contentious legal issues such as the use of copyrighted works to train AI models.

In his essay he wrote that the public model could “serve as an open platform for innovation, on top of which researchers and small businesses — as well as mega-corporations — could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.”

That, he added, would lead to an AI system where “transparency, political oversight, and public participation can, in principle, guarantee more democratically aligned outcomes than an unregulated private market.”

To which several of his blog readers responded with a collective, skeptical “good luck with that,” arguing that transparency and accountability are essentially nonexistent in government.

A comment by “Guerin” was typical, contending that “the U.S. federal government does nothing well or efficiently.”

Rich with risks

Sammy Migues, principal at Imbricate Security, is a bit less skeptical but agrees that there are numerous risks with AI. “Like the internet and other means of mass communication, AI today has the power to be a weapon of mass disinformation” that includes fake images, video, voices, text, and more, he said. He added that the technology “can provide entire software modules, workflows, checklists, recipes, and so on for both good and nefarious purposes. In short, AI today makes it much easier to do socially and legally unacceptable things, which means such things will happen more often.”

He said part of the reason regulation doesn’t keep up with big technological advances is that “they are almost always big in retrospect, not years in advance.”

It’s also not in the nature of tech pioneers to ask for permission to do things that might be legally fuzzy. “Could they have asked if it was OK to hoover up every bit of public — and perhaps not so public — data on the planet? Yeah, probably,” Migues said. “Could they have asked if it was OK to have AI process employee applications, mortgage applications, hospital records, traffic records, and almost everything else on the planet and make highly informed and yet totally uninformed decisions that directly affect people’s lives? Yeah, probably.”

But they didn’t, of course. So he suspects that “other than things that are clearly illegal, it’ll take years for ethicists, philosophers, diplomats, and others to provide actionable guidance for AI vendors.”

Regulatory friction needed

Sarah Myers West, managing director of the AI Now Institute, a former senior adviser on AI at the U.S. Federal Trade Commission, and another participant in the New America roundtable, is also dubious about governments’ capability to control AI. “Can any single nation amass sufficient regulatory friction to curb unaccountable behavior by large tech firms? If so, how?” she wrote.

But Wilson thinks that while it might soon be too late, there is still time to build some guardrails into the use of GenAI. “Currently, most products that use GenAI tools do so in a very shallow way — a small chatbot, a little add-on. If they had to take it down tomorrow, it would not be a big deal to the product. Current GenAI products can easily adapt to new regulations,” he said.
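
To make that point concrete, here is a minimal sketch of the kind of shallow integration Wilson describes: a chatbot add-on sitting behind a feature flag, calling a generic chat-completion endpoint through one thin wrapper function. Everything in it (the flag, the endpoint URL, the response format) is a hypothetical assumption for illustration, not a reference to any particular product or vendor API.

```python
import os
import requests  # assumes the third-party "requests" package is installed

# Hypothetical feature flag: the GenAI add-on can be switched off without
# touching the rest of the product, e.g. to comply with a new regulation.
GENAI_ENABLED = os.getenv("GENAI_CHATBOT_ENABLED", "false").lower() == "true"

# Placeholder endpoint for a generic chat-completion API; swapping providers
# (or dropping the feature entirely) only touches this thin wrapper.
CHAT_API_URL = os.getenv("CHAT_API_URL", "https://api.example.com/v1/chat")
CHAT_API_KEY = os.getenv("CHAT_API_KEY", "")


def answer_support_question(question: str) -> str:
    """Return a chatbot answer, or a canned fallback if the add-on is off."""
    if not GENAI_ENABLED:
        return "Chat assistance is unavailable. Please see our FAQ or contact support."

    response = requests.post(
        CHAT_API_URL,
        headers={"Authorization": f"Bearer {CHAT_API_KEY}"},
        json={"messages": [{"role": "user", "content": question}]},
        timeout=10,
    )
    response.raise_for_status()
    # Assumes the hypothetical API returns a JSON body like {"reply": "..."}.
    return response.json()["reply"]


if __name__ == "__main__":
    print(answer_support_question("How do I reset my password?"))
```

Because the GenAI call is isolated behind a single flag and one function, “taking it down tomorrow,” or reworking it to satisfy a new rule, would barely touch the rest of the product, which is Wilson’s point about why current products can still adapt.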

He also believes the “perceived advancement of AI is sometimes faster than the actual advancement. The GPT line of models is based on research from 2017. GitHub Copilot [an AI coding assistant], which most people started talking about in 2023, was actually released in 2021. Both had been in development for much longer,” he said. “So don’t confuse the rate at which they rose to popularity as being the rate at which they were developed.”

There are also calls for open source GenAI models, given that open source software has made software products more accessible to companies of any size. At present, more than 75% of code components in modern software products are open source.

Wilson said open source is already deeply embedded in AI through the large language models (LLMs) that power GenAI tools. “Many of the biggest LLMs are already open source,” he said. “They generally have large companies behind them that open source the model because they are interested in building apps that use AI rather than selling AI itself. In that case having more eyes and a community of developers working on and improving the model is an incredible advantage to them.”
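
As a rough illustration of that accessibility (an assumption-laden sketch, not something from the article), this is approximately what it takes to run an openly released model locally, assuming the Hugging Face transformers library with a PyTorch backend and enough memory for the weights; the model name and prompt below are just examples.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library and an
# openly released model whose weights can be downloaded from the model hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weight model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Tokenize a prompt, generate a continuation, and decode it back to text.
prompt = "Explain what an open-weight language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Anyone who can download the weights can inspect them, fine-tune them, or build on them, which is the “more eyes” advantage Wilson describes.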

But accessible and controllable are two different things. Migues agrees that “if you’re trying to make the technology more available to everyone, then an open source model is great.”

The caveat is that “if you’re trying to manage the downstream effects of subtle societal manipulation over time, then you’ll need another answer.”

I’m a security advocate at the Synopsys Software Integrity Group. I write mainly about software security, data security and privacy.