Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control

Adam Thierer
23 min read · May 29, 2023


Source: Bojan Tunguz

Just nine days after OpenAI CEO Sam Altman testified before the U.S. Senate Judiciary Committee on May 16th and called for an ambitious, but somewhat ambiguous, new regulatory agency and licensing regime for artificial intelligence (AI), Microsoft filled in the details with its May 25th release of a new white paper, “Governing AI: A Blueprint for the Future.” The Microsoft AI regulatory plan, taken together with OpenAI’s short “Governance of superintelligence” blog post from May 22nd, envisions a comprehensive computational control regime to address concerns about powerful AI systems. The Microsoft-OpenAI approach could become the new baseline in policy debates about the governance of AI and lead to heated policy battles over computational freedom more generally.

Some of the “AI regulatory architecture” that Microsoft proposes in its new Blueprint is fairly straightforward and less controversial, such as calls to utilize existing regulatory authority or to take greater transparency steps to address algorithmic concerns. The report also discusses other options to address AI worries, including public-private partnerships, multistakeholder approaches, and educational steps such as digital literacy and awareness-building mechanisms about risks. There are details to be worked out there, but many people (including me) have discussed how those approaches can serve as the basis of sensible AI governance. Thus, there’s much to like about most of what is in Microsoft’s AI Blueprint.

But Microsoft and OpenAI call on governments to go much further to regulate AI and limit supercomputing capabilities. The most important thing to understand about the Microsoft-OpenAI approach is that it is holistic. It seeks to have government exert control over the entire AI production stack, right down to the core computational capabilities of these systems. Applications, models, and data centers would all be regulated under their plan. And the regulations they call for would involve both ex ante mandates (licensing and other pre-market approvals) and ex post controls (such as various post-market monitoring and tracking requirements). This essay will explore these ideas and similar proposals for AI regulation through holistic control of computing and computation more generally.

AI Licensing + A Computational Control Commission

The most controversial portions of the Microsoft AI Blueprint are found on pages 19–21 of their new white paper. Building on Sam Altman’s May 16th Senate testimony, Microsoft calls for the creation of a licensing regime for “highly capable models at the frontiers of research and development,” as well as “the establishment of a new regulator to bring this licensing regime to life and oversee its implementation.” Their “multitiered licensing regime” would include regulations requiring:

· advance notification/approval of large training runs;

· comprehensive risk assessments focused on identifying dangerous or breakthrough capabilities;

· extensive pre-release testing by internal and external experts; and,

· ongoing post-release systems monitoring.

Microsoft calls for data centers to be treated “much like the regulatory model for telecommunications network operators,” but it would also add a heavy dose of financial services regulation on top. “To obtain a license, an AI datacenter operator would need to satisfy certain technical capabilities around cybersecurity, physical security, safety architecture, and potentially export control compliance,” the report notes. A series of so-called “KY3C” regulations would apply: “Know Your Customer,” “Know Your Cloud,” and “Know Your Content.” Again, these obligations would entail pre- and post-market monitoring mandates under the new licensing regime for both AI model builders and data centers.
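To make the KY3C idea more concrete, here is a minimal sketch (in Python, my choice) of what such gating might look like if a licensed data center operator had to enforce it in software. Everything in the snippet, including the registry names and the three checks, is my own illustrative assumption; the Blueprint describes these obligations only at the policy level.

```python
# Hypothetical illustration of "KY3C" gating ("Know Your Customer," "Know Your Cloud,"
# "Know Your Content") for a licensed AI data center operator. All names and rules here
# are invented for illustration; Microsoft's Blueprint specifies no implementation details.
from dataclasses import dataclass

@dataclass
class DeploymentRequest:
    customer_id: str       # Know Your Customer: verified identity of the developer
    cloud_provider: str    # Know Your Cloud: which licensed data center hosts the workload
    content_labeled: bool  # Know Your Content: AI-generated output is disclosed as such

# Hypothetical registries a regulator might require operators to maintain.
VERIFIED_CUSTOMERS = {"example-verified-developer"}
LICENSED_CLOUDS = {"example-licensed-datacenter"}

def ky3c_gate(req: DeploymentRequest) -> bool:
    """Return True only if all three 'know your...' checks pass."""
    return (
        req.customer_id in VERIFIED_CUSTOMERS
        and req.cloud_provider in LICENSED_CLOUDS
        and req.content_labeled
    )

if __name__ == "__main__":
    req = DeploymentRequest("example-verified-developer", "example-licensed-datacenter", True)
    print("deployment permitted:", ky3c_gate(req))
```

Even this toy version hints at the compliance machinery involved: someone has to maintain the customer and cloud registries, and someone has to decide what counts as properly labeled content.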

But which specific developers and data centers would be covered? This is where things get tricky, and Microsoft acknowledges the challenge. The company says “developers will need to share our specialized knowledge about advanced AI models to help governments define the regulatory threshold.” Typically, most industry-specific laws and regulations are triggered by firm size, usually measured by market capitalization or employee count. In this case, Microsoft and OpenAI are instead suggesting that the regulatory threshold will be measured by overall compute potential, with “powerful” new AI models or “highly capable AI foundation models” and “advanced datacenters” being the ones licensed and regulated. This is very important because, as will be discussed later, it means that new entrants and open source providers could be covered by the new regulations immediately.

But, by which measure shall we make the regulatory determination of what counts as “powerful” AI or “advanced” compute? “Defining the appropriate threshold for what constitutes a highly capable AI model will require substantial thought, discussion, and work in the months ahead,” Microsoft says. They say policymakers should “start with the best option on offer today — a compute-based threshold — and commit to a program of work to evolve it into a capability-based threshold in short order.”
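For a sense of what a compute-based trigger might look like in practice, here is a minimal sketch. The 10^26 FLOP cutoff is a purely hypothetical number of my own choosing, and the rough 6 x parameters x tokens rule of thumb is just a common approximation of dense transformer training compute; the Blueprint commits to no specific figure or formula.

```python
# Hypothetical sketch of a compute-based licensing threshold. The cutoff value and the
# estimation heuristic are assumptions for illustration only; Microsoft's Blueprint does
# not specify either one.
LICENSING_THRESHOLD_FLOP = 1e26  # assumed regulatory cutoff for a "highly capable" model

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough rule of thumb (~6 * N * D) for dense transformer training compute."""
    return 6.0 * n_parameters * n_training_tokens

def requires_license(n_parameters: float, n_training_tokens: float) -> bool:
    """Would this training run fall under the hypothetical licensing regime?"""
    return estimated_training_flop(n_parameters, n_training_tokens) >= LICENSING_THRESHOLD_FLOP

# Example: a hypothetical 70-billion-parameter model trained on 2 trillion tokens comes
# out to roughly 8.4e23 FLOP, well under this particular cutoff.
print(requires_license(70e9, 2e12))  # False
```

The one-line comparison is the easy part; the hard questions are where to set the cutoff, how to verify the inputs, and how to evolve the test into the capability-based threshold Microsoft says it ultimately wants.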

Academic Calls for More Comprehensive Computational Control

Various academics have been exploring similar options for regulating AI and supercomputing. This recent article from Lennart Heim, a researcher at the Centre for the Governance of AI in Oxford, provides a good overview of the different ways of formulating compute governance. He points to “the unique properties and state of compute affairs that make it a particularly governable node for AI governance,” and he then goes on to identify a number of existing and new regulatory mechanisms and strategies for controlling AI through the regulation of powerful compute — especially for chips and data centers. These can include the broader application of export controls, various supply chain regulations, or limitations on training runs and other hardware-based restrictions.

These and other full-stack regulations were also detailed in a new essay on “12 tentative ideas for US AI policy” written by Luke Muehlhauser, a Senior Program Officer for AI Governance and Policy with Open Philanthropy. Of course, Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford, has long endorsed steps such as these, but has also gone even further and suggested sweeping worldwide surveillance efforts will be needed on AI research and development efforts.

Source: Lennart Heim

Meanwhile, in a new paper on “Model Evaluation for Extreme Risks,” over 20 AI governance experts outline some ways to define the “capability-based threshold” that Microsoft suggests we need to adopt for purposes of regulating compute. Beyond just looking at the overall power of the underlying model or supercomputing centers, the nine risk variables the paper identifies would also be considered as potential triggers for regulatory requirements. Needless to say, many of these categories are quite open-ended and would entail complicated and contentious definitional disputes in their own right (including speech-related matters surrounding what is meant by persuasion, manipulation, disinformation, and political influence). I’ll have more to say about these problems in future essays.

But a more extreme variant of this sort of capability-based regulatory plan would see all high-powered supercomputing or “frontier AI research” done exclusively within government-approved or government-owned research facilities. Under such schemes, AI and supercomputing systems and capabilities would essentially be treated like bioweapons and confined to “air-gapped data centers,” as Samuel Hammond of the Foundation for American Innovation calls them. His “Manhattan Project for AI” approach “would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.” He says that “high risk R&D” would “include training runs sufficiently large to only be permitted within secured, government-owned data centers.” In his own words, this plan:

would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an ‘air gap,’ a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities could be overseen by the Department of Energy’s Artificial Intelligence and Technology Office, given its existing mission to accelerate the demonstration of trustworthy AI.

Importantly, Hammond suggests that this is the best way to accelerate computational science within the United States. By contrast, most other pundits and academics who have floated similar ideas seem far more interested in slowing such underlying capabilities — and doing so through coordinated international control of high-powered computational models and/or data centers.

Writing in the Financial Times recently, Ian Hogarth calls “for governments to take control by regulating access to frontier hardware.” To limit what he calls “God-like AI,” Hogarth proposes that such systems be contained on an “island,” which again involves “air-gapped” data centers. By “God-like AI,” he means artificial general intelligence (AGI) systems that exceed human intelligence. Hogarth says that, under this scheme, “experts trying to build God-like AGI systems do so in a highly secure facility: an air-gapped enclosure with the best security humans can build. All other attempts to build God-like AI would become illegal; only when such AI were provably safe could they be commercialised ‘off-island.’”

Source: Ian Hogarth

Hogarth says we can think of this scheme as a sort of “CERN for AI,” a reference to the world’s largest particle physics laboratory, based in Switzerland. But, in reality, he’s really talking about a far more comprehensive form of “strict international regulation” aimed at “removing the profit motive from potentially dangerous research and putting it in the hands of an intergovernmental organisation.” That is well beyond the scope of what CERN does today.

(Fantasy) Island Thinking

This “AI island” idea is probably better thought of as AI “fantasy island,” as Competitive Enterprise Institute regulatory analyst James Broughel argued in a recent Forbes column. Broughel said Hogarth’s proposal highlights “the outlandish nature of a precautionary approach to regulating AGI.” Hogarth himself notes that, “[p]ulling this off will require an unusual degree of political will, which we need to start building now.”

That might be the understatement of the year. As I will detail in a big new R Street Institute report on “AI arms control” that is due out in a couple of weeks, such proposals represent wishful thinking in the extreme. It’s highly unlikely that anyone is going to agree to anything like this. Governments, academic institutions, labs, and companies have invested billions in building out their supercomputing capacity for a broad range of purposes, and they are not about to surrender it all to some hypothetical global government AI super-lab. And, once again, no matter how hard we try to draw up neat regulatory distinctions and categories, it is going to be very hard in practice to figure out which foundation models and data centers get classified as having “highly capable” or “advanced” capabilities for purposes of figuring out what’s inside and outside the walls of the “AI Island.”

But, for the sake of argument, let’s ask: what sort of global governance body would run this system? I suppose that the United Nations is the most likely contender for the job. As I’ll note in my forthcoming report on AI arms control, the U.N.’s history with nuclear and biological arms-control efforts probably does not bode well for AI computational control efforts. Ignore, for the moment, the fact that non-state actors (such as terrorist groups) will not agree to be bound by such restrictions. The bigger problem is rogue states or nations that simply refuse to abide by the terms of such agreements and treaties even after signing onto them. This is the issue the world faces with chemical and nuclear nonproliferation efforts today — and not just with states like North Korea and Iran.

Consider, for example, the U.N.’s 1972 Biological Weapons Convention (BWC). The former Soviet Union signed it and then promptly told its scientists to ignore it, going on to secretly develop biological weapons on a massive scale. Russia still mostly ignores the treaty today. South Africa and Iraq were also later revealed to have violated the BWC.

More shockingly, the UN last year allowed North Korea to take over as head of the organization’s Conference on Disarmament, even though, according to the Arms Control Association, the U.N. Security Council “has adopted nine major sanction resolutions on North Korea in response to the country’s nuclear and missile activities since 2006.” Moreover, North Korea withdrew from the nuclear “Treaty on the Non-Proliferation of Nuclear Weapons” in 2003. Even routine nuclear monitoring efforts often fail. In early 2023, the International Atomic Energy Agency (IAEA) reported that 10 drums containing approximately 2.5 tons of natural uranium previously being tracked in Libya had gone missing. If controlling physical weapons or dangerous materials is this challenging, it is hard to imagine how controlling algorithmic systems would be any easier.

Meanwhile, through their permanent seats on the U.N. Security Council, China and Russia can single-handedly hold up progress on any sort of declaration calling for collective action, even on the most mundane resolutions, like those opposing Russia’s unprovoked war against Ukraine. Adding insult to injury, in April 2023, the UN allowed Russia to assume the rotating presidency of the Security Council despite its continued war against Ukraine.

Finally, while critics sensibly decry “the illogic of nuclear escalation,” the threat of mutual destruction has not stopped major governments from continuing to spend lavishly on nuclear weapons. In fact, in 2022, Congress approved $51 billion in spending for nuclear weapons with President Joe Biden’s blessing. Meanwhile, Russia recently suspended its participation in its last remaining nuclear arms control agreement with the United States.

Thus, creating new institutions, treaties or declarations focused on AI existential risk likely would not have better outcomes than we’ve seen for these previous threats. It’s almost impossible to believe that China, Russia or even the United States would ever go along with any plan to centralize powerful AI research in an independent body far away from their shores. And even if they did agree to it, they’d continue developing powerful algorithmic (and robotic) systems covertly.

Even the “Manhattan Project for AI” proposal, which just tries to bottle things up at the national level in the U.S., is likely to fail. There’s no way America is going to essentially nationalize the entire supercomputing capacity of the country and put it all under the control of the Department of Energy, or some other computational control body. And, even if we did, good luck getting congressional appropriations sufficient for the job of making it work as advocates desire.

To be clear, Microsoft and OpenAI aren’t proposing we go quite this far, but their proposal raises the specter of far-reaching command-and-control regulation of anything that the government defines as “highly capable models” and “advanced datacenters.” Don’t get me wrong, many of these capabilities worry me as much as they worry the people proposing comprehensive regulatory regimes to control them. But their preferred solutions are not going to work. The scholars and companies proposing these things have obviously worked themselves into quite a lather worrying about worst-case scenarios and then devising grandiose regulatory schemes to solve them through top-down, centralized design. But we are going to have to find more practical ways to muddle through using a more flexible and realistic governance toolkit than clunky old licensing regimes or stodgy bureaucracies can provide.

Some critical infrastructure and military systems will absolutely need to be treated differently, with limits on how much autonomy is allowed to begin with and “humans in the loop” whenever AI/ML tech touches them. But for most other systems and applications we’ll need to rely on a different playbook so as not to derail important computational advances and beneficial AI applications. Endless red-teaming and reinforcement learning from human feedback (RLHF) will be the name of the game, entailing plenty of ex post monitoring and adjustment. We’ll also have to turn algorithmic systems against other algorithmic systems to find and address algorithmic vulnerabilities and threats.
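As a rough illustration of that last point, here is a hypothetical sketch of an automated red-teaming loop in which one model proposes adversarial prompts, the deployed model responds, and a judge model flags policy violations for ex post review. The callables are placeholders of my own invention, not any vendor's actual API, and real red-teaming and RLHF pipelines are considerably more involved.

```python
# Hypothetical sketch of "algorithms watching algorithms": one model attacks, another is
# tested, a third judges. The callables are placeholders, not a real vendor API.
from typing import Callable, List, Tuple

def red_team_loop(
    attacker: Callable[[str], str],      # turns a seed topic into an adversarial prompt
    target: Callable[[str], str],        # the deployed model under test
    judge: Callable[[str, str], bool],   # True if the (prompt, response) pair violates policy
    seed_topics: List[str],
) -> List[Tuple[str, str]]:
    """Collect flagged (prompt, response) pairs for ex post monitoring and RLHF feedback."""
    flagged: List[Tuple[str, str]] = []
    for topic in seed_topics:
        prompt = attacker(topic)
        response = target(prompt)
        if judge(prompt, response):
            flagged.append((prompt, response))
    return flagged
```

In practice, the flagged pairs would feed incident reports and the next round of fine-tuning rather than simply piling up in a list, but the basic feedback loop is the point.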

Many existing regulations and liability norms will also evolve to address risks. They already are, as I documented in my recent long report on “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.” Finally, the role of professional associations (such as the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the International Organization for Standardization) and multistakeholder bodies and efforts (such as the Global Partnership on Artificial Intelligence) will also be crucial for building ongoing communication channels and collaborative fora to address algorithmic risks on a rolling basis.

The Problem with Cozy Regulatory Relationships

I want to drill down a bit more on the idealistic thinking that surrounds grandiose proposals about AI governance and consider how it will eventually collide with real-world political realities. Microsoft’s Blueprint for AI regulation assumes a benevolent, far-seeing, hyper-efficient regulator. The white paper spends no time seriously discussing the downsides of a comprehensive licensing regime administered by a hypothetical Computational Control Commission, or whatever we end up calling it. A new AI regulatory agency was floated in the last session of Congress as part of the “Algorithmic Accountability Act of 2022.” The measure proposed that any larger company that “deploys any augmented critical decision process” would have to file algorithmic impact assessments with a new Bureau of Technology lodged within the Federal Trade Commission (FTC). So, it’s possible that a new AI regulatory agency could come to possess both licensing authority and broad-based authority to police “unfair and deceptive practices.” It could eventually be expanded to include even more sweeping powers.

Under its proposed new licensing regime, Microsoft hopes to “establish a framework for close coordination and information flows between licensees and their regulator.” It sounds good in theory, but let’s consider what that might mean in practice. The upside of such a close regulator-regulatee relationship is that it could indeed help with the sharing of important information about system capabilities or vulnerabilities. But such a tight relationship could also lead to a cartelistic industry structure as regulators look to protect the handful of players under their control.

We have a lot of historical experience with the problem of regulatory capture. In his 1971 two-volume masterwork, The Economics of Regulation: Principles and Institutions, economist Alfred Kahn documented the problem with such a cozy relationship between regulators and regulated companies:

When a commission is responsible for the performance of an industry, it is under never completely escapable pressure to protect the health of the companies it regulates, to assure a desirable performance by relying on those monopolistic chosen instruments and its own controls rather than on the unplanned and unplannable forces of competition. […] Responsible for the continued provision and improvement of service, [the regulatory commission] comes increasingly and understandably to identify the interest of the public with that of the existing companies on whom it must rely to deliver goods. (pgs. 12, 46)

A self-described “good liberal Democrat,” Kahn was later appointed by President Jimmy Carter to serve as Chairman of the Civil Aeronautics Board in the late 1970s, and he promptly set to work with other liberals — including Sen. Ted Kennedy, Stephen Breyer, and Ralph Nader — to dismantle the anti-consumer aviation cartels that had been sustained through government licensing, entry controls, and price/rate-of-return regulation. Kahn believed that the cozy relationship between the regulators and regulated companies was so problematic that he then worked with President Carter to abolish his own agency!

I suspect similar problems would develop under a hypothetical Computational Control Commission. In fact, it strikes me that many of the academics and pundits floating licensing regimes and new bureaucracies for AI and compute today have very little experience with how such regulatory regimes operate in practice. They seem almost blissfully naive about how these systems actually work, and they have not bothered going through any of the academic literature on the costs and trade-offs associated with them — especially for the public, which is usually denied a greater range of life-enriching goods and services as a result.

Let’s consider how some of those trade-offs could affect AI innovators, including Microsoft and OpenAI.

Alfred Kahn and President Jimmy Carter

A Thought Experiment about OpenAI & the Computational Control Commission

If Microsoft really believes that highly capable AI models and data centers pose such an immediate threat, then perhaps their critics were right to lambast them for releasing ChatGPT-powered services into the wild back in February. To be clear, I do not think it was a mistake for Microsoft and OpenAI to launch their products without prior approval. In fact, I jumped to their defense when they came under fire and argued that the only way to really effectively “stress test” some of these models is through widespread use by the public itself (married up with constant developer red-teaming and corrective RLHF).

But with its new AI Blueprint, Microsoft is basically telling us that this decision should have been subject to a formal regulatory process and that it and OpenAI should have been required to obtain official licenses for ChatGPT tools, their integration into Microsoft products, and possibly even the underlying Azure data center compute capacity itself. Moreover, OpenAI’s recent move to launch a ChatGPT app in Apple’s App Store (as well as its earlier launch of 70 plug-ins) would both likely constitute violations of the new regulatory regime that Microsoft is floating. Had Microsoft’s proposed “AI regulatory architecture” already been in place, OpenAI might have been forced to have its lawyers and lobbyists submit some sort of petition for the right to operate “in the public interest.” Many, many months would then have gone by while the new AI regulatory agency considered the petition. Then five unelected bureaucrats at the new Computational Control Commission would eventually get around to ruling on the proposed innovations via a pre-market approval regulatory regime.

Six months to a year later, we might get a ruling (it would probably take much longer) and then maybe the bitterly divided AI bureaucracy would approve the new OpenAI or Microsoft app, but with a long list of caveats and “voluntary concessions” attached. Microsoft Azure data centers could possibly be required to submit formal transparency reports to the new AI regulator and have federal inspectors visit more regularly, regardless of what trade secrets that might compromise. Meanwhile, conservatives (at the agency, on Capitol Hill, and in media) would issue dissenting statements blasting Sam Altman’s “woke AI” as being biased against conservative values. (It has already happened, folks!) A fiery hearing would be next in which Microsoft and OpenAI execs are dragged before the cameras for a good public flogging.

Of course, a cynic might say that Microsoft is actually fine with this nightmarish regulatory scenario because the company has been enduring this sort of treatment for over 25 years across the globe. Microsoft probably knows how to weather a political storm better than any other tech company on the planet and likely treats it as just another cost of doing business. Google, Amazon, Apple, Meta, Nvidia, Tesla, and IBM could probably also handle the compliance costs and somehow survive this mass politicization of computing. It would severely distract from their other innovative efforts and future investments, but they would get through it the same way they have survived the GDPR regulatory nightmare across the Atlantic: by throwing armies of lawyers and compliance officers at the problem.

But smaller rivals, new entrants, and open source providers are absolutely dead in the water under this system. “Enacting a licensing regime now could also cement the dominance of industry incumbents like Google and OpenAI by making it harder for startups to create foundation models of their own,” argues Timothy B. Lee in his newsletter Understanding AI. To better understand how such regulations will raise rivals’ costs and create formidable barriers to AI entry and algorithmic innovation, make sure to read this excellent essay from economist Lynne Kiesling.

Open source AI would become the first major casualty of the new war on compute. Writing at Fortune, Jeremy Kahn notes that “by their very nature, those offering open-source AI software are unlikely to be able to meet Microsoft’s KYC [Know Your Customer] regime, because open-source models can be downloaded by anyone and used for almost any purpose.” But it’s not just the KYC mandates that would kill open source AI. Under the scheme Microsoft and others envision, the government would likely lean hard on licensed providers and data centers to limit or deny access by anyone in the open source community. The Economist puts things even more bluntly in a new essay entitled, “Why tech giants want to strangle AI with red tape: They want to hold back open-source competitors.” I think that headline goes a bit overboard, but The Economist gets it more right when they note that these firms “have much deeper pockets than open-source developers to handle whatever the regulators come up with.”

And that is exactly how we get left with a cozy little government-sanctioned computing cartel.

Other Costs & Consequences

When introducing the new Blueprint on May 25th, Microsoft President Brad Smith said America’s approach to AI policy should be summarized by the phrase: “Don’t ask what computers can do, ask what they should do” (which is the title of a chapter in a recent book he co-authored). The problem is that, under Microsoft’s “regulatory architecture,” what computers should do will become a highly politicized decision, with endless technocratic wrangling and a permission-slip-based, paperwork-intensive process standing in the way of AI innovators and their ability to create life-enriching and lifesaving products.

Meanwhile, some of us actually want to know what computers and artificial intelligence CAN do to begin with. What can they do to help us preemptively detect and address strokes, heart attacks, and cancers? What can they do to improve the environment? What can they do to help educate our children? What can they do to make our roads and skies safer? And so much more.

Of course, these are things that I am certain that Brad Smith and Microsoft would agree that computers and AI should do as well. But what he’s getting at with his “can vs. should” line is that there are some potential risks associated with high-powered AI systems that we have to address through preemptive and highly precautionary constraints on AI and computing itself. But the regulatory regime they are floating could severely undermine the benefits associated with high-powered computational systems.

Aligning AI with important human values and sensible safety practices is crucial. But too many self-described AI ethicists seem to imagine that this can only be accomplished in a top-down, highly centralized, rigid fashion. Instead, AI governance needs what Nobel prize-winner Elinor Ostrom referred to as a “polycentric” style of governance. This refers to a more flexible, iterative, bottom-up, multi-layer, and decentralized governance style that envisions many different actors and mechanisms playing a role in ensuring a well-functioning system, often outside of traditional political or regulatory systems.

But wouldn’t this new hypothetical Computational Control Commission or a global AI safety regulator be working “in the public interest” to protect our safety? Well, of course, that’s the theory many well-intentioned folks want to believe. But a single point of control is also a single point of failure. A single safety regulatory agency is also a singular safety vulnerability — not just to attacks, but to generalized mission failure. As I argued in my longer report on flexible AI governance strategies:

The process of embedding ethics in AI design is not set in stone. Aligning ethics is an ongoing, iterative process influenced by many forces and factors. We should expect much trial and error when devising ethical guidelines for AI and hammering out better ways of keeping these systems aligned with human values.

A new AI agency and licensing regime for compute could be bad in other ways. Again, it goes without saying that China is not going along with any of this. I doubt Russia will either. The Microsoft Blueprint alludes to the problem of getting everybody under the same regulatory regime globally. “We need to proceed with an understanding that it is currently trivial to move model weights across borders, allowing those with access to the ‘crown jewels’ of highly capable AI models to move those models from country to country with ease,” their white paper says.

This is the problem of global innovation arbitrage that I have discussed at length elsewhere. Some might argue that we can just ignore the potential for cross-border migration of firms, capital, and code because what really matters is access to the underlying supercomputing centers themselves. Well, that’s a problem, too, because that capacity is increasingly widely distributed across the globe. As of June 2022, 173 of the world’s 500 most powerful supercomputers were located in China, according to Statista. But the more important fact to note is that the rest of the world is advancing its own supercomputing capabilities. Some analysts have wondered whether we’re hitting a wall in terms of aggregate compute, as costs and supply chain problems create bottlenecks or other limitations on growing AI capabilities. But the world isn’t sitting still. Firms and governments are making massive investments across the globe.

Meanwhile, in this essay I have largely ignored the potential for mass evasion efforts to develop in response to regulation. But we could imagine a future of underground black markets developing for banned computing or chip hardware. (Hold on to your old GPUs, folks!) Tyler Cowen alludes to the potential for underground markets in this essay. In this sense, it’s also worth monitoring how China is getting around new US export controls on AI chips (and how China is selling chips to Russia despite global sanctions) because this sort of activity foreshadows the enforcement challenges that lie ahead for global AI control efforts.

Congress Unlikely to Act

Of course, in closing, we need to remember that, for any of this proposed AI regulation to take hold in the United States, it would require congressional action to enact what would be a sweeping new law that either (1) authorizes the idea of global government control over America’s supercomputing capacity, or (2) creates and funds a new domestic AI regulatory agency and corresponding regulations.

Needless to say, the chances of this happening anytime soon are slim to none, and Slim will definitely be leaving the building as the next presidential election cycle approaches. But even when Congress gets back to work post-election, it is unlikely that America’s highly dysfunctional and insanely partisan legislative branch will be able to actually get anything done along the lines that Microsoft suggests. Remember: This is a Congress that hasn’t even been able to pass a baseline privacy bill or a federal driverless car law, even though those two efforts enjoy widespread bipartisan support.

Perhaps that situation will change at some point, but I sincerely doubt it for all the reasons I laid out in my AEI report on, “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium.” As I noted there, it’s not just that Congress is so damn dysfunctional, partisan, and slow. It’s that technology is also moving so much faster at the same time. This so-called “pacing problem” — the relentlessly growing gap between the constantly expanding frontier of technological possibilities and the ability of governments to keep up with the pace of change — has become a chronic issue for congressional lawmaking abilities.

The combination of these factors means that, by necessity, technological governance is having to evolve to tap solutions that are more informal, iterative, experimental, and collaborative. Therefore, to the extent that any corporate-inspired AI policy frameworks gain traction, the proposals outlined recently by Google and IBM are the more likely outcome. Their policy frameworks focus on risk-based, context-specific, and more targeted interventions (IBM calls it “precision regulation”), as compared with Microsoft and OpenAI’s call for comprehensive computational control. The latter is just too heavy a lift for Congress — and too radical for most others in industry.

Again, we’ll need to be more open-minded and sensible in our thinking about wise AI governance. Grandiose and completely unworkable regulatory schemes will divert our attention from taking more practical and sensible steps in the short term to ensure that algorithmic systems are both safe and effective. AI “alignment” must not become a war on computing and computation more generally. We can do better.
