Will AI Policy Become a War on Open Source Following Meta’s Launch of LLaMA 2?

Adam Thierer
24 min read · Jul 18, 2023


Meta announced today that it is opening up its 70-billion parameter “Large Language Model Meta AI” (LLaMA 2) for open source commercial use and research. This represents a major development in the fiercely competitive artificial intelligence (AI) marketplace, but the move will simultaneously give rise to a variety of policy-related concerns as the debate over the deployment of powerful algorithmic systems intensifies.

Meta’s LLaMA launch will also become an important moment for open source AI governance, not only because LLaMA is a huge foundation model, but also because Meta is a major tech player with plenty of others in industry and government looking to take them down on other grounds. A heavy-handed approach to regulating LLaMA could have broader ramifications for other open source players and platforms. The danger exists that AI policy could be on the way to becoming a broader war on computation and open source systems in particular. That would be a disaster for innovation and competition — and safety — in this space.

The Continuing Importance of Open Source

First, a few basics. When we talk about open source AI or open machine learning (ML), we are generally talking about digital systems that anyone can access and modify freely. Luis Villa has provided the best brief overview of the field that I’ve seen so far. Villa correctly observes that, “there is no formal definition of ‘open’ in the machine learning space yet,” but he uses the phrase ‘open ML’ to refer to “machine learning development processes that allow for collaborative participation and iterative improvement.” That’s as good a definition as any but, much as definitional headaches haunt AI and ML more generally, there are continuing disputes about the contours of “open AI systems.”

Regardless, by all accounts, open ML and open source AI more generally are absolutely exploding. Many of you have probably read the leaked internal Google memo from May entitled, “We Have No Moat, And Neither Does OpenAI.” For Google and other leading AI developers, the memo represents a sobering assessment of just how precarious their market position is in this field because of the growing competitive threat posed by open-source models. “Open-source models are faster, more customizable, more private, and pound-for-pound more capable,” the leaked document said. “They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.” Basically, a customized open source model can have fewer parameters (the learned weights that make up the model) yet be trained or fine-tuned on lots of “tokens” (the word and subword chunks that text is broken into) and still produce very impressive outputs. That is what a lot of current open source AI providers are doing, although every system is different and Meta’s LLaMA is absolutely huge.
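To make the parameters-versus-tokens distinction concrete, here is a minimal sketch of my own (not from the memo) that uses the Hugging Face transformers library to count a small open model’s parameters and to show how a sentence breaks into tokens; the “gpt2” checkpoint is just a convenient, openly licensed stand-in for whatever open model you prefer.

```python
# Minimal sketch: counting the parameters (learned weights) of a small,
# openly available model. "gpt2" is used only as a convenient stand-in
# for any openly licensed checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.0f}M")  # roughly 124M for gpt2

# "Tokens" are the chunks of text a model is trained on and consumes at
# inference time; a tokenizer shows how a sentence breaks into them.
print(tokenizer.tokenize("Open-source models are improving quickly."))
```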

The leaked memo went on to discuss the “profound implications” of all this open source innovation for Google and other major vendors of large models. “People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality,” it said. “We should consider where our value add really is.” The memo suggests that value is not in the bigger-parameter models that Google and other companies are creating. “Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly,” it concludes.

This is a rather shocking admission, but one that is understandable if you’ve been following the astonishing pace and nature of all the open-source innovation unfolding today. Meta had already privately shared a previous version of LLaMA with researchers earlier this year but, unsurprisingly, it was quickly leaked and widely distributed to the general public. This led to rapid-fire iteration with the model as developers fine-tuned LLaMA in creative ways to produce many bespoke offerings. As they did so, the global community of digital tinkerers kept finding ways to make forked versions of LLaMA more efficient at lower parameters and lower cost.

This is true of the broader market for other open source models. Just how fast is the cost of training open source AI dropping? Well, consider MPT-7B, a model that was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of around $200,000. In terms of capabilities, it competes with many of the other forked LLaMA variants already out there.

This is the power of iterative open-source learning in action once again. The beneficial feedback loop that comes from constant experimentation and iteration leads to increasingly useful, powerful, and affordable algorithmic offerings. This is the same process that has been playing out more generally with open source technologies over the past three decades, which explains why they have become the backbone of countless internet systems and applications. It is no exaggeration to say that open-source systems and efforts represent one of the greatest examples of decentralized innovation the world has ever seen.

And when it comes to open source AI, the future looks even more exciting. Hugging Face, a leading open source platform for hosting ML models, is on its way to a $4 billion valuation with its latest VC funding round. Replit, an open cloud software development platform, recently noted on its corporate blog that, “In Q2 ’23, we surpassed 5k projects using open-source models. The cumulative number grew 141% QoQ. Over 70% of the projects leverage Hugging Face, but Replicate usage grew almost 6x QoQ. Replicate has templates to run ML models on their verified Replit profile. The Hugging Face verified Gradio template has +600 forks.”

[Chart omitted. Source: Replit]

And have you heard of Falcon 40B, a 40-billion parameter model sponsored by the United Arab Emirates’ Technology Innovation Institute? This large-scale open-source AI model is now royalty-free for commercial and research use, and it competes favorably with leading proprietary models. Meanwhile, it seems like everyone missed the remarkable things happening with Together.ai’s “GPT-JT” fully decentralized open source model. It launched late last year as a relatively small 6-billion-parameter model trained on cobbled-together computing infrastructure, and yet it is somehow hanging with the big dogs in terms of real-world results.

Some of the other key names to know in the open source space include PyTorch, TensorFlow, Nomic AI, Apache MXNet, EleutherAI, Okra, and Stability AI, maker of StableLM and Stable Diffusion. There are so many other open source AI tools, models, developers, and enablers that it can make you dizzy trying to sort through them all. I have tried repeatedly to map out today’s open source AI ecosystem only to be completely overwhelmed by the task because of the sheer volume of activity and players out there currently. If you care to see a running list of open-sourced fine-tuned LLMs, Sung Kim has got you covered. He notes, however, that the list is incomplete because so many new models are being introduced daily, most of them created for less than $100. Betsy Masiello and Derek Slater have a really nice summary of other recent developments in the open-source AI space that I highly recommend.

Bottom line: Open-source innovation is now set to revolutionize the world of artificial intelligence just as it did the digital systems and software of the internet’s early history — if it is allowed to, that is. Many industry incumbents and policymakers will push back against these innovative open source developments on various grounds. Meta’s announcement today will likely super-charge these issues and the debate around open source AI more generally.

What about Those Safety Concerns?

Open source AI could get regulated in many ways. There are a variety of ethical and safety-related concerns driving AI policy today, so regulation could come from different angles and quarters.

First, it should be noted that Meta has committed to responsible use guidelines via an acceptable use policy for LLaMA 2. [Meta also released a longer technical paper about the model and its use.] If users agree to specific use terms, they will have access to build upon the full model. Meta also says less powerful versions of LLaMA (weighing in at 7 and 13 billion params) are being made available. There are custom LLaMA licensing stipulations, with bespoke arrangements for larger companies versus smaller ones. Licenses could be revoked by Meta for violations, although it will not be possible to completely lock down all derivative downstream versions once the model is in the wild. [All the LLaMA licensing details and the accompanying Acceptable Use Policy can be found here.]
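As a practical aside, here is a minimal sketch of what that gated, license-contingent access looks like in code. It assumes you have already accepted Meta’s license terms and been approved for the hosted weights, and the repository id shown is an assumption based on common Hugging Face hosting conventions rather than anything official.

```python
# Minimal sketch: loading a gated LLaMA 2 checkpoint after accepting
# Meta's license and being granted access. The repo id below is an
# assumption based on common Hugging Face hosting, not official docs.
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login()  # prompts for a Hugging Face access token tied to your approved account

model_id = "meta-llama/Llama-2-7b-hf"  # assumed repo id for the 7B weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```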

Meta has also committed to transparency and red-teaming steps to help ensure that their model is aligned with various ethical guidelines. The hope is that making LLaMA widely available will improve the model (and its many derivatives) faster, both in terms of capabilities and in terms of the ability to deal with alignment issues. But while real-time stress testing of models with constant refinement through both public and private red-teaming can certainly advance that goal, once again, there is no way Meta will be able to completely control all downstream uses of the model once it is out there. That’s just the nature of open source more generally. It’s a constant give-and-take process of iteration and fine-tuning. The process has upsides and downsides because, by their very nature, less centralized systems rely on trial-and-error and the wisdom of crowds to work collaboratively toward making systems better along multiple dimensions.

Open source opponents or skeptics will claim that is too risky a proposition and call for top-down controls that could make open systems all but impossible in practice. When LLaMA was leaked earlier this year, The Economist quoted Anthropic’s Jack Clark describing open-source AI as a “very troubling concept.” The magazine noted that incumbents “have much deeper pockets than open-source developers to handle whatever the regulators come up with,” and “They also have more at stake in preserving the stability of the information-technology system that has turned them into titans.”

Open source AI will also raise political concerns on various other grounds both here in the U.S. and abroad. The European Union is already well on the way to regulating AI and open source through the comprehensive AI Act it has been formulating. Writing for the Center for European Policy Analysis blog, Pablo Chavez notes of the EU approach toward open source that: “The proposed parliamentary language would impose significant compliance requirements on open-source developers of foundation models, including the obligation to achieve performance, predictability, interpretability, corrigibility, security, and cybersecurity throughout [their] lifecycle. Realistically, only an organized and well-funded — and perhaps European — open-source project could meet these obligations.”

This has led some open source defenders to propose a “two-tiered approach” to the regulation of foundation models, with open source developers enjoying more policy flexibility or assistance than proprietary models. Masiello and Slater outline this idea as follows:

“If supporting entry from new and smaller entities is a policy goal, then regulation of AI must also be proportionate and tailored so that it doesn’t create an undue barrier to entry. While that can mean exempting certain regulations below a threshold, it can also mean specifically supporting all entities in compliance. Solutions that ensure smaller-scale AI providers and open source models can incorporate trustworthy and safe design principles into their development, and also comply with regulatory requirements such that they are subject to adequate oversight, will help new entrants overcome initial regulatory hurdles as they get started.”

There are plenty of past precedents for this from other policy contexts. We accord different tax treatment to non-profits and charitable organizations, for example. Meanwhile, for private firms of different sizes, many tax and regulatory policy determinations are made using employee thresholds. Firms with under 50 employees, for example, are often exempt from taxes or regulations that hit larger firms, or the smaller firms at least enjoy somewhat less burdensome rules.

But a two-tier approach to open-source AI regulation seems unlikely to fly in the E.U., and it’s doubtful it will in the U.S. either. Many proprietary model developers will argue that open source is more dangerous by nature than closed models. Whatever one thinks about that issue, one thing is clear: The open source community has enough potential foes that it will be hard-pressed to sell policymakers on the idea that it should receive more favorable regulatory treatment when everyone else is working against it, or at least not going to bat to help it. As I told Sharon Goldman of VentureBeat in a recent story she wrote about the politics of open source:

“it’s easy for both government officials and proprietary competitors to throw open source under the bus, because policymakers look at it nervously as something that’s harder to control — and proprietary software providers look at it as a form of competition that they would rather just see go away in some cases. So that makes it an easy target.”

Frontiers of Computational Control

It could be that advocates of new regulations for high-powered “frontier AI” systems just side-step the debate over open source by trying to regulate all AI systems via either compute-based or capabilities-based thresholds, knowing that both will hit open source hard anyway. I discussed these distinctions in my earlier essay about, “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control.” Microsoft’s recently proposed framework for licensing frontier AI systems would begin with a compute-based threshold, basically declaring that some arbitrary number of parameters and tokens constitutes a “highly capable AI model.” But Microsoft says that, while regulation should start there, policymakers should “commit to a program of work to evolve it into a capability-based threshold in short order.”

A cynic might claim there’s a logical reason Microsoft wants a capability-based licensing threshold: They realize that, as was noted above, today’s fine-tuned open-source models are already able to compete with larger parameter models. Thus, using raw computation as a trigger for regulation would mean those smaller rivals would be able to catch up to players with larger compute more easily because they would probably fall under whatever threshold was set by law. Moreover, because open-source developers have gotten so good at fine-tuning models, they would also likely be able to calibrate them to fall under that regulatory threshold and avoid the need to get a compute-based government license.
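To see how mechanical a compute-based trigger would be (and how easily developers could calibrate around it), consider the purely hypothetical sketch below; every threshold number and name in it is invented for illustration and drawn from no actual proposal.

```python
# Purely illustrative sketch of a compute-based licensing trigger. All
# numbers and names here are hypothetical, not drawn from any actual proposal.
PARAM_THRESHOLD = 50e9   # hypothetical parameter cutoff
TOKEN_THRESHOLD = 1e12   # hypothetical training-token cutoff

def requires_license(num_params: float, training_tokens: float) -> bool:
    """Return True if a model would cross the hypothetical compute trigger."""
    return num_params >= PARAM_THRESHOLD or training_tokens >= TOKEN_THRESHOLD

# A fine-tuned 13B-parameter open model trained on just under 1T tokens
# stays below both cutoffs, so it would escape this kind of trigger.
print(requires_license(13e9, 1.0e12 - 1))  # False
print(requires_license(70e9, 2.0e12))      # True (roughly LLaMA 2 scale)
```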

That is one reason why Microsoft and some academics who favor frontier AI licensing are proposing a regulatory system for high-power AI models that is instead defined by a far more subjective basket of “capability-based” metrics. As I noted in my earlier essay about the Microsoft plan, a capability-based regulatory trigger is a very open-ended one because it means governments will have to define which specific model capabilities are “dangerous” and then figure out how that translates into formal regulatory prohibitions.

A recent paper on “Model Evaluation for Extreme Risks,” which was written by over 20 AI governance experts, discusses some of the potential capabilities that policymakers would need to define for purposes of crafting a capabilities-based set of regulatory triggers. Those potential regulatory triggers include things like persuasion, manipulation, disinformation, and political influence. Well, good luck defining all these things politically! If you’ve spent any time monitoring the recent debates about how to define “disinformation” in the U.S., then you know what a contentious political fiasco we are in for once congressional committees start meeting to hash out how to define all the “dangerous capabilities” of algorithmic systems.

I just cannot see how that process yields workable regulations. Importantly, some scholars such as Arvind Narayanan and Sayash Kapoor as well as James R. Ostrowski have pointed to evidence that the AI disinformation threat may be considerably exaggerated. Finally, at some point, efforts to define things like “persuasion,” “manipulation,” “disinformation,” and “political influence” for purposes of regulating AI models could give rise to very serious free speech problems, which will end up being litigated for many years to come in the U.S. on First Amendment grounds. That makes capabilities-based frontier regulation even less workable in the short-term, no matter how well-intentioned it may be.

The Broader War on Computation That Looms

But, for the sake of argument, let’s walk through the ramifications of frontier AI licensing or regulation and specifically ask what it might mean for LLaMA and open source AI more generally. While some AI regulation is likely coming, the danger exists that extreme forms of regulation could derail the enormous benefits associated with AI/ML technologies, which could transform public health and welfare for the better in numerous ways. AI holds the promise of helping with earlier disease detection and treatment, safer transportation options, clean energy innovations, and more.

These positive developments are far less likely to come about if AI policy becomes an all-out war on computation and algorithmic innovation, however. A “war on computation” might sound hyperbolic at first, but what else are we to call it when major media outlets are running essays by people who call for global surveillance systems, tracking all computer chips, global pauses on development, massive new global regulatory bureaucracies, or even stopping all AI development by bombing data centers?

In my earlier essay in which I worried about how open source AI could become the first major casualty of this new war on compute, I outlined some of the other outlandish regulatory ideas floating out there today. These include the possibility of quasi-nationalization of the largest computing systems as well as the idea of forcing all high-powered frontier systems to be contained on a government-controlled “island,” where only certain authorized officials could even enter the “air-gapped” facilities.

Interestingly, the author of that “AI Island” proposal was recently appointed to be the first Chair of the UK’s “Foundation Model Taskforce,” which has an initial £100m of funding to study safe development of AI in the UK. No word yet on whether he will be looking to make the UK into an actual AI island! But we’ll know more soon because a new global AI super-bureaucracy could be formulated through a new joint US-UK effort. “Britain’s priorities are to agree [to] a joint approach to regulation — with a global regulator ideally based in the UK,” a recent news story notes. This follows talks between the heads of the two nations that included discussion of AI regulation and an upcoming AI safety summit this fall. Meanwhile, OpenAI and DeepMind have already been pressured to “open up models to UK government,” which foreshadows how “regulation-by-intimidation” could become another way large AI models get indirectly controlled by governments — and how open source models get discouraged in the process. (Incidentally, is the UAE government going to go along with frontier licensing restrictions imposed on Falcon by far-off Western leaders? Will China go along with any of these rules? It’s something to ponder when academics float grandiose global governance schemes for AI systems that have almost zero chance of covering nations such as those and others. I wrote a long paper about why global AI arms control is likely going to fail miserably if you care to learn more about the “realpolitik” of international AI governance.)

But let’s get back to what all this talk of licensing “highly-capable” models means for LLaMA and open-source AI more generally. In an important recent essay on “AI Safety and the Age of Dislightenment,” Jeremy Howard argues that “[t]he effects of these regulations may turn out to be impossible to undo, and therefore we should be extremely careful before we legislate them.” Howard is the founding researcher at FastAI, which produces the most widely-used deep learning course in the world. He argues that:

“If we regulate now in a way that increases centralisation of power in the name of ‘safety’, we risk rolling back the gains made from the Age of Enlightenment, and instead entering a new age: the Age of Dislightenment. Instead, we could maintain the Enlightenment ideas of openness and trust, such as by supporting open-source model development.”

Howard is specifically responding to an important new paper on “Frontier AI Regulation: Managing Risks to Public Safety,” which was written by Markus Anderljung and 23 other leading academic and corporate-affiliated advocates of more aggressive frontier AI regulation. While saying that they believe that “open-sourcing AI models can be an important public good,” those authors continue on to argue that, “it may be prudent to avoid potentially dangerous capabilities of frontier AI models being open-sourced until safe deployment is demonstrably feasible.” In other words, highly capable open-source models likely would require a license to operate or at least be hit with a wide variety of downstream limitations that would essentially make open-sourcing them impossible to begin with. Presumably, at least LLaMA 2 — with its 70B params and whopping 2 trillion training tokens — would be covered by their proposal. There are many other academics floating similar regulatory ideas today.

Howard pushes back aggressively against this thinking, saying that the ideas being advanced by these academics and companies will, “ultimately lead to the frontier of AI becoming inaccessible to everyone who doesn’t work at a small number of companies, whose dominance will be enshrined by virtue of these ideas. This is an immensely dangerous and brittle path for society to go down,” he says.

In the technical paper accompanying the launch of LLaMA 2 today, Meta adopts the same stance in its “responsible release” strategy section. It’s worth quoting a passage from it at length because this is about as full-throated a defense of open source as I’ve ever seen, and it squarely addresses the argument by regulatory advocates that open source is somehow more dangerous than other AI models:

“While many companies have opted to build AI behind closed doors, we are releasing Llama2 openly to encourage responsible AI innovation. Based on our experience, an open approach draws upon the collective wisdom, diversity, and ingenuity of the AI-practitioner community to realize the benefits of this technology. Collaboration will make these models better and safer. The entire AI community — academic researchers, civil society, policymakers, and industry — must work together to rigorously analyze and expose the risks of current AI systems and to build solutions that address potentially problematic misuse. This approach not only fosters real collaboration with diverse stakeholders — those beyond the walls of big tech companies — but also serves as the cornerstone for democratizing access to foundational models.

[…] open releases promote transparency and allow more people to access AI tools, democratizing the technology and decentralizing AI expertise. We believe that the decentralization of AI expertise does more than simply distribute knowledge — it stimulates innovation and accelerates progress in the industry. Lastly, openly releasing these models consolidates costs and eliminates barriers to entry, allowing small businesses to leverage innovations in LLMs to explore and build text-generation use cases. Ultimately, we believe this will create a more level playing field for organizations of all sizes across the globe to benefit from the economic growth promised by the advancement of AI.” (p. 35)

While some might find this paragraph hard to believe — coming as it does from a company the size of Meta — this seems to be a genuine commitment to open source AI, as well as a clear challenge to other large AI developers like Microsoft, IBM, Google, and others. Still, there will be a real tension in the open source world because some in that community will not trust Meta to be the torch-bearer of open source evangelism. And yet, here we are, with Mark Zuckerberg and his company making one of the biggest commitments to open computing systems in history.

But, again, because Zuckerberg and Meta (which used to be Facebook, of course) have a global enemies list that is a few miles long, the long knives are going to come out for LLaMA. It remains unclear how, precisely, they’ll all come after LLaMA, but one danger is that lawmakers threaten Meta with comprehensive liability for all potential downstream derivative uses of the model.

In a recent report for the Ada Lovelace Institute on “Allocating accountability in AI supply chains,” Ian Brown concludes with some excellent insights about the dangers of this approach. “While it would be possible for legislation to go further in applying obligations to online distribution of open-source AI components,” he argues, “its likely efficacy would be severely open to question, given the following observations:

· Without comprehensive international agreement (which is difficult to imagine in the current geopolitical climate), unrestricted development and sharing would be likely to continue in other jurisdictions (including the USA, whose constitution includes strict restrictions on government limits on publication).

· The underlying techniques and data used for training models are likely to continue circulating freely […]

· Such restrictions would be likely to significantly impede the pace of research and development relating to AI tools and techniques, including those to identify and remedy potential harms, particularly outside of the large companies which already and increasingly dominate AI research.”

Brown then concludes by rightly reminding us that, at some level, many of the regulatory proposals being floated today for open source AI raise the specter of a return to the U.S. experience with cryptography control efforts back in the 1990s. “While not a precise analogy (because large AI models are much more complex and resource-intensive to create than encryption software), attempts by the USA and its allies to control the global spread of encryption technology throughout the 1980s and 1990s ultimately failed for similar reasons,” he says.

Indeed, there are some very interesting parallels here. We could be on the way to policymakers treating high-powered AI systems (especially open source systems) as “munitions,” just as the U.S. did with encryption before abandoning that misguided effort at the turn of the century. Meanwhile, the proposals being floated to have government lock down and tightly control access to frontier AI labs and data centers (plus chip-level surveillance / tracking schemes) reminds me a bit of the U.S. government’s push for the ‘Clipper Chip’ in the mid-1990s to control access to cryptographic systems. In both cases, extreme regulation of open systems comes down to an all-out government war on computation at some level.

The Path Forward

Where do we go from here, then? Is there any way to balance concerns about safety and innovation without completely throwing open source systems under the bus? Andrew Maynard, a scholar at Arizona State University’s School for the Future of Innovation in Society, reflects on this debate in a new essay on, “Regulating Frontier AI: To Open Source or Not?” He says:

Frontier AI models promise to be profoundly transformative — that’s not in dispute here — and navigating their safe and beneficial development and use is going to be fiendishly complex. As a result, there will be no single silver bullet to their responsible development or their effective regulation. Rather, we’re going to have to hash this out together to find solutions that work.

Maynard notes that the two sides in this debate can at least generally agree that a combination of humility, broad consultation, and ongoing multistakeholder conversations, “are all important steps toward socially responsible innovation.” That’s the way forward: muddling through with sensible, real-time, collaborative, multistakeholder solutions, married up with targeted laws to fill gaps or address sensitive matters.

For that to happen, however, we’ll have to reject the all-or-nothingism that we sometimes see at the extremes of the AI governance spectrum. As I identified in this little slide show I recently posted, much of the debate about AI governance is dominated by the most extreme (or “absolutist”) people and perspectives. Ironically, the extremes meet at a certain point: both speak in highly deterministic ways about the inevitability of powerful AGI, and both advocate overly sweeping attitudes and policy perspectives.

By contrast, the many “AI realists” out there in the middle resist those extremes and understand that laws, norms, standards, and other forces will shape the future of AI. Of course, there is a lot of disagreement about how, but there is at least more of a willingness to discuss sensible governance steps to help balance safety and innovation. As Maynard suggests in his essay, a lot of it comes down to figuring out how more agile “soft law” mechanisms can help address many problems and then determining where hard law fills other gaps. (I devoted 40 pages and 20,000 words to trying to figure out that balance in my report on, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”)

Of course, all this is a lot harder for open systems. However, what Meta is launching with LLaMA 2 is more regulable than other open source AI systems simply because Meta is Meta. Policymakers know how to find them and bring the hammer down on them if they want to crush open source AI by going after the biggest model on the market. That won’t be the end of all open source AI, but it would help solidify a cozy little computing cartel of just a couple of federally-licensed proprietary providers running the biggest foundation models on the market. Many of those forked LLaMA derivatives would dry up pretty quickly and leave many developers scrambling to find their compute somewhere else, perhaps offshore or by cobbling together systems in other ways.

Even short of full-blown licensing, political jawboning and “regulation by intimidation” will also come into play. Even if formal action is not taken directly against open source systems as such, Meta will come under enormous political pressure from policymakers both here and abroad to control downstream uses of LLaMA, which will undermine the very point of it all. Zuckerberg can expect to be hauled in front of Congress any time some downstream forked version of LLaMA gets used in a controversial fashion, even though Meta can’t possibly control all those potential uses.

Again, the threat of expanded liability also looms large here. In early June, Sen. Josh Hawley (R-Mo.) set forth objectives for AI legislation that begin with the idea of expanded lawsuits and end with a call for a new federal regulatory licensing regime for artificial intelligence. It might be tempting for some to dismiss a firebrand like Hawley, but remember that, at the same time he was releasing these principles, he was also joining forces with Sen. Richard Blumenthal (D-Conn.) to send a letter to Meta about the earlier version of LLaMA. And that was simply after the model was unintentionally leaked. Now that LLaMA 2 is officially out, you can count on Hawley, Blumenthal, and countless other lawmakers and regulators (especially at the Federal Trade Commission) to be firing off regular missives to Meta telling them to pull back access to their model or “self-regulate” in ways that make open source AI all but impossible. It could be that kids’ safety concerns and copyright concerns alone take down open-source AI. If lawmakers end up creating some sort of massive notice-and-takedown regime for foundation models or other AI services/applications, that’ll likely be the end of open source AI altogether.

Focus on Uses by Bad Actors

To head off some of these problems, the open source community will need to work harder to get serious about best practices and acceptable uses, and Meta has done an excellent job facilitating that effort with the guidelines it released today. But the open source community will also need to speak to the various concerns that are driving regulatory proposals today. As they do, they should take their cue from Howard, who rightly stresses that the key issue here is, “the distinction between regulating usage (that is, actually putting a model into use by making it part of a system — especially a high risk system like medicine), vs development (that is, the process of training the model).”

I discussed this same principle in my recent essay on “The Most Important Principle for AI Regulation,” in which I noted that AI governance should be risk-based and should focus on system outputs/outcomes instead of system inputs/design. Or, as ITIF scholars state this principle, “Regulate performance, not process.” I continued on in that essay to explain how:

“A process-oriented regulatory regime in which all the underlying mechanisms are subjected to endless inspection and micromanagement will create endless innovation veto points, politicization, delays and other uncertainties because it will mostly just be a guessing game based on hypothetical worst-case thinking. We need the opposite approach… which is focused on algorithmic outcomes. What really matters is that AI and robotic technologies perform as they are supposed to and do so in a generally safe manner. A governance regime focused on outcomes and performance treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm and tailored, context-specific solutions to it. This principle is the key to balancing entrepreneurship and safety for AI.”

This is the only practical way to make progress on AI governance. As Howard correctly notes, “because we are discussing general-purpose models, we cannot ensure safety of the model itself — it’s only possible to try to secure the use of a model.” If bad actors use any general-purpose technology to harm the public, we should go after them and not the underlying technology or technological process itself. Importantly, as I documented in my essay on this topic, this is how we are already regulating some algorithmic technologies today through many existing agencies and bodies of law.

This is the way forward to balance safety and innovation — and the way to save open source AI from the near-certain death sentence that would follow from any attempt to impose preemptive, heavy-handed, top-down government mandates on algorithmic processes and open systems. Meta has just become the canary in the coalmine for all this, and how policymakers respond to the launch of LLaMA will tell us whether an all-out war on computation really is on the way.


Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance. https://www.rstreet.org/people/adam-thierer