Is Telecom Licensing a Good Model for Artificial Intelligence?

Adam Thierer
Jul 8, 2023

As I discussed in a recent essay here and another one with Neil Chilson, we are hearing more calls from policymakers and academics to apply some sort of licensing scheme to artificial intelligence (AI) systems. Details tend to be murky, with most pundits casually proposing AI licensing without really explaining how it would work in practice. However, some people and organizations are starting to get a bit more specific, or they are at least suggesting that we consider borrowing some licensing systems from other sectors as the model for regulating AI.

For example, in late May, Microsoft released a white paper entitled, “Governing AI: A Blueprint for the Future,” which called for a licensing system “much like the regulatory model for telecommunications network operators,” but then also with a dose of financial services regulation and export controls included. The Microsoft blueprint represents a sweeping call for computational control, and you can consult my earlier essay about it or this podcast discussion to hear some of my concerns about it.

In this essay, I just want to focus on the narrow question of whether or not it is a good idea to make telecommunications and media industry licensing the model for algorithmic systems and computing more generally. I have spent over 30 years covering ICT policy, and much of that time has been focused on the unintended consequences of licensing regimes. Our experience with licensing in the telecom and media sectors should be a cautionary tale, not a model for high-tech governance of fast-moving emerging technologies like artificial intelligence and robotics.

We should never forget that, at root, the purpose of licensing in any context is to limit and exclude. A license limits the scope of permissible operations and innovations, and it excludes certain current or potential future players or technologies. In essence, licensing generally treats innovations and new potential competitors as guilty until proven innocent. Innovation is a flatly criminal act under the strictest licensing regimes.

Economists and political scientists have studied licensing practices extensively for a century and the academic literature they have produced is unambiguous. They have documented how, as a result of these limits and exclusions, licenses tend to raise costs, limit competition and new entry, discourage innovation, and create various other political problems, including special interest influence or capture of regulatory processes. In certain cases, licenses can also limit freedom of speech by artificially curtailing who has the right to say certain things or communicate information to the public. All these things undermine the public good and even betray important rights, including the freedom to speak, the freedom to learn, and the freedom to innovate.

Therefore, limiting and excluding innovations or innovators should always be a last resort, not the first. It makes no difference that licensing regimes are often established with the very best of intentions in mind. All the good intentions in the world are meaningless in the face of real-world results. And the real-world results of telecommunications and media licensing have been a disaster.

By extension, because the history of analog era licensing regulations demonstrates that those systems had such deleterious impacts in practice, they should absolutely not be a model for other digital age sectors, which are even faster-moving and more complicated than the far simpler technologies we were trying to control in the past. Moreover, there are usually far better ways to address risks and harms than through licensing.

I have done a lot of writing on the history of telecom and media licensing going all the way back to my 1994 history of telecommunications industry monopolization, which was entitled, “Unnatural Monopoly: Critical Moments in the Development of the Bell System Monopoly.” I also wrote a book discussing the many downsides of media industry licensing. Recently, I published an R Street Institute report on AI governance that included “A Brief History of America’s Disastrous Analog-Era Policy Regime” as well as a short section on “The Two Great Lessons of Analog-Era Tech Policy.” In those sections, I attempted to produce the most concise explanation of this history of regulatory failure. I have reproduced those two sections here to remind policymakers and pundits that it would be unwise for the United States to use telecommunication and media industry licensing schemes as a model for AI policy.

A Brief History of America’s Disastrous Analog-Era Policy Regime

For most of the past century, a convoluted thicket of federal, state and local regulations controlled activity in the ICT sector. The laws and regulations governing the technologies of the analog era (i.e., newspapers, magazines, wireline telephony, broadcasting and cable) were intended to support “the public interest,” but all too often failed to do so.[1] These governing mechanisms included operating licenses, line-of-business restrictions, price controls, rate-of-return regulations, technical device/equipment regulations and various quality-of-service or access requirements.[2] In the mid-1990s, however, U.S. policymakers adopted a series of bipartisan reforms and policy statements that largely rejected the nation’s analog-era regulatory regime and that signaled a new direction for the governance of ICT.[3]

It is important to understand why experts came to view the policies of the era as a failure so that America does not repeat the mistakes of the past when considering the governance of new technologies like AI and robotics. In short, analog-era regulatory policies created a rigid innovation cage that severely constrained entrepreneurial activities and competition. Although America’s old communications and media regulatory system stopped short of the full-blown nationalization pursued by other countries, the alternative that policymakers created was not much better for innovators or consumers.[4] The result of this complex web of regulations was less ICT innovation, fewer choices and higher prices for lackluster service.[5] In that regulatory environment, “innovation” was often defined as a longer cord or different color on the telephone handset provided by the phone monopoly. No one could shop around for better options because none existed. In most instances, it was illegal to compete, and new services were treated as guilty until proven innocent under a precautionary-principle-based regime that valued stability over market dynamism.

The way in which the public interest was defined shifted with the political winds to suit the whims of those in power — both in government and in industry — at any given time. The “public interest” was also regularly invoked as a rationale for censorship and a way to evade the First Amendment.[6] Policymakers created a chaotic legal standard for speech, which held that something written in a newspaper or book enjoyed robust First Amendment protection while the uttering of the same words on a broadcast television or radio station would result in the revocation of an operator’s license.[7] Federal Communications Commission (FCC) regulators also used the agency’s open-ended public interest authority to influence media companies through what came to be known as “regulation by raised eyebrow,” or “regulatory threats that cajole industry members into slight modifications” of their programming.[8]

It seemed abundantly clear that the public interest standard had little to do with what the public actually wanted — more competition, more choices and more diverse content.[9] Instead, public interest regulation protected large telecommunications and media companies from new rivals and the need to innovate.[10] Lawmakers and regulators repeatedly erected barriers to new types of competition and technological change.[11]

In the 1950s, for example, misguided FCC policies prevented the emergence of DuMont, an aspiring nationwide television network.[12] Regulators took steps to constrain spectrum allocation and steer licenses away from national or regional TV networks, often in the name of encouraging more media “localism.”[13] Unfortunately, those efforts greatly limited the emergence of competitors like DuMont, which could not satisfy the FCC’s rigged preferences.[14] Consumers would have to wait another 30 years before Rupert Murdoch would launch Fox as a fourth national network in the mid-1980s, and even that effort was resisted initially by some policymakers.[15] Similarly, the advent of cable and satellite television was initially met with regulatory roadblocks as broadcasters lobbied for continued protection from competition.[16] In addition, as recently as the mid-2000s, terrestrial radio broadcasters lobbied heavily to stop satellite radio operators from launching competing services on the grounds that it might undermine media localism.[17] Thankfully, that anticompetitive protectionist effort failed.

Having created monopolies through misguided regulations, policymakers also took some steps to control them through many additional layers of convoluted rules that sought to limit the market power or ownership rights of many firms.[18] Regulation thus became a self-perpetuating cycle as more and more rules were added over time to address problems created by earlier misguided mandates and illogical interventions.[19] As a result, regulatory accumulation became a chronic problem in the ICT sector, and it remains so today for traditional telecom and media services. The FCC currently ranks first among independent regulatory agencies in terms of rules promulgated, outpacing the Securities and Exchange Commission and Nuclear Regulatory Commission, among others.[20]

It is hard for law to restrain markets and innovation forever, however, and beginning in the 1970s and continuing into the 1980s, small cracks began to appear in the old regulatory edifice.[21] Once cable television and wireless networks became feasible and then increasingly popular, incumbent operators and regulators could not contain them entirely; they could only slow their advance.[22]

During that same period, a more significant technological storm was gathering that the FCC and other policymakers had even less ability to constrain: the data and computing revolutions. Whereas cable, satellite and wireless innovators were cursed to be “born in regulatory captivity” (i.e., they were immediately confronted with the existing thicket of regulations and agencies), the data and computing sectors were largely “born free” of any preexisting sectoral rules or regulators.[23] Instead, common law rules and general consumer protection laws and agencies covered these technologies.

Being “born free” constituted a major strategic advantage because it gave entrepreneurs in those new sectors greater breathing room to innovate without prior restraint. In the 1970s and 80s, for example, innovators like Steve Jobs of Apple and Bill Gates of Microsoft did not need to seek out prior approval, such as a license or other operating permit, to launch a new line of computers or software programs. They just did it because they were free to do so — and that is still the case for most computing and online services today. In other words, these digital innovators were not stuck in the captivity of an innovation cage; they enjoyed the freedom to innovate that accompanies an open, dynamic innovation culture.

Once entrepreneurs and consumers experienced the benefits of market-based innovation, the new technological era of permissionless innovation was underway. Policy actions were still needed, however, to ensure a new pro-freedom innovation culture could take hold.

[Note: My study next explains how, by the mid-1990s, most industry analysts and many policymakers were raising questions about the failures of the traditional regulatory regime and why they took steps to abandon it. These experts came to view the old regulatory system with greater suspicion once its costs became evident. They worked in a bipartisan way to improve American innovation culture by reforming or eliminating many licensing requirements and, most importantly, deciding not to apply those disastrous old regulations to the Internet, e-commerce, or digital technologies.]

The Two Great Lessons of Analog-Era Tech Policy

The history of America’s analog-era policy regime offers two crucial lessons for technological governance more generally:

1) No matter how well intentioned any rules may be, preemptive prior restraints on innovative activities will generate many different costs and unintended consequences. Just because someone claims that something is “in the public interest” does not automatically mean it is. Real-world results matter more than good intentions.

2) We cannot pursue better, market-driven ways to address important policy goals when heavy-handed regulation makes them difficult or impossible. The touchstones of good policy are humility and flexibility.

Consider how both lessons were evident in the past — but also largely ignored — in the context of wireless spectrum and universal service policy.

· Spectrum policy: An economist who proposed using property rights and auctions to better allocate wireless spectrum was laughed out of the room at a 1959 FCC hearing.[24] At the time, it was thought to be in the public interest to assign spectrum through a top-down licensing regime that tightly limited the use and sale of any wireless service. In essence, it was an inflexible “zoning” regime for spectrum use. While it is impossible to know exactly how much earlier robust, nationwide wireless markets might have developed had policymakers heeded economists’ advice to tap the power of market incentives, it is likely that the opportunity costs of this policy miscalculation were significant. Investments in alternative communications and media platforms, services and devices were delayed for decades until the FCC liberalized spectrum markets and used auctions to allocate wireless services in the 1990s.

To appreciate the true costs of this decision, imagine if the FCC had possessed authority over the computer sector during the 1950s and used that authority to dictate that only vacuum tube mainframes were “in the public interest” and would, therefore, be federally licensed and regulated. Transistorized computers and the personal computer revolution would likely have been delayed significantly had such a regulatory regime been in place, because massive mainframes were thought to be the only machines capable of serious computational tasks.

· Universal telephone service: Another example of good intentions gone wrong involves universal service. Ensuring that the public was connected to basic telephone service was a worthy policy goal over the last century, but it did not need to be limited by inflexible, highly inefficient, top-down regulatory mandates and controls. Instead, policymakers could have opted to “voucherize” universal service assistance, allowing consumers to shop around for telecom and media service alternatives using a means-tested government voucher.[25] Unfortunately, instead of passing out pro-competitive vouchers to generate pro-competitive incentives, governments passed out local monopolies and then demanded that those firms always offer the community basic service.

Imagine if, in the name of ensuring that every community had low-cost food, the first grocery store or restaurant in town had to serve everyone the same (price-controlled) food in exchange for protection from any potential competitors that followed. That would be a highly inefficient way to pursue such goals, yet it was the law of the land for almost a full century for telecommunications in the United States. Things could have worked differently with vouchers. Just as policymakers long ago adopted food stamps to give people the flexibility to buy the food they wanted from the store they wanted, policymakers could have similarly used means-tested “phone stamps” to let households shop around for their communications or media needs. Unfortunately, even now, America is still struggling to find efficient ways to provide broadband access to underserved individuals, even though pro-competitive solutions could be implemented.[26]

The common themes in both of these examples were mandates over markets; top-down regulatory decision-making over bottom-up, consumer-driven processes; and policy rigidity over flexible experimentation. These policy choices restricted entrepreneurialism, competition and consumer choice in myriad ways. In short, they created a suboptimal innovation culture that had to be abandoned to unlock the full potential of the American ICT sector.

America turned an important corner when policymakers moved away from that regime to close the 20th century and embraced a fresh approach for computing, data services and the digital economy. The defining feature of the new approach was an embrace of permissionless innovation, and a corresponding rejection of the precautionary principle as the default for ICT policy.

Generally speaking, flexible, bottom-up, consumer-driven governance beats technocratic, top-down regulation. America did not need a grandiose regulatory plan or over-arching bureaucracy to guide the development and growth of the internet. In fact, digital entrepreneurialism and online innovation flourished precisely because the U.S. did not adopt such mandates or technocratic agencies.[27] Had America created a Federal Computer Commission or a National Internet Agency, the resulting red tape burdens would have left us no better off than Europe, where mountains of paperwork compliance requirements resulted in a staggering loss of competitive advantage.[28] It is difficult to name any leading global information technology companies based in Europe because heavy-handed regulations and overlapping bureaucracies kneecapped digital entrepreneurs and forced many European innovators and investors to jump the Atlantic and launch their ideas here instead.[29]

Critics will claim that many unforeseen privacy and security problems developed due to the rise of the internet and digital networks. That is true, and we are still devising solutions to many of those issues. But we should not fool ourselves into believing we could have solved all of these problems preemptively through regulatory mandates — at least not without fundamentally stunting the development of digital technologies the same way telecom and media innovation and competition were stifled in the previous century by overbearing regulatory mandates.

We should work through challenges as they come at us, but the right policy default for the internet and now AI continues to be innovation allowed. Entrepreneurs and their creations must be treated as innocent until proven guilty.


[1] Randolph J. May, “The Public Interest Standard: Is It Too Indeterminate to Be Constitutional?,” Federal Communications Law Journal 53 (2001), pp. 427–468.

[2] Ithiel de Sola Pool, Technologies of Freedom (Harvard University Press, 1983).

[3] Ryan Hagemann, “We Don’t Need Government to Regulate the Internet,” Real Clear Policy, March 16, 2018.

[4] Adam Thierer, “Unnatural Monopoly: Critical Moments in the Development of the Bell System Monopoly,” Cato Journal 14:2 (Fall 1994), pp. 267–285.

[5] Ibid.

[6] Brent Skorup, “Censored,” Discourse, Jan. 19, 2023.

[7] Adam Thierer, “Why Regulate Broadcasting: Toward a Consistent First Amendment Standard for the Information Age,” CommLaw Conspectus 15:2 (2007), pp. 431–482.

[8] Thomas Streeter, Selling the Air: A Critique of the Policy of Commercial Broadcasting in the United States (The University of Chicago Press, 1996), p. 189.

[9] Adam Thierer, “Is the Public Served by the Public Interest Standard?,” The Freeman 46:9 (September 1996), pp. 618–620.

[10] Peter Huber, Law and Disorder in Cyberspace: Abolish the FCC and the Common Law Rule the Telecosm (Oxford University Press, 1997).

[11] Adam Thierer and Brent Skorup, “A History of Cronyism and Capture in the Information Technology Sector,” Journal of Technology Law & Policy 18:2 (2013), pp. 131–196.

[12] David Weinstein, The Forgotten Network: DuMont and the Birth of American Television (Temple University Press, 2006).

[13] Ibid.

[14] Ibid.

[15] Daniel M. Kimmel, The Fourth Network: How Fox Broke the Rules and Reinvented Television (Ivan R. Dee, 2004), pp. 9–13.

[16] Thomas W. Hazlett and Matt L. Spitzer, Public Policy toward Cable Television: The Economics of Rate Controls (MIT Press, 1997).

[17] Sarah McBride, “Satellite Radio’s New Local Content Riles Broadcasters,” The Wall Street Journal, July 25, 2005.

[18] Adam Thierer and Brent Skorup, “Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals,” Mercatus Center at George Mason University, Mercatus Research (April 29, 2014).

[19] Raymond Gifford, “The Continuing Case for Serious Communications Law Reform,” Mercatus Center at George Mason University, Nov. 9, 2011.

[20] Clyde Wayne Crews, Jr., Ten Thousand Commandments: An Annual Snapshot of the Federal Regulatory State: 2022 Edition (Competitive Enterprise Institute, 2022), pp. 61–62.

[21] Ithiel de Sola Pool, Technologies without Boundaries (Harvard University Press, 1990).

[22] Jonathan Emord, Freedom, Technology and the First Amendment (San Francisco: Pacific Research Institute, 1991).

[23] Adam Thierer, “What 20 Years of Internet Law Teaches Us about Innovation Policy,” The Federalist Society, May 12, 2016.

[24] Thomas W. Hazlett, “The Wireless Craze, the Unlimited Bandwidth Myth, the Spectrum Auction Faux Pas, and the Punchline to Ronald Coase’s ‘Big Joke’: An Essay on Airwave Allocation Policy,” Harvard Journal of Law & Technology 14:2 (Spring 2001), p. 343.

[25] Adam Thierer, “Universal Service: The Fairy Tale Continues,” The Wall Street Journal, Jan. 20, 1995.

[26] Jonathan Cannon, “Broadband Buildout Stymied by Bureaucracy,” R Street Institute, June 8, 2022.

[27] Peter Huber, Law and Disorder in Cyberspace: Abolish the FCC and the Common Law Rule the Telecosm (Oxford University Press, 1997), p. 9.

[28] Adam Thierer, “Why is the US following the EU’s lead on artificial intelligence regulation?” The Hill, July 21, 2022.

[29] Josh Withrow, “Don’t stifle U.S. tech innovation with Europe’s rules,” Detroit News, Oct. 9, 2022; Wayne T. Brough, “The EU’s Internet Power Play,” Inside Sources, Feb. 27, 2020.

Recent Essays, Events or Podcasts on AI Licensing

For some additional skeptical takes on AI licensing from different perspectives, check out this newsletter post by Timothy B. Lee (“Congress shouldn’t rush into regulating AI”), and also this essay by Sayash Kapoor and Arvind Narayanan (“Licensing is neither feasible nor effective for addressing AI risks.”)


