The FTC Looks to Become the Federal AI Commission

Adam Thierer
Jul 15 · 10 min read


The Federal Trade Commission (FTC) has become increasingly active on artificial intelligence (AI) policy issues and is positioning itself to serve as America’s de facto AI regulator. The agency is doing this without any congressional authorization by stretching its existing authority to cover all things algorithmic. If the FTC’s actions go unchecked, it could result in the unaccountable expansion of agency authority over a massive and important new sector within the U.S. economy. That could set back innovation and competition, undermining consumer welfare in the process.

Under Chair Lina M. Khan, the FTC has been on a seek-and-destroy mission against the tech sector, looking to expansively regulate digital operators using a variety of formal and informal tactics. In the process, Chair Khan has gotten herself in serious trouble with Congress and the courts, who have pushed back against her ambitious efforts and aggressive tactics.

Undeterred by the growing opposition and an unprecedented track record of failure, Khan’s FTC has now set its sights on algorithmic innovators. While conversations about AI governance are heating up on Capitol Hill and proposals for new agencies and regulations are being floated, Khan is moving to do it all herself.

To be clear, the FTC has a legitimate role to play in addressing “unfair or deceptive acts or practices” (UDAP) that might undermine consumer welfare, regardless of which sector they occur in. So the agency certainly will have some authority over AI when bad actors use algorithmic systems to deceive or defraud consumers. But Khan’s FTC appears ready to go much further than that.

It is important to remember that the U.S. never had a general-purpose regulator for the precursors of AI, including computing, semiconductors, consumer electronics, or the internet. Unsurprisingly, innovation and competition thrived in these “born free” sectors because America made the freedom to innovate our prime directive. In the mid-1990s, policymakers wisely refused to extend analog-era regulatory regimes to digital technology sectors. This created a positive “innovation culture” that allowed the U.S. to become a global powerhouse in information and digital technology markets.

According to the Bureau of Economic Analysis, in 2021, “the U.S. digital economy accounted for $3.7 trillion of gross output, $2.41 trillion of value added (translating to 10.3 percent of U.S. gross domestic product), $1.24 trillion of compensation, and 8.0 million jobs.” In the process of generating all that economic activity, U.S. tech companies became household names across the globe, attracting talent and investment to our shores. Almost half of the top 100 digital tech firms in the world with the most employees are U.S. companies, and 18 of the world’s top 25 tech companies by market cap are U.S.-based firms.

Apparently that sort of success cannot go unpunished, however, and so today many pundits and policymakers are calling for command-and-control analog-era regulations to be revived and applied to advanced technologies like AI and robotics.

The Return of Regulation by Raised Eyebrow

While Congress has not yet made a formal move to regulate AI that aggressively, the FTC isn’t waiting around for legislators to act. In a series of recent blog posts, the agency has made it clear it stands ready to use its broad UDAP authority, or other unspecified powers, to go after claims made about algorithmic systems or applications. In April, the FTC also released a joint statement with three other agencies in which the agency heads said they would be looking to take preemptive steps to address algorithmic discrimination. Again, some of that agency activity might be warranted, but there is a legitimate question about just how expansively the agency can use its authority before it becomes a technocratic regulator in a way Congress never intended.

Consider the leaked letter that the FTC recently sent ChatGPT creator OpenAI. It’s a 20-page fishing expedition that asks almost 50 multi-part questions about everything under the algorithmic sun. There are all sorts of extraneous questions here that have nothing to do with the FTC’s traditional focus or jurisdiction, including matters relating to defamatory statements and the percentage of Chinese-language materials used in the training data. The FTC’s letter includes many pages of demands requiring OpenAI to produce all contractual agreements related to the various partnerships or marketing and licensing agreements the firm has entered into; all research related to the “accuracy or reliability” of its products; all documents relating to “testing, surveys, or other efforts to assess consumers’ understanding” of OpenAI products; and so on, and so on.

Most of this is just make-work nonsense meant to put OpenAI and the rest of the industry on notice that the agency can make their lives a living hell. This latest FTC nastygram to a tech company reflects the short-term reality of a lot of forthcoming regulation in the U.S.: We will witness a lot of jawboning and regulation-by-intimidation through implicit threats of undefined action. By a “nastygram,” I mean a letter sent by a policymaker or agency that asks probing questions backed by implicit threats of undefined future regulatory action. It’s a long-standing agency practice, but one that the FTC is using more regularly these days to influence firm decisions, especially in the digital technology sector.

The Federal Communications Commission (FCC) was a real pioneer of the regulation-by-intimidation / nastygram strategy. We called it “regulation by raised eyebrow” in the old days. The most crucial part of the regulation-by-intimidation playbook is the idea of “voluntary concessions,” which are not voluntary at all, of course. But agencies know that if they can extract such concessions without formal regulation, they can often avoid the constitutional challenges that would arise from more direct regulatory strikes. The danger is that this heavy-handed and unaccountable regulatory model is now coming to the world of AI policy.

The Need to Focus on Outputs, Not Process, When Considering AI Policy

To reiterate, the FTC can play a role in policing AI markets, but it is vital that the agency not make the mistake of trying to micromanage the process side of algorithmic systems (i.e., the inputs or mechanisms that go into creating AI applications). While greater AI transparency is on everyone’s minds these days, this is often wrongly equated with a demand that these systems be perfectly “explainable.” That is a fool’s errand because “explainability is easier in theory than reality” when it comes to algorithms, and any attempt to convert this principle into formal regulation will create serious problems for AI markets and ongoing innovation. As I recently noted in an essay on “The Most Important Principle for AI Regulation,” a process-oriented regulatory approach is problematic because:

algorithmic innovation is essentially treated as guilty until proven innocent. A process-oriented regulatory regime in which all the underlying mechanisms are subjected to endless inspection and micromanagement will create endless innovation veto points, politicization, delays and other uncertainties because it will mostly just be a guessing game based on hypothetical worst-case thinking.

Policymakers need to take the opposite approach, one focused on algorithmic outcomes:

What really matters is that AI and robotic technologies perform as they are supposed to and do so in a generally safe manner. A governance regime focused on outcomes and performance treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm and tailored, context-specific solutions to it. This principle is the key to balancing entrepreneurship and safety for AI.

The other advantage of the FTC aligning its AI focus with this principle is that doing so is more squarely in line with its statutorily authorized powers, which are supposed to be mostly ex post in nature. Innovation and competition are not supposed to be endlessly micromanaged and second-guessed under the FTC’s authorizing statutes. It’s only when things go wrong that the agency is generally supposed to intervene.

Chicken Little-ism Versus Real-World Evidence

Yet FTC staff also released a strange recent blog post on how “Generative AI Raises Competition Concerns,” which mostly stressed the technology’s hypothetical downsides instead of its benefits. All the spooky terms are in the post: “lock-in,” “leveraging,” “platform effects,” “distorting competition,” “foreclose competition through bundling and tying,” etc., etc. The Sky is Falling!

Of course, it really isn’t. Spend a few minutes going through the weekly updates over at Rowan Cheung’s “Rundown AI” and try to keep up with the endless river of breaking developments on this front. In over 30 years of covering emerging technology markets, I have never witnessed a blossoming of innovation, investment, and competitive rivalry on par with what we are witnessing in AI markets globally today. The pace of change and volume of emergent activity is positively dizzying.

According to Apptopia, AI chatbot apps “have increased 1480%, year-over-year, in the first quarter,” and over 158 were published to the app stores before April 1st. “The public launch of OpenAI’s ChatGPT product has created a consumer-facing artificial intelligence gold rush, including in the realm of mobile apps,” the site says. According to Axios, “More than a third of the hottest enterprise tech startups focus on generative AI, and even more than that are incorporating the technology as a service or feature.” That is up from just 3% of companies launching AI models and tools in 2021. Meanwhile, UBS reports that “around 2,000 start-ups globally now have AI as a core part of their business model.” And the generative AI ecosystem is evolving and expanding rapidly. The adjoining image from Antler identifies the many firms operating in 10 different sub-sectors.

[Image: map of the generative AI ecosystem across 10 sub-sectors. Source: Antler]

Moreover, and perhaps most astonishingly, the FTC’s blog post about generative AI competition never mentions China, even though the country had more generative AI start-ups receive funding during the first half of 2023 than the U.S. did. That sure sounds like important competition to me! And the memo’s timing could hardly have been worse for the agency in another way, with Elon Musk just launching his new AI company (xAI) and Meta poised to shake up the generative AI market in a huge way with the coming formal rollout of its 65-billion-parameter open-source model LLaMA. Again, the intensity of competitive entry, innovation, and investment happening right now is absolutely astonishing.

If you want a more robust breakdown of all these competitive developments, check out the annual State of AI report, the latest Stanford HAI report, or the new McKinsey report on “The Economic Potential of Generative AI.” As you wade through all that data about the explosive growth of investment and competition, ask yourself whether this is really a market the FTC should be suggesting “raises competition concerns.” Perhaps the better question the agency should be asking is how its over-zealous approach to AI could undermine these competitive dynamics and set back ongoing innovation on this front.

With Chair Khan facing serious ethics questions about her aggressive behavior at the agency, it is important that Congress exercise greater oversight of the agency and remind Khan that she must operate within the confines of the law and the limits of her authorizing statutes. The FTC is not the Federal AI Commission and it must not be allowed to behave as such.

Adam Thierer

Analyst covering the intersection of emerging tech & public policy. Specializes in innovation & tech governance.