Sam’s Plan to Too-Late Regulate

Sam Altman, the CEO of OpenAI, the company best known for ChatGPT, told Congress last Tuesday that he thinks the government should regulate technologies like the ones his company makes and sells, technologies that are being marketed to the world as “artificial intelligence.” The fact that Altman, who was recently quoted in a New York Times profile expressing his intention to use his company to “capture much of the world’s wealth and then redistribute it to the people,” made this statement to lawmakers in the U.S. Capitol should raise several red flags for anyone who is concerned about the way that the big tech companies are influencing global social, political, and economic life. Whenever a hyper-rich corporate executive makes a public statement embracing oversight, we should always ask ourselves: “How does this fit into the corporation’s plans for profit-making?”

It seems likely that, if we could compel OpenAI to disclose internal documents relating to this question, we would find that the company has in fact devoted significant time and money to developing a plan for managing, manipulating, and ultimately rendering moot most efforts by governments (and not just the U.S. government) to impose any real restraint on the corporation’s activities. But even without such transparency, it is clear from Altman’s remarks at the hearing that the kind of regulation he has in mind is too-late regulation — that is, regulation that comes both too late in the technology development process and too late in the industry adoption timeline to be meaningful.

It would be a mistake to think of too-late regulation as regulation that comes too late to serve any purpose; indeed, the too-late-ness is the entire purpose of too-late regulation. For corporations like OpenAI, too-late regulation is actually better than no regulation, because it takes the place of, and often preempts, any actual democratic oversight, and thus ends up protecting the companies it is supposed to hold accountable. In other words, too-late regulation is not failed regulation; it is regulation that serves private rather than public interests.

Altman all but made this explicit in the Senate subcommittee hearing on Tuesday, telling lawmakers he wants to “work with” them on a regulatory plan, the end goal of which he apparently believes should be some kind of licensing scheme. Like his recent ChatGPT PR campaign filled with dire warnings about the dangers posed by future advances in a technology his company had already unleashed upon the world, the intent of such a licensing scheme seems to be to legitimize OpenAI’s past actions and to consolidate its present market advantage. A government initiative to develop and distribute “nutritional labels” (as Sen. Blumenthal proposed during Tuesday’s hearing) for algorithmic products all but concedes to tech companies on the questions that actually matter: whether the massive data-grabs upon which OpenAI’s products depend are and should be legal, whether the extraordinarily high level of energy consumption necessary for OpenAI to train its algorithmic products is something that the government should continue to permit, or whether proprietary algorithms should be allowed to proliferate across industry in the first place, regardless of the impact on (for example) sick people or children if (for example) hospitals or schools come to be dependent on privately owned digital infrastructure.

Tech company executives want the government to regulate the technology behind their products, rather than regulate the exploitative activities necessary for them to build those products, or the ability of other private and public actors to purchase and deploy their products unconstrained by any consideration of the common good. This is why they have embraced a policy conversation framed around the problem of how to respond to “artificial intelligence.” The question “how should we regulate AI?” takes for granted that “AI” is something particular and delineable, as opposed to an intentionally vague and glamorizing marketing term. Sam Altman would love to keep lawyers on the Hill so busy trying to learn enough computer science to responsibly define “machine learning” for a big “AI” bill that none of them will have time to think about things like price-fixing, or antitrust, or privacy.

Distraction is also the real motivation behind corporate messaging filled with threats of the nonspecific doom that will befall humanity if the products they are building fall into the “wrong hands.” It is a rhetorical strategy designed to get us to think of the technology they are selling as a force of nature that we would be foolish to try to control and from which we can only hope to take some minimal shelter — shelter they will provide to us for a fee. Altman hopes this story will distract us from the fact that the enormous probabilistic engines OpenAI is building are not only not inevitable; they are not possible without extreme concentrations of wealth, and not maintainable without extreme concentrations of corporate power. It is this concentration of wealth and power that the government must act to prevent, and it is this concentration of wealth and power that Sam Altman wants to co-opt the regulatory process to conserve.

Emily Tucker is the Executive Director of the Center on Privacy & Technology.
