California’s SB 1047 Could Stop AI Startups Before They Even Start Up

Jess Miers
Chamber of Progress
8 min read · May 9, 2024


[Header image created with DALL·E 3]

The explosion of Generative AI tools has already delivered many benefits to Californians, enhancing creativity and innovation, healthcare, scientific research and development, accessibility, and even the way we work. Yet, some California policymakers appear intent on curbing its growth.

California’s latest and most restrictive AI legislation, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), creates the “Frontier Model Division,” charged with erecting and enforcing significant hurdles for emerging AI developers.

The bill proceeds under an assumption that AI is a societal menace, going so far as to empower the Governor to declare “a state of emergency relating to artificial intelligence.”

The Bill Favors Existing AI Models over New AI Models, Hindering Competition

SB 1047 simplistically categorizes the intricate world of AI into two types: derivative and non-derivative AI models.

AI models are foundational technologies that support the AI tools and services we use today. For example, large language models (LLMs) are the machine learning models behind the predictive text capabilities of generative AI services like ChatGPT, while diffusion models power image generators like Midjourney.

Similar to how developers utilize the Android and iOS platforms to create a diverse range of smartphone applications, AI developers can use existing models, like Meta’s Large Language Model Meta AI (LLaMA), as bases to develop new AI tools. There are already many open-source AI models to build upon. SB 1047 calls these existing models (and models built upon them) “derivative” AI models.

Yet the realm of AI differs from app development in its democratization potential. Developers not only refine and build upon existing models; they can also easily build and deploy their own unique models from scratch, independent of existing frameworks. SB 1047 refers to these innovative, independently trained new models as non-derivative models.

The crux of SB 1047 is this distinction between derivative (existing) and non-derivative (new) AI models. Under the bill, derivative models are exempt from regulation, whereas non-derivative models must navigate a complex array of regulatory compliance measures before developers can start training them.

The Limited Duty Exemption

Non-derivative model developers can seemingly save themselves some of the regulatory headaches if they seek a “limited duty exemption.” For eligibility, developers must comply with myriad guidelines — from federal and state mandates to “industry best practices” specified by entities like the National Institute of Standards and Technology and the newly formed Frontier Model Division. Of course, developers will also soon be burdened with tracking continuous updates from the California Privacy Protection Agency, which is still crafting its regulations on automated decision-making technologies.

The limited duty exemption applies to covered non-derivative models that are either on par with or less advanced than certain baseline models supposedly lacking hazardous capabilities. Developers must self-certify their eligibility to the Frontier Model Division, risking severe perjury penalties for missteps. A certification misreported even in good faith must be rectified within 30 days, and operation of the model halted.

For the developers who venture to use the limited duty exemption, SB 1047 also opens the door for rival companies to spot and disclose any vulnerabilities upon the model’s release. If competitors uncover such flaws by launch time, the exemption is deemed “unreasonable,” exposing the developer to significant legal jeopardy and aggressive enforcement by the Attorney General.

Given the ambiguous criteria for exemption, the severe penalties for certification errors, and the ever-evolving nature of AI that could render any model non-compliant unexpectedly, it would be manifestly safer for new model developers to adhere to the following pre-development requirements rather than gamble on qualifying for the limited duty exemption.

The Bill Imposes an Unconstitutional Prior Restraint on New AI Models

The pre-development requirements mandated by SB 1047 also raise significant constitutional concerns.

Pre-Development and Launch Requirements

Any developer who cannot obtain a coveted limited duty exemption would face a thicket of permissions before proceeding.

Developers must meet ten exhaustive requirements before even starting their projects. These include adhering to all covered guidelines (as discussed), setting up measures to prevent unauthorized access or release of model weights, and incorporating a self-destruct mechanism for the model — a nod to Asimov and this bill’s roots in sci-fi novels. Additionally, developers must draft a detailed “safety and security protocol” that ensures the model does not possess “hazardous capabilities” and includes preventive measures against “critical harms,” which the bill’s authors only hazily define.

Further complicating matters, developers must meticulously document testing procedures, outcomes, and validations proving the model’s safety — challenging tasks, particularly when development hasn’t yet begun. This documentation involves creating five comprehensive reports on testing procedures, followed by descriptions of how developers will fulfill these testing obligations, including the requirement for third-party verification of any post-training modifications.

Changes to the safety protocol must be reported to the Frontier Model Division within 10 days, and the bill adds a vague directive to implement “other measures” necessary to mitigate risks. Ultimately, if any uncertainty about the model’s safety remains, SB 1047 mandates that development halt.

After model development, developers must certify, under penalty of perjury, their compliance with all pre-training requirements to the Frontier Model Division before releasing their model or any AI tools utilizing it. Should developers doubt their ability to fully secure the model, they must refrain from releasing it.

Should they release their models to the public, developers must then annually re-certify the model’s safety, again under penalty of perjury, and monitor their model continuously, reporting any “AI safety incidents” (including third-party misuses of the model) within 72 hours.

These requirements expose new model developers to severe penalties and enforcement actions while demanding substantial upfront investment in compliance and monitoring infrastructure — long before determining the viability of their products.

Additionally, the mandate to halt development at the slightest uncertainty introduces a chilling effect on investors who may be hesitant to fund ventures in a regulatory environment where the rules are not only stringent but also prone to change. This is likely to stifle innovation, fortify the position of entrenched players outside the bill’s scope, and significantly dampen the growth of California’s AI sector, driving investment and talent toward more business-friendly locales.

Prior Restraint

The Supreme Court has consistently recognized electronic communications as protected speech under the First Amendment, as established in landmark cases like Reno v. ACLU and Brown v. Entertainment Merchants Association. Furthermore, in Sorrell v. IMS Health, the Court expressly held that “the creation and dissemination of information are speech within the meaning of the First Amendment.” This principle extends to the constitutional right to create and access information, including computer code.

But perhaps most relevant to SB 1047 is Bernstein v. Department of Justice. In that 1999 decision, the Ninth Circuit held that regulatory restrictions on the publication of encryption source code infringed First Amendment rights. Daniel Bernstein wished to publish a research paper along with its accompanying source code but was blocked by the Export Administration Regulations (EAR). He successfully argued that the regulations violated his right to free speech, with the court acknowledging that programming language constitutes speech.

Given that SB 1047 similarly enforces restrictions that effectively preclude the development and dissemination of new AI models, it arguably introduces a form of unconstitutional prior restraint on the creation of speech — in this case, code. Coupled with the “significant” financial implications highlighted by the Senate Committee on Appropriations, the bill could soon find itself among the growing list of California legislation challenged on First Amendment grounds.

The Bill Casts Too Wide a Net in Defining Computing Power and Potential Harms

While the bill’s sponsor asserts that SB 1047 targets only the most powerful AI models capable of causing catastrophic harm, the bill’s actual text paints a different picture. The stringent requirements outlined in the bill apply to non-derivative, “covered models” identified as having “hazardous capabilities.”

According to the bill, a “covered model” is defined as any artificial intelligence system trained using more than 10²⁶ integer or floating-point operations — a threshold just above the estimated training compute of OpenAI’s current GPT-4 model. This definition implies that even a marginally more powerful model than GPT-4 would be subject to these regulations.
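
For a rough sense of scale, here is a minimal sketch using the common 6·N·D approximation for dense transformer training compute (total operations ≈ 6 × parameters × training tokens). The parameter and token counts below are illustrative assumptions, not disclosed figures from any lab or from the bill.

```python
# Rough back-of-the-envelope training-compute estimate using the common
# 6*N*D approximation (total FLOPs ~= 6 x parameters x training tokens).
# All model sizes below are illustrative assumptions, not disclosed figures.

COVERED_MODEL_THRESHOLD = 1e26  # SB 1047's compute threshold, in operations

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical training runs at different scales:
runs = {
    "70B params on 15T tokens":  training_flops(70e9, 15e12),   # ~6.3e24
    "400B params on 30T tokens": training_flops(400e9, 30e12),  # ~7.2e25
    "1T params on 20T tokens":   training_flops(1e12, 20e12),   # ~1.2e26
}

for label, flops in runs.items():
    print(f"{label}: {flops:.1e} FLOPs -> covered? {flops > COVERED_MODEL_THRESHOLD}")
```

Under this approximation, only runs at roughly the trillion-parameter, tens-of-trillions-of-tokens scale cross the bill’s line today, but falling compute costs make that scale a moving target.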

Given technological advancements and trends such as Moore’s Law, it is reasonable to anticipate that forthcoming models like GPT-5 (a derivative model exempt from the bill) could soon meet this criterion. This creates a scenario in which new models, which might inject welcome competition into the frontier model landscape, face significant regulatory hurdles that dominant players do not.

Furthermore, the definition of a “hazardous capability” in the bill extends beyond obvious catastrophic harms — such as chemical, biological, radiological, or nuclear dangers — to include any harm resulting in at least $500 million in damages. While this specification might seem to narrow the scope, the bill also introduces a catch-all clause under subsection (D) that covers additional threats to public safety and security.

This catch-all, typically subject to a jury’s interpretation, introduces a high degree of legal uncertainty for developers of new models, making the risks associated with launching new AI technologies prohibitively high. This is particularly true as model developers must not only anticipate future harms that could arise as AI technology advances but also guard against potential misuses of the model by third parties, including misuse of applications built atop any covered model.

The Bill Imposes Significant Penalties on AI Developers

The bill empowers the Attorney General to initiate civil actions against parties who violate its provisions, with potential legal consequences that include preventive relief such as injunctions, especially when an AI model poses imminent public safety risks. It also enables monetary penalties, capped at 10 percent of the cost of the compute used to train the model for an initial violation and 30 percent for subsequent ones, along with punitive damages. Additionally, the bill allows for the full shutdown of any AI model found in breach of regulations.
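
To make the exposure concrete, here is a hypothetical calculation under those caps; the $100 million training cost is an assumed figure for illustration, not a number from the bill.

```python
# Illustrative penalty exposure under SB 1047's civil-penalty caps:
# up to 10% of the model's training compute cost for a first violation,
# up to 30% for subsequent violations. The $100M training cost is an
# assumed figure for illustration only.

training_compute_cost = 100_000_000  # assumed training cost, USD

first_violation_cap = 0.10 * training_compute_cost       # $10,000,000
subsequent_violation_cap = 0.30 * training_compute_cost  # $30,000,000

print(f"First violation cap:      ${first_violation_cap:,.0f}")
print(f"Subsequent violation cap: ${subsequent_violation_cap:,.0f}")
```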

It also intensifies legal accountability by expressly expanding the scope of criminal perjury, making it a more significant risk for developers, with the interpretation of such offenses left to jury discretion.

The bill is scheduled to take effect on January 1, 2026, but the Frontier Model Division is not required to release its guidelines until six months later, by July 1, 2026. Given the complex nature of these guidelines and the nascent state of the Frontier Model Division — mirroring the issues faced by the California Privacy Protection Agency in establishing regulations for the California Privacy Rights Act — developers should anticipate significant delays.

Such delays will hinder compliance efforts and stifle innovation as the AI industry waits for clear regulatory guidelines. The rapid pace of AI development compounds the problem: regulations may quickly become outdated, necessitating frequent updates to remain effective and relevant.

SB 1047 Would be a Misstep for California’s AI Future

SB 1047 would create an environment of fear and uncertainty among AI developers. Beyond that, the bill significantly disrupts the ethos of the open-source community, traditionally characterized by low barriers to experimentation and the freedom to innovate and share freely.

As California stands on the brink of pioneering AI governance, it is imperative that the future of AI be shaped by knowledge and pragmatism, not by dystopian visions better suited to cinema.

Chamber of Progress (progresschamber.org) is a center-left tech industry association promoting technology’s progressive future. We work to ensure that all people benefit from technological leaps, and that the tech industry operates responsibly and fairly.

Our work is supported by our corporate partners, but our partners do not sit on our board of directors and do not have a vote on or veto over our positions. We do not speak for individual partner companies and remain true to our stated principles even when our partners disagree.

Jess Miers
Senior Counsel, Legal Advocacy at Chamber of Progress