What’s the Role of the US Government in the Case of an AI Apocalypse?

Jodie L. · b8125-spring2024 · Apr 9, 2024

Just over a year ago, in March 2023, an open letter called “Pause Giant AI Experiments” was released, calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” It was signed by more than 20,000 signatories, including tech titans like Elon Musk and Steve Wozniak. Two months later, in May 2023, the “Statement on AI Risk of Extinction” was released, bearing the signatures of both Sam Altman and Bill Gates (though notably not Musk).

Of course, by now we know the result. Despite the calls for caution, the industry, perhaps motivated by belief in both the transformational power of AGI and the promise of significant monetization, has continued full steam ahead. OpenAI announced in March of this year that it was planning to release GPT-5, an LLM rumored to have 17.5 trillion parameters, nearly 10x more than GPT-4. While I believe that innovation should certainly not be hampered (least of all by an unwieldy US agency carrying out cumbersome policy), there is wisdom in slowing down and approaching advancements in AI with more thoughtfulness than we currently do.

To the credit of the US government, President Biden issued a fairly comprehensive Executive Order on October 30, 2023, detailing new standards for AI. To quickly summarize the key points of the order, the Biden administration calls for:

· Safety and security, particularly around threats of bioengineering and cybersecurity;

· Protection of consumer privacy;

· Protection against fraud and deception by AI-generated content; and

· Guarantee of equity, particularly in areas of lending and hiring.

What is perhaps lacking is a robust execution plan. My fear is that, given the current pace of technological development, the US government will be unable to effectively regulate AI companies or implement this ambitious agenda. Another concern is that the Administration has largely partnered with mega tech companies like Microsoft and Google to implement these policies, which makes regulatory capture a real possibility. After all, both of these companies stand to gain quite a lot from less regulation, not more.

But does the benefit of slowing down the pace of AI development outweigh the forgone productivity gains? Judging from the trajectory of social media, I believe that AI has the potential to exacerbate social harms even more than social media did, to deleterious effect. I am not referring to the edge-case scenario of a homegrown terrorist learning how to create a biochemical weapon through ChatGPT (certainly bad), but to the more widespread social harm that AI can cause. Again, I point back to the incentives of the Big Tech companies involved in AGI, and I will focus on two areas in particular: advertising and data collection.

Advertising

Two of the major players in AGI today, Google and Facebook, run advertising-driven business models. The other two, Microsoft and Amazon, also have advertising components in their businesses to varying degrees. GenAI promises ads that can be personalized to target an audience of one, easily and at relatively low cost compared to traditional copywriting.

The problem, however, is that these AI-powered ads may contain factual inaccuracies or manipulated images, none of which is currently easy for humans or other technology to detect. In the least harmful scenario, dropship companies create AI-generated ads of their products worn by AI-humans in AI-rooms, and consumers are persuaded to purchase low-quality items at higher prices (TikTok and Instagram are already rife with advertisements for poor-quality dropship items that steal legitimate product endorsement videos from content creators). At the other end of the spectrum, extremists use AI to quickly generate a myriad of ads filled with hateful text and sprinkled with half-truths. In both examples, Big Tech has not shown that it is able (or willing) to control these problems, and AI only exacerbates the harm.

Data Collection

Secondly, the building of LLMs incentivizes both the collection of personal data to train the base model and the use of personal data to target ads at a specific audience. Nathan Sanders and Bruce Schneier, in an article published in the MIT Technology Review (which this essay references), note that AI chatbots are known to surreptitiously extract personal data in customer service interactions by asking seemingly mundane questions. In another example, sites like Reddit, Tumblr, and WordPress are already clamoring to sell personal data and blog posts for training AI models. On the flip side, AI can stoke sensationalism by continuously recommending inflammatory material to a narrow audience, as evidenced by Facebook’s role in the 2016 election.

A Conclusion

Despite the Biden Administration’s step forward in issuing its Executive Order on AI safety, my argument is that we are not moving nearly quickly enough or with enough force. The Administration is leaning on Big Tech to advise it on policy, while not being wary enough of the incentives that motivate these mega corporations. The AI apocalypse will not be caused by a superhuman computer but by a drone quietly doing the bidding of its masters, and it is the government’s role to stop this scenario.

On a final note, here are some of my thoughts on what the Administration can do:

· Establish a system for third-party auditors (similar to public accounting firms) with access to the models and training data

· Clearly define violations and set fines for companies that do not abide by the Executive Order

· Build internal expertise on AI, with additional dollars allocated to research and governmental development of AGI

Sources:

Image generated by Copilot Designer (powered by DALL·E 3).

1. Pause Giant AI Experiments: An Open Letter — Future of Life Institute

2. Statement on AI Risk | CAIS (safe.ai)

3. GPT5: Release Date, AGI Meaning And Expected Features — Dataconomy

4. AI Will Make Extremists More Effective, Too — Inkstick (inkstickmedia.com)

5. Let’s not make the same mistakes with AI that we made with social media | MIT Technology Review

6. Reddit, Tumblr, Wordpress: The deals that will sell your data to train AI models — Vox

7. AI Regulation: Why It’s Already Going Off The Rails (forbes.com)

8. Why AI still needs regulation despite impact (thomsonreuters.com)
