The Path to AI Regulation: It Might Be Time to Build Skynet

Seth Zucker
Published in b8125-fall2023
Nov 16, 2023 · 4 min read

Over the past 60 years, tech companies have continued to build upon the ideas first espoused in the 1940s by logician Walter Pitts and neuroscientist Warren McCulloch in their model of a “neural network” to create algorithms that mimic human thought processes. Fast forward to 2018: OpenAI released GPT, a large language model (LLM) capable of generating nearly human-like content and the first in a wave of generative Artificial Intelligence (AI) models. Then, in November 2022, OpenAI released ChatGPT, a chat-based interface to its LLM for public use. Less than a year later, it has become difficult to imagine a world before these models were in our lives, given their use in generating text and images, video recognition, and decryption. But along with the exuberance comes a growing list of dangers these AIs pose. From hallucinated legal cases and toxic content like deepfake videos and photos to the malicious use of decryption technology, the ethical and social risks associated with these models continue to grow. Given the fear among the public, governments, and even industry experts, there has been a surge of debate surrounding the role of regulation in governing AI’s use and future application. But if AI needs to be regulated, the question remains: by whom?

Numerous countries have begun to take steps to implement regulations and frameworks for this industry. In the US, President Biden issued an executive order focused on making AI safer. More recently, the first AI Safety Summit was held in the UK, drawing numerous world leaders and industry experts. The summit was meant to establish a shared understanding of the opportunities and risks posed by AI and resulted in a “declaration” which sets out the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.” The issue with government regulation, however, stems from governments’ historical track record, or lack thereof, in regulating Big Tech. With the growing number of known dangers, and the even larger number of unknown unknowns, how can the public expect governments that are notoriously slow to act to keep regulations current as new developments arise? Even setting speed aside, how can we as the public trust our governments to regulate these technologies when they may not even understand them, given the sophistication of these LLMs (remember, Mark Zuckerberg had to explain to a sitting US Senator that Facebook makes money through ads)? There is also the argument that, given the relatively static, statutory nature of governmental regulation, such rules would significantly hinder future development while allowing nations that do not impose the same restrictions on their companies to out-innovate those that do. As former Google Executive Chairman and current AI evangelist Eric Schmidt has warned, “There’s no one in government who can get [AI oversight] right.”

Thus, we turn to internal regulation, and already there have been calls from many at the forefront of this field. In May, Sam Altman, the CEO of OpenAI, told a Senate committee of the need for “a new agency that licenses any effort above a certain scale of capabilities.” As far back as March 2023, an open industry letter signed by Elon Musk and thousands of others called for a six-month pause on training the most powerful AI systems. Unfortunately, subsequent developments suggest that many of these calls to action may have been nothing more than virtue signaling at best, or deception at worst. While Musk was publicly calling for a pause, he was privately building out his own AI trained on X data. Nine days after his testimony, Altman spoke out against the European Union’s then-pending AI regulation, stating, “We will try to comply, but if we can’t comply, we will cease operating [in Europe].” Considering the results of the “self-regulation” employed by tech firms over the past 15 years, it would be hard to trust that industry insiders would not take the same approach they used when handling social media privacy, user manipulation, and misinformation campaigns.

So where does that leave the world? Governments are too slow and lack the appropriate knowledge, and it is unclear whether industry experts can be trusted to do the job themselves. This leads to a potential option that may now be feasible given recent advancements: Skynet, or at the very least an AI employed to regulate AI. This system would involve governments and industry experts working together to create an AI capable of monitoring, analyzing, and flagging other AI in real time. Employing an AI would mitigate the problem of AI’s continuously changing nature, given its ability to analyze large volumes of data and identify patterns far more efficiently than any human counterpart. Predictive models could be used to forecast future regulatory issues based on emerging trends, and regulators could then act quickly. This system would, however, require significant human oversight for more complex situations where human judgment and ethics remain necessary. This would not be a simple endeavor and would almost certainly require the cooperation and coordination of world governments and industry experts to set a proper framework, one for which the declaration from the AI Safety Summit could serve as a clear baseline.
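
To make the idea a bit more concrete, here is a minimal, hypothetical sketch in Python of what the core loop of such a watchdog might look like: a crude scoring function stands in for the real monitoring model, clearly dangerous output gets blocked, and anything in the ambiguous middle band is escalated to a human reviewer. Every name, keyword, and threshold below is an illustrative assumption, not an existing tool or standard.

```python
# Hypothetical sketch of an "AI that watches AI". Everything here (RiskFlag,
# score_output, the keyword table, the thresholds) is illustrative only.
from dataclasses import dataclass

# Naive stand-in for a learned risk model: a real system would use classifiers
# trained on known failure modes (fabricated citations, deepfake markers, etc.),
# not a keyword lookup.
RISK_KEYWORDS = {"deepfake": 0.9, "decrypt": 0.8, " v. ": 0.6}  # " v. " ~ possible invented case law

@dataclass
class RiskFlag:
    source_model: str         # which AI produced the content
    risk_score: float         # 0.0 (benign) .. 1.0 (severe)
    needs_human_review: bool  # escalate when the decision is a judgment call

def score_output(source_model: str, content: str) -> RiskFlag:
    """Assign a crude risk score and decide whether a human must review it."""
    lowered = content.lower()
    score = max((w for kw, w in RISK_KEYWORDS.items() if kw in lowered), default=0.1)
    # Automation handles the clear cases; humans handle the ambiguous middle band,
    # which is where the "significant human oversight" described above would live.
    return RiskFlag(source_model, score, 0.4 <= score < 0.95)

if __name__ == "__main__":
    outputs = [
        ("legal-assistant-llm", "In Smith v. Jones (1987), the court held..."),
        ("image-generator", "Rendering a deepfake of a public figure."),
        ("chat-model", "Here is a recipe for banana bread."),
    ]
    for model, text in outputs:
        flag = score_output(model, text)
        action = ("BLOCK" if flag.risk_score >= 0.95
                  else "ESCALATE TO HUMAN" if flag.needs_human_review
                  else "LOG")
        print(f"{model}: risk={flag.risk_score:.2f} -> {action}")
```

A real version would, of course, swap the keyword table for trained detectors and add logging, appeals, and audit trails, but the shape of the loop, automated triage with a human escape hatch, is the point.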

While the conversation around AI regulation continues to develop, we need to make sure that our own inherent flaws, biases, and interests do not get in the way of what is in the best interest of all of humanity, and hope that we don’t accidentally create a real Skynet in the process.
