What Does It Mean To Regulate AI?
A look at regulatory approaches.
--
If I had a dollar for every time I heard that AI needs regulation, I could purchase Twitter back from Elon.
But seriously, any conversation about AI advancement will undoubtedly mention the need for regulation.
But what does that actually mean?
AI regulation refers to the process of creating and enforcing laws, policies, and guidelines that govern the development and use of AI.
But when you consider how generative AI actually works, that is not as easy as it sounds. There are many layers to the technology.
Consider a user-generated script that runs on plugins, which connect to fine-tuned models, which connect to foundation models, which are trained on datasets. In this scenario, where does a policy against misinformation in a chatbot actually get applied?
Do you regulate the use of large language models? The datasets they are trained on? The cloud hosting providers? The GPU providers? The product manufacturers? How about the kid running AutoGPT from his basement?
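To make those layers concrete, here is a minimal, purely illustrative Python sketch. Every class name and the `check_misinformation` hook are hypothetical, not any real product's API; the point is only that a single chatbot answer passes through several independently owned layers, and a misinformation rule could plausibly be attached to any one of them.

```python
# Purely illustrative sketch: every class and function here is hypothetical.
# It models the layered stack described above -- user script -> plugin ->
# fine-tuned model -> foundation model -> training data -- to ask which
# layer a misinformation rule would actually bind to.

class Dataset:
    """Training data, often assembled and owned by yet another party."""
    def __init__(self, sources):
        self.sources = sources

class FoundationModel:
    """General-purpose model trained on the dataset."""
    def __init__(self, dataset: Dataset):
        self.dataset = dataset
    def generate(self, prompt: str) -> str:
        return f"[foundation-model answer to: {prompt}]"

class FineTunedModel:
    """Specialized model layered on top of the foundation model."""
    def __init__(self, base: FoundationModel):
        self.base = base
    def generate(self, prompt: str) -> str:
        return self.base.generate(prompt)

def check_misinformation(text: str) -> str:
    """Hypothetical compliance hook -- which layer is obligated to run it?"""
    return text  # placeholder: no real filtering happens in this sketch

class Plugin:
    """Third-party plugin that the user's script calls into."""
    def __init__(self, model: FineTunedModel):
        self.model = model
    def answer(self, question: str) -> str:
        reply = self.model.generate(question)
        # A regulator could demand the check here -- or at any layer above or below.
        return check_misinformation(reply)

# The "kid in the basement" layer: a user-generated script gluing it all together.
if __name__ == "__main__":
    stack = Plugin(FineTunedModel(FoundationModel(Dataset(sources=["web crawl"]))))
    print(stack.answer("Is the moon made of cheese?"))
```

Each layer in this toy stack could belong to a different company, in a different country, under a different legal regime, which is exactly what makes the question hard.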
Nevertheless, even if we don't yet fully understand exactly what needs to be regulated, it makes sense to start having those discussions.
It’s important to note that there are two types of regulation:
- Government regulations
- Self-regulation
Let’s look at the current state of AI regulation as it relates to both of these types.
Government Regulations
The EU appears to be taking the most deliberate approach with its Artificial Intelligence Act, and the United Kingdom has also revealed its intention to regulate AI.
In the U.S., the federal government, through the Commerce Department, has started taking steps toward AI safety rules.
The National Institute of Standards and Technology has published an AI Risk Management Framework, and the White House Office of Science and Technology Policy has released its Blueprint for an AI Bill of Rights.
Ideas are also surfacing about the need for a new government body of AI specialists that would monitor and approve…