Ever feel like you hear “we need to regulate AI” everywhere you turn?

Rachana Gupta · Published in Securing AI · Mar 5, 2024

Ever feel like you hear “we need to regulate AI” everywhere you turn? It’s true: figuring out how to handle this powerful tech is a hot topic these days. But what exactly does “regulation” mean?

Think of it like making the rules for the Wild West of AI. We need guidelines to keep things safe and fair, but it’s not as simple as flipping a switch. AI is layered, like a fancy cake. Imagine a user-made script working with different models, all linked to giant datasets. Where do rules for preventing fake news in a chat app even fit in? Do we regulate the big language models, the data they learn from, or everything from the cloud provider to the kid running AI code in their basement?

It’s a complex puzzle, but talking it out is key. There are two main approaches to consider:

1. Government stepping in:

  • The European Union is taking a cautious approach with its AI Act, and the UK is working on rules of its own.
  • In the US, things are moving too. The government is setting up safety guidelines, and there are even ideas for a special AI agency, kind of like a safety inspector for new tech.
  • In India, the Ministry of Electronics and Information Technology (MeitY) is also making waves in the AI regulation scene. It recently issued an advisory raising concerns about potential biases and misuse of AI tools. This, along with its ongoing work on a comprehensive AI framework, shows that India is taking a proactive approach to shaping the responsible development and deployment of AI.

But here’s the catch: How can one agency possibly check all the different parts of AI? Plus, imagine needing government approval for every new AI project. Not exactly a speedy path.

2. Playing by our own rules:

  • Another option is for the AI industry to regulate itself.
  • Think about doctors, lawyers, and advertisers — they all have groups that set standards and make sure everyone plays fair.

This might seem better, but there’s no single group that speaks for everyone in the vast world of AI. We need something like the Metaverse Standards Forum, but for AI, where everyone from big companies to individual developers can have a say.

The US government met with tech giants like Google and Microsoft to discuss AI risks. They agreed to work together to keep things safe and beneficial for everyone. This is a great first step, but can companies really put safety above profit when there’s a race to develop the latest AI?

My two cents?

Let’s start with the industry setting its own rules, not the government. Big companies like OpenAI, Google, and others should work together to create a clear and fair framework for responsible AI development. Will it happen? Only time will tell. But one thing’s for sure: if we don’t take action, those government regulations might come knocking after all.
