The Current State of AI Regulation

Ryscales · Published in b8125-spring2024 · Apr 9, 2024

Recently described by Sequoia Capital as our generation's space race, generative AI has captured attention and stirred both fear and optimism among businesses, academics, and individuals worldwide. ChatGPT's rise sparked a flood of funding, new companies, and excitement in generative AI. Generative AI has already had a more successful start than the SaaS market, which took years to reach >$1 billion in revenue (something generative AI accomplished in months). Large technology companies have poured billions into start-ups or are developing their own models to capitalize on this potentially massive, transformational innovation. With this success, however, AI excitement is turning to hysteria, marked by massive fundraises, GPU purchases, talent wars, and ominous warnings from business leaders. Top executives have recently warned that our "social order could collapse" or that "AI will be smarter than the smartest human next year." Generative AI could disrupt millions of jobs across entire industries, and people and companies should be worried if this growth is left unrestrained.

Now is the time to adopt new laws and regulations to restrain and manage generative AI while continuing to support safe, regulated growth of this new technology. These laws and regulations should be a global effort to protect workers, prevent fraud, and ensure safe practices. The EU's Artificial Intelligence Act represents a step in the right direction. The act establishes a common regulatory and legal framework for AI development across a broad range of sectors within the EU. It categorizes AI applications based on their risk of causing harm, so the EU can apply greater scrutiny to higher-risk applications. For example, AI systems used in health, education, recruitment, critical infrastructure, law enforcement, or justice are deemed high-risk, while AI systems used in video games or spam filters are deemed minimal-risk. This approach lets less harmful applications continue to innovate largely unhindered while subjecting riskier applications to closer oversight. However, the act does not go far enough, and it does not cover countries outside the EU. Conformity assessments under the new rules can be self-assessments without third-party reviews, opening the potential for fraud and insufficient controls that companies could abuse in their race for an edge in the competitive AI space. Concerns have also been raised about AI applications that are vaguely classified or not classified at all: deepfakes used to spread political misinformation or non-consensual imagery are not necessarily defined as high-risk applications.

Although the EU Artificial Intelligence Act is a solid starting point, large countries like China, India, and the U.S. need to step up to help establish clear regulatory and legal constraints at a global level. The majority of leading AI companies are based in the U.S., yet the U.S. has only a complex patchwork of regulation at the state level: more than a dozen states have passed laws covering AI use, but there has been no serious AI legislation nationally. Additionally, India and China both have large and growing consumer bases but lack clear and substantive AI regulations. These countries need to lead the charge on AI regulation before AI grows too big and complex to regulate effectively.

The hope is that countries can follow through on their pledges from last year's two-day AI summit in England, where over 24 countries committed to closer cooperation in evaluating the risks posed by AI systems and exploring potential legal frameworks to govern their deployment. This pledge was the first major international statement to acknowledge the existential risks of powerful new AI models and the need to work together. China's involvement in the conference was also significant, as many feared that China and the U.S. would turn AI into an arms race between the two countries. As U.K. Prime Minister Rishi Sunak noted during the conference, "there can be no serious strategy for AI without at least trying to engage all of the world's leading AI powers," and it was encouraging to see many of the leading AI countries in attendance. Plans to hold further AI safety summits in South Korea and France this year will test these countries' willingness to put tangible regulations and laws in place to govern AI development.

As AI rapidly evolves, countries and policymakers will need to come together quickly and enact real regulations and plans to combat the negative consequences of new, powerful AI models. A patchwork of differing regulations and laws will do little to prevent abuse and the existential risks posed by AI. As world leaders and businesses sound the alarm on the impact of generative AI, we can only hope that regulatory bodies take this seriously over the coming months and years.
