Is AI Regulation Feasible, or a Pipe Dream?

Hassan Taher
3 min read · Sep 21, 2023


In the last two decades, the world has witnessed an explosion in technological advancements, with artificial intelligence (AI) at the forefront. Google, among other tech giants, has played a monumental role, investing billions into deep learning and machine learning projects and reshaping its entire infrastructure to ride the AI wave. With tools and platforms such as TensorFlow, Google Cloud AI, and AutoML, the company has primed its systems to handle the ever-growing demands of this technology.

Yet, as AI continues to flourish, concerns about its regulation and potential risks arise. Jimmy Wales, Wikipedia's founder, likened the idea of regulating AI to "magical thinking." Drawing on his extensive interactions with global politicians, Wales contends that they often lack a nuanced understanding of technology and its broader implications.

Take, for instance, the United Nations (UN), which has attempted to navigate the challenging waters of AI regulation. The decision by UN Secretary-General António Guterres to convene a Security Council meeting specifically on AI threats is commendable. From AI-powered cyberattacks to AI's potential role in nuclear warfare, the risks are undeniably vast. Guterres' initiative to establish a "High-Level Advisory Body for Artificial Intelligence" promises to bring together governmental, academic, and other perspectives to explore what feasible global regulation might look like.

But are such efforts realistic, or even viable? AI veteran Pierre Haren, with a rich history at IBM where he was involved with the Watson project, expresses skepticism. He marvels at the advanced capabilities of generative AI such as ChatGPT, which can generate content and draw high-level analogies. The concern, Haren implies, isn't just the AI itself but the inability to achieve universal agreement on its regulation. With non-cooperative nations such as North Korea and Iran in the picture, expecting unanimous adherence to AI regulations is wishful thinking.

In a parallel endeavor, the UN’s “AI for Good” initiative, founded by physicist Reinhard Scholl, seeks to harness AI for the betterment of humanity. Their objective to tackle challenges from hunger to clean water access showcases the positive side of this powerful tool. Scholl, however, doesn’t shy away from advocating regulation, emphasizing the need for safety similar to industries like automotive manufacturing.

The proposal to model AI regulation after the International Civil Aviation Organization (ICAO) receives backing from figures like Robert Opp, the UN Development Programme's chief digital officer. While Opp sees the unparalleled benefits of AI in areas like satellite imagery for agriculture, he is acutely aware of its pitfalls, echoing the need for robust AI governance.

But Wales offers a differing view. He cautions against the overreliance on tech giants like Google in the regulation discourse. Highlighting the decentralized nature of AI developments, with countless developers leveraging open-source AI software, Wales believes that regulating such a vast landscape is a Sisyphean task.

The AI landscape is as vast as it is complex. While tech behemoths like Google pave the way, a broader community of developers, startups, and individuals contributes to its mosaic. The conversation on AI regulation isn't merely about curbing potential threats but about harnessing its immense potential without stifling innovation. Whether through a UN-led initiative or industry self-regulation, the journey to effective AI governance promises to be intricate and multifaceted.


Hassan Taher, a noted author and A.I. expert, currently living in Los Angeles, CA | https://www.hassantaherauthor.com/