The Rising Stakes of Regulating Artificial Intelligence

Evamarie Augustine
Published in Quantum Economics
6 min read · Jun 27, 2023

AI needs governance. But how, and by whom?

Photo by Quantum Economics.

Since the release of ChatGPT from OpenAI last November, all things artificial intelligence (AI) — both good and bad — have captivated people everywhere. But while we have been interacting with AI for years, including Siri, Alexa, and Apple's face detection, to name a few, the launch of the generative language model pushed the promises and fears of the technology into the spotlight. The need to regulate AI is clear; the question, as with other emerging technologies, is how and by whom?

Tech leaders lobby for regulation

AI is by no means new, with roots that go back much further than virtual assistants. British polymath Alan Turing first raised the question of machine intelligence back in 1950 in his paper Computing Machinery and Intelligence. While it took decades for those ideas to be translated into working code, by 1997 reigning world chess champion Garry Kasparov had been defeated by AI — in the form of IBM's Deep Blue computer program.

So what has changed that has policymakers and tech executives suddenly calling for regulation? Both Elon Musk, CEO and product architect of Tesla, Inc., and OpenAI CEO Sam Altman are lobbying for increased oversight. Altman appeared before a Senate Judiciary subcommittee, speaking about the need for international bodies to help set standards and monitor AI. In March, Musk, Apple co-founder Steve Wozniak, and others penned an open letter calling for all labs to pause advanced AI development for six months, during which time the government could work on sensible regulation. Musk has warned that if we wait, "it may be too late to actually put the regulations in place. The AI may be in control at that point."

Is Musk looking out for the betterment of society or his own profits? While regulations are created to protect consumers, they typically carry unintended consequences, one of the most significant being the stifling of innovation and entrepreneurship. A 2015 study conducted at George Mason University found that "more regulated industries experienced fewer new firm births and slower employment growth between 1998 and 2011, and that regulations inhibited employment growth primarily in small firms rather than large firms."

The costs of complying with regulations vary, making it harder for smaller players and new entrants to turn a profit. Larger, established players, meanwhile, can absorb those costs with their ample profit margins and in-house legal teams.

The need for compliance

But regulation, particularly in a field such as AI, is necessary. The nascent industry is already full of cases where AI was misguided… or just plain wrong.

AI-powered chatbots have become extremely popular, particularly on health websites. But when providing health guidance, there is no room for bad advice. Yet that is exactly what happened earlier this month, when a chatbot for the National Eating Disorders Association (NEDA) began to give dieting tips. Called "Tessa," the chatbot had replaced the organization's human-staffed national helpline. Being told how to lose weight can be extremely damaging for anyone struggling with an eating disorder.

So what went wrong? Tessa was built, tested, and studied as a rule-based chatbot, limited to the responses it had been taught. The bot was operated by the mental health company Cass, and a recent upgrade added an enhanced question-and-answer feature, giving Tessa the ability to generate responses from new data beyond its original scripts. NEDA has since disabled the chatbot.
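The distinction matters: a rule-based bot can only ever say what its designers approved, while a generative fallback can produce text no one reviewed. A minimal Python sketch illustrates the difference (the keywords, responses, and function names here are hypothetical, not NEDA's or Cass's actual system):

```python
# Hypothetical sketch: a rule-based chatbot returns only vetted,
# pre-approved responses; bolting on a generative fallback means
# unreviewed model output can reach the user.

VETTED_RESPONSES = {
    "helpline": "You can reach a trained volunteer at our helpline.",
    "support": "Here are coping resources reviewed by clinicians.",
}

FALLBACK = "I'm not able to help with that. Please contact the helpline."


def rule_based_reply(message: str) -> str:
    """Only ever returns pre-approved text."""
    for keyword, response in VETTED_RESPONSES.items():
        if keyword in message.lower():
            return response
    return FALLBACK  # safe default when no rule matches


def reply_with_generative_fallback(message: str, generate) -> str:
    """Same rules, but unmatched messages go to a generative model,
    whose output is no longer guaranteed to be vetted."""
    for keyword, response in VETTED_RESPONSES.items():
        if keyword in message.lower():
            return response
    return generate(message)  # unreviewed output reaches the user
```

In the first version, an off-script question hits a safe default; in the second, it hits whatever the model happens to produce, which is how a safety-focused bot can end up handing out diet tips.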

When seeing is not believing — the rise of deepfakes

AI has led to an increase in deepfakes — synthetic media digitally manipulated to convincingly replace one person's likeness with another's. While such creations have existed for years, they took off in 2018 with the release of highly realistic fake videos, and they have only grown more realistic and more widespread since.

The AI, Algorithmic, and Automation Incidents and Controversies repository (AIAAIC) tracks incidents related to the ethical misuse of AI. According to AIAAIC, the number of AI incidents and controversies has increased 26-fold since 2012. Notable deepfakes include a video of Ukrainian President Volodymyr Zelenskyy telling his troops to surrender and one of Elon Musk promoting a cryptocoin. Legislation targeting deepfakes has been proposed at both the state and national levels. However, barring particular circumstances, such images and videos remain legal in the United States.

How should AI be regulated?

According to Noah Giansiracusa, tenured associate professor of mathematics and data science at Bentley University and author of How Algorithms Create and Prevent Fake News, regulation can open up a competitive playing field or entrench monopoly power; it can protect the public or protect corporate profits.

According to Giansiracusa, the question isn’t whether AI should be regulated or not, but how it should be regulated — and who will be leading that conversation. “When politicians give tech leaders outsized influence, it’s hard not to see a conflict of interest in their actions: their job is to make money and make their companies grow, not to make the world safer.”

But as with any emerging technology, applying existing policies and laws doesn’t always work, and enacting new regulations takes time.

The current state of AI regulation

China became one of the first governments to regulate deepfakes earlier this year, when the Cyberspace Administration of China began enforcing rules on "deep synthesis" technology, which includes AI-powered image, audio, and text-generation software. The regulator added to those initial guardrails in April, releasing a proposal to govern generative AI systems like ChatGPT.

The European Parliament recently drafted the AI Act, which would be one of the first comprehensive AI regulations. The framework seeks to classify and regulate AI systems by the level of risk they pose. Notably, OpenAI has already lobbied successfully to soften the rules and reduce its regulatory burden, arguing that by itself, "GPT-3 is not a high-risk system." And while Italy initially banned ChatGPT over privacy concerns, access was restored after OpenAI made changes that satisfied regulators.

In the United Kingdom, Prime Minister Rishi Sunak wants legislation requiring all AI-generated photos and videos to be labeled as such. In its recent white paper, the UK government seeks to avoid stifling innovation and instead "take an adaptable approach to regulating AI."

In the U.S., the White House released a white paper, The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. Additionally, Reps. Ted Lieu (D-Calif.), Anna Eshoo (D-Calif.), and Ken Buck (R-Colo.) introduced a bipartisan bill that calls on Congress and the White House to assemble a commission, with representatives from government, industry, civil society, and computer science, to define a comprehensive regulatory strategy.

Even though AI is not new, policies and regulations are lagging, as with other emerging technologies. Governance around AI is needed, but more regulation can also mean higher barriers to entry for new competitors. As AI leaders advocate for regulation, are they looking out for the public, or using regulation to hold back rivals? Critics argue that strict AI regulation today would lock in the field's current leaders. AI is evolving at breakneck speed, but policymakers must weigh the economic consequences carefully before enacting rules.

This article was written and edited by humans, although AI may have helped with spell check. This content is for educational purposes only. It does not constitute trading advice.

If you found this content engaging, and have an interest in commissioning content of your own, check out Quantum Economics’ Analysis on Demand service.
