AI Governance: Who Should Wield the Responsibility of Regulating AI?

Chigo Nwogu
UNC Blue Sky Innovations
Feb 1, 2023

Are governments knowledgeable enough about Artificial Intelligence (AI) to implement proper regulations? If the answer is no, can we trust companies to implement self-regulation that may affect the profitability of their AI tools?

To Make Provisions or Not to Make Provisions?

Let’s put on our thinking caps and place ourselves in a hypothetical scenario: You’re a farmer in a region with a year-round temperate climate, and you’ve sustained impressive crop yields since you began farming. So much so that you are a major supplier of agricultural commodities [insert whatever crops come to mind] to prominent economies globally, including your own country.

There’s steady rain in your region all year round, which is great for your crops. However, once every century a severe flood occurs, capable of destroying your crops and paralyzing the agricultural supply chain overnight. An event of this magnitude would drastically affect your economy and many others globally. Do you invest in a complex drainage system capable of providing the security you need if a major flood were to happen? Or do you bet that you’ll never experience a major flood, since one is so unlikely in any given year? Knowing how important agricultural prosperity is to your country’s economy, will the government mandate that you invest in a complex drainage system?

This analogy reflects the dilemma surrounding AI governance: should the governments and the companies involved in developing AI tools be proactive in forecasting the issues and ethical concerns that can arise from the technology? The same urgency applied to innovation should also be directed toward the ideation and implementation of AI regulations. Although often seen in a negative light, technology regulations can protect against the unforeseen hazards and misuses that come with the widespread adoption of new technologies by companies and institutions.

Why Are Regulations Relevant in the AI Market?

Businesses across several industries are adopting AI, expecting an advantage over competitors. From automating and optimizing workflows to providing personalized customer service to processing massive amounts of data to solve business problems and guide informed decision-making, interest in AI within commerce is at an all-time high. As such, market experts predict the AI market will catapult from $138.4 billion in 2022 to upwards of $1 trillion by 2032.
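For a sense of scale, those two figures imply a compound annual growth rate of roughly 22 percent. The quick back-of-the-envelope check below is a minimal Python sketch; the figures are the ones cited above, while the CAGR derivation is my own, not the source’s.

```python
# Rough check of the growth rate implied by the market forecast cited above.
start_value = 138.4   # AI market size in 2022, in billions of USD (from the article)
end_value = 1000.0    # projected market size by 2032, in billions of USD (from the article)
years = 10            # 2022 -> 2032

# Compound annual growth rate: (end / start)^(1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~21.9% per year
```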

As businesses seek to integrate AI solutions, there is a growing need to weigh the deleterious effects these tools can have on consumer trust. It’s alarming that, for all the money invested in AI development, there hasn’t been reciprocal investment in AI regulation by world governments. The proliferation of AI tools in the public sphere has contributed to many ethical concerns, such as privacy violations, behavioral manipulation, biased hiring systems, workplace surveillance, and the swifter dissemination of false or harmful information.

Regarding policy, what measures will be mandated to ensure that AI and its derivative forms are used ethically, safely, and responsibly, in a manner that protects individuals’ privacy? And most importantly, who will be responsible for creating this policy?

Self-Regulation by Businesses?

Beena Ammanath, executive director of the Deloitte AI Institute, contends that the factors affecting the ethical integrity of AI vary depending on the function of the AI tool and the organization creating it. She therefore urges organizations to regulate themselves, since governments may not grasp the full depth of the AI technologies at different companies or how they vary. How AI tools interact with customer data, whether they are deployed and licensed through a third party, what security and privacy risks they raise, and whether staff have received the requisite training to manage them all differ from company to company.

Ammanath suggests that organizations must be held accountable for evaluating the AI tools they develop at every stage of the life cycle. Microsoft, Walmart, and Salesforce are examples of businesses taking the initiative to self-regulate their AI technologies. Although we may not be able to confirm how effective these practices are at achieving their purported aims, it helps when top companies in the technology sector take the initiative and set the precedent for regulating AI technologies.

Government Regulation?

One can argue that governments would be better prepared to regulate AI if the companies developing it were more transparent about the process behind its development. Ioana Petrescu of The Hill outlines an insightful plan for government regulation of AI: educating the public, politicians, and bureaucrats; creating government institutions to draft legislation and study the future implications of AI use; and acting with greater urgency to implement regulations.

Democratic governments have historically been slow to create policies around new technology, which is not ideal in the case of AI innovation. Because AI is advancing at such a rapid pace, if governments do not act now, it will become excruciatingly difficult to create proper regulations later. However, recent trends among democratic governments across the globe may signal a shift toward faster technological regulation.

Finland has already initiated a program to educate citizens on the benefits and risks of AI. Although creating new government institutions can be a pricey endeavor for the U.S., the United Kingdom and Germany are examples of countries with government institutions studying the future implications of AI use. On the urgency front, Australia’s federal government devised an action plan in 2021 to address concerns over AI’s impact on privacy, transparency, accountability, and fairness. The change in law was prompted by a scandal in which the American facial recognition company Clearview AI scraped Australians’ social media photos to build a facial recognition tool. Additionally, the European Union is currently debating an Artificial Intelligence Act.

Which Stance Is Right?

Both stances present legitimate points on the advantages and limitations of governments and businesses in regulating AI technology. Although I may have an opinion on this topic, I’ll defer to you to decide whether governments or businesses should have the lion’s share of responsibility for regulating AI technology. Nonetheless, I think we can come to a consensus that preventative measures should be in place to account for unforeseen complications that might arise from AI’s continued advancement.

To be clear, this was not written to advocate a pessimistic outlook on AI. In fact, I would consider myself one of the biggest proponents of AI and its ability to automate tasks, improve technology accessibility, and make life much easier. However, I recommend that we as a society practice careful restraint when developing groundbreaking new AI technologies, to safeguard against potential unforeseen complications.
