Can artificial intelligence be regulated? Perhaps not.
Recent months have seen several government initiatives to regulate artificial intelligence (AI). The European Commission issued a groundbreaking proposal for the regulation of AI in April this year. Last November, the White House issued a memorandum providing “Guidance for Regulation of Artificial Intelligence Applications.” The OECD’s AI Policy Observatory lists over 600 policy initiatives from more than 60 countries.
The intent to regulate the development of AI to ensure adherence to ethical principles and promote algorithmic accountability and transparency is laudable. But is it practical to regulate AI? Perhaps not.
Rapid advances in AI
Artificial Intelligence is a rapidly evolving field. This fast pace can be illustrated by the development of AI-based predictive text models. In November 2019, OpenAI, a San Francisco-based AI research organization, released GPT2 (Generative Pre-trained Transformer 2), an open-source AI capable of generating text almost indistinguishable from human writing. The GPT2 language model has 1.5 billion parameters. GPT3, introduced six months later, has 175 billion parameters, an increase of more than a hundred-fold. The Beijing Academy of Artificial Intelligence (BAAI) released its “Wu Dao” AI system in June this year. The system has 1.75 trillion parameters, more than a thousand times as many as GPT2.
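To put those jumps in perspective, a quick back-of-the-envelope calculation using the publicly reported parameter counts cited above looks like this (a sketch only; the figures are the headline numbers, not official benchmarks):

```python
# Back-of-the-envelope comparison of the model sizes cited above.
gpt2 = 1.5e9        # GPT-2: 1.5 billion parameters
gpt3 = 175e9        # GPT-3: 175 billion parameters
wu_dao = 1.75e12    # Wu Dao: 1.75 trillion parameters

print(f"GPT-3 vs GPT-2:  {gpt3 / gpt2:,.0f}x")    # ~117x, i.e. more than a hundred-fold
print(f"Wu Dao vs GPT-2: {wu_dao / gpt2:,.0f}x")  # ~1,167x, i.e. more than a thousand-fold
```

In other words, the headline model size grew by roughly three orders of magnitude in about a year and a half.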
The speed of technological development far exceeds the pace at which human institutions operate. Carl Benz applied for a patent for a “vehicle powered by a gas engine” in 1886. The first car built by the Ford Motor Company was sold in 1903. However, it took more than half a century for seat belt laws to be enacted; the National Traffic and Motor Vehicle Safety Act in the US became effective only in 1966. Unlike in the fable, the regulatory tortoise cannot keep up with the technology hare.
In the case of AI, technology is evolving at an accelerating pace. There are concerns about the imminent emergence of Superintelligence, in which machine intelligence exceeds the combined intelligence of the entire human race. This potential danger has led people like Elon Musk, Stephen Hawking, and Bill Gates to sound the alarm on AI.
So far, AI has been confined mainly to narrow applications like Google search, autonomous cars, or voice assistants like Alexa, Cortana, and Siri. However, AI capabilities are fast becoming broader. As each new algorithm is connected, the complexity of the system grows exponentially. An AI that generates the virtual twin of a city using drone footage is one thing. But an AI that combines the virtual twin with real-time data from millions of IoT devices and sensors across the city represents a very different level of complexity.
Previously, algorithms were developed using centralized data sources. Recent advances in swarm learning and swarm intelligence distribute algorithmic learning to edge devices and computing systems. Instead of transferring the data itself, only the learning from the distributed systems is transferred for the centralized development of algorithms. If an algorithm demonstrates bias, isolating the edge device that introduced the bias represents another level of complexity. And if AI is used to monitor and check other AIs, it only means more complexity in the system, not less.
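To see why accountability becomes harder in such a setup, consider a simplified sketch of the idea, with invented node data and a plain weight-averaging step standing in for any real swarm learning protocol:

```python
import numpy as np

# Hypothetical edge nodes: each holds private data that never leaves the device.
# Only the locally learned weights are shared for central aggregation.
def train_locally(local_data, weights):
    """Illustrative local update: one gradient-like step on private data."""
    gradient = np.mean(local_data, axis=0) - weights  # stand-in for a real training step
    return weights + 0.1 * gradient

def aggregate(weight_sets):
    """Central coordinator averages the learned weights from all nodes."""
    return np.mean(weight_sets, axis=0)

# Three edge devices with data distributions the aggregator never sees;
# the third device's data is deliberately skewed (loc=5.0).
edge_data = [np.random.normal(loc=m, size=(100, 4)) for m in (0.0, 0.5, 5.0)]
global_weights = np.zeros(4)

for _ in range(10):
    local_weights = [train_locally(d, global_weights) for d in edge_data]
    global_weights = aggregate(local_weights)

print(global_weights)  # skew from the third node is already blended in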
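Even in this toy setting, the aggregated weights carry the skew of the third node without any record of which node introduced it, which is exactly the difficulty of isolating a biased edge device described above.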
In a recent paper titled ‘Superintelligence Cannot be Contained’ in the Journal of Artificial Intelligence Research, contributors from several academic institutions point out that it is theoretically impossible to contain a superintelligence. In his book Superintelligence, Nick Bostrom, the founding Director of the Future of Humanity Institute at Oxford, has made a somewhat similar point.
There is no way to guarantee the absolute security and safety of billions of computer programs running on globally connected machines. Arthur C. Clarke wrote a short story (Dial F for Frankenstein) in 1964, warning us that once all computers on the planet were connected, they could seize control of our society with disastrous consequences. This apocalyptic finale may not be as improbable as it might seem.
The European Union’s regulatory framework
The European Commission’s proposed regulation on AI prohibits certain practices associated with AI use, establishes safeguards for high-risk applications, and imposes severe penalties for noncompliance.
As per the regulation, artificial intelligence systems will be prohibited if they pose a clear threat to individuals’ safety or livelihoods or violate EU values and fundamental rights. A violation of the prohibited AI practices carries a maximum administrative fine of €30 million or 6% of the infringer’s global annual revenue in the preceding fiscal year, whichever is greater.
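To illustrate how the “whichever is greater” clause operates, here is a minimal sketch; the revenue figures are invented for the example:

```python
def max_fine(global_annual_revenue_eur: float) -> float:
    """Maximum fine for prohibited AI practices under the proposal:
    the greater of EUR 30 million or 6% of global annual revenue."""
    return max(30_000_000, 0.06 * global_annual_revenue_eur)

# Illustrative (invented) revenue figures:
print(max_fine(200_000_000))    # 30,000,000  -> the flat EUR 30M floor applies
print(max_fine(2_000_000_000))  # 120,000,000 -> 6% of revenue exceeds the floor
```

The 6% figure overtakes the €30 million floor once global annual revenue exceeds €500 million.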
The regulation classifies as high-risk, among others, AI systems used in the management and operation of critical infrastructure, in essential private and public services, in the administration of justice, and in democratic processes. It imposes various obligations on high-risk AI systems, including establishing a risk management system and eliminating or mitigating risks through design, development, training, and testing. It also requires measures to ensure appropriate human oversight.
All high-risk AI systems will have to be registered in a publicly accessible EU database with data provided by AI system providers.
For certain AI systems that are not classified as high-risk, transparency requirements apply. For instance, providers must ensure that users are aware they are interacting with a machine so that they can decide whether to proceed.
AI systems that pose little or no risk to citizens’ rights and safety (e.g., AI-enabled video games or email spam filters) are otherwise exempt from the Regulation. The European Commission believes that most AI applications will fall into this category.
The Commission intends to establish a European Artificial Intelligence Board to ensure uniform application of the Regulation throughout the EU and facilitate its implementation. The European Commission would chair the Board, consisting of one representative from each national supervisory authority and the European Data Protection Board (EDPB). Additionally, voluntary codes of conduct and regulatory sandboxes are proposed.
The practical reality
Let us take a closer look at the feasibility of regulating AI within the framework of the EU regulations. Article 5 stipulates several AI practices that are prohibited. The very first such practice includes “the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm.”
Think of a hostile country that develops algorithms to identify, target, and persuade specific individuals on social media not to take COVID vaccines. Such efforts can cause those individuals physical harm. Yet it will be difficult, if not impossible, to link the social media posts back to the AI algorithms employed to target the individuals.
Let us take another example. AI that allows real-time remote biometric identification in publicly accessible spaces for law enforcement is prohibited. Exceptions apply when it is used for targeted searches in connection with certain serious crimes (those punishable by maximum detention exceeding three years), for finding missing children, or for preventing terrorist attacks. But identifying a suspect under any of these exceptions will most likely require continuous monitoring of public spaces, rendering the prohibition largely meaningless.
For high-risk AI systems, risk management measures and testing procedures are mandated. However, given the complexity of the algorithms, it is nearly impossible to guarantee their safety and compliance. Assessing algorithmic risk may require replicating the system in every minute detail, a task that is itself extremely challenging.
Weaponization of AI
In addition, we do not foresee a world in which states refrain from weaponizing AI for military purposes. Regulation cannot prevent such weaponization in the absence of internationally binding agreements. The United Nations has been working on curbs on the development of AI-enabled autonomous weapon systems, but the positions taken by individual countries make a ban unlikely.
The 2021 Report of the US National Security Commission on Artificial Intelligence states that “While it is neither feasible nor currently in the interest of the United States to pursue a global prohibition of AI-enabled and autonomous weapon systems, the global unchecked use of such systems could increase risks of unintended conflict escalation and crisis instability.”
China initially called for a ban on fully autonomous weapons in April 2018 but later clarified that its call applied only to use, not to development and production. Russia asserts that while it does not believe lethal autonomous weapons will become a reality soon, it is researching, developing, and investing in autonomous weapon systems and has prioritized military investments in artificial intelligence and robotics.
The world is in a frantic race to weaponize AI. The regulation and control of AI to prevent dangerous consequences is, therefore, a mere pipe dream.