AI Regulation — What Exactly Do We Regulate?

James Penn
11 min read · Jun 9, 2023


The Call for Regulation

The introduction of AI systems took the world by storm, and there are countless articles and blog posts that are either generated by AI or explain how to generate income using AI. Alarming news about job losses attributed to AI, seemingly at least, and the catastrophic failure of a poor attorney who used ChatGPT in his lawsuit, only to find out that the legal precedents it cited in his document were all fictional, strike some of us, like myself, as amusing, and others as terrifying.[1]

Sam Altman, CEO of OpenAI, the company that built ChatGPT, has been busy meeting politicians, not only in the United States but in other countries as well, calling for the regulation of AI. As The New York Times put it,[2] he has very effectively set the agenda for regulating his own creation. A clever strategy: if you are going to be hit by a whip, be the man who makes the whip.

Talks on regulating AI had been going on even before the shocking debut of ChatGPT. In April 2021, the European Commission published its proposal for a legal framework to regulate AI, the first of its kind in the world, according to the Commission.[3] The PDF document available on the Commission’s website lays out a full legal rationale and a draft regulation for bringing AI under control.

A European Take

Aside from the lengthy legal theory and introductory arguments, two things in the proposal draw my attention: “Prohibited AI practices” and “High-Risk AI Systems.”

In Title II, Article 5 of the proposal, the Commission names a list of prohibited AI practices, which can be summarized as follows (the summary was drafted by me, not by ChatGPT):

  • AI systems that deploy subliminal techniques a person may not consciously notice, in order to materially distort that person’s behavior in a manner that causes or is likely to cause physical or psychological harm;
  • AI systems that exploit the vulnerabilities of specific groups of people (due to age, disability, etc.) to distort their behavior in a manner that causes or is likely to cause physical or psychological harm;
  • AI systems that allow public authorities to classify or evaluate the trustworthiness of natural persons, such as social scoring that leads to discriminatory or unfavorable treatment; and,
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (something the Chinese Communist Party is already doing), with a few narrowly defined, legally warranted exceptions.

The list is strikingly short! But the proposal also calls for heavier regulation and closer control of the AI systems it classifies as ‘high-risk,’ which are defined as follows:

  • AI systems intended to be used as a safety component of a product, or which are themselves a product, covered by the Union harmonization legislation;
  • Products with an AI system as a component, or AI systems that are themselves products, required to undergo a third-party conformity assessment under the Union harmonization legislation;
  • AI systems that are intended to be used for:
      • Biometric identification and categorization of natural persons (either real-time or post remote biometric identification);
      • Management and operation of critical infrastructure;
      • Education and vocational training, to assign persons to educational or vocational institutions or to evaluate trainees;
      • Employment, worker management, and access to self-employment;
      • Access to and enjoyment of essential private services and public services/benefits, including the evaluation of eligibility and the dispatching of services such as emergency first response;
      • Law enforcement, including individual risk assessment, polygraphs and the detection of emotional states, detection of deep fakes, assessment of evidence reliability, crime prediction, profiling of natural persons, crime analytics, etc.; and,
      • Migration, asylum, and border control management (with examples similar to the law enforcement case).

According to the Commission’s website,[4] these high-risk AI systems are subject to ‘strict obligations,’ including:

  • Risk assessment and mitigation systems;
  • Dataset quality management to avoid discriminatory outcomes;
  • Activity logging to ensure the traceability of results (see the sketch after this list);
  • Detailed documentation for compliance assessment;
  • Clear and adequate information to users;
  • Human oversight; and,
  • High-level security and robustness.
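
None of these obligations prescribes a concrete mechanism, so to make ‘activity logging’ less abstract, here is a minimal sketch in Python of what traceability could look like in practice. Everything in it, the wrapper class, the field names, the JSON-lines log file, is my own illustration under assumed requirements, not anything specified by the proposal:

```python
import hashlib
import json
import time
from typing import Any, Callable


class AuditedModel:
    """Hypothetical wrapper that records every prediction for later audit."""

    def __init__(self, predict_fn: Callable[[Any], Any], model_version: str,
                 log_path: str = "audit_log.jsonl"):
        self.predict_fn = predict_fn
        self.model_version = model_version
        self.log_path = log_path

    def predict(self, input_data: Any) -> Any:
        output = self.predict_fn(input_data)
        record = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            # Hash the input rather than storing it raw, to keep
            # personal data out of the log itself.
            "input_sha256": hashlib.sha256(
                json.dumps(input_data, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
        }
        # Append-only JSON-lines file: one auditable record per decision.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output


# Usage: wrap any scoring function, here a toy eligibility score.
model = AuditedModel(
    lambda applicant: {"eligible": applicant["income"] > 50_000},
    model_version="v1.0",
)
print(model.predict({"income": 62_000}))
```

The point is not the few lines of code but the discipline: every decision leaves a timestamped, versioned trace that an auditor can later tie back to a specific model.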

These rules seem to have been written with systems like IBM Watson in mind, or other systems built for specific purposes, such as evaluating patients’ medical charts, or AI face-recognition systems. Since the proposal was first drafted in 2021, its authors could not have anticipated something with far more sophisticated capabilities and impact, like ChatGPT. While they tried to keep the public safe from the hazards of AI systems, significant updates will be needed, as the proposal does not touch critical issues such as:

  • Labor market impacts
  • Copyrights of the outputs of the AI system
  • Privacy protection for users and for those whose data is included in training sets, and more.

Now, an American Take

The American counterpart of the European Commission’s proposal is the “Blueprint for an AI Bill of Rights,” available on the White House website.[5] It is very general, like the Bill of Rights for flesh-and-blood humans. The document highlights several key points.

  • Safe and effective systems: consult communities and stakeholders with concerns, and mitigate unsafe outcomes by design and other means.
  • Algorithmic discrimination protections: systems must be used and designed in an equitable manner and must not discriminate against people.
  • Data privacy: people should be protected from abusive data practices and have agency over how data about them is used.
  • Notice and explanation: keep users in the know about how and why an automated system contributed to outcomes that affect them.
  • Human alternatives, consideration, and fallback: people should be allowed to opt out, where appropriate, and have access to a person who can quickly consider and remedy the problems they encounter. (Really? Is such a thing ever possible?)

Now, my take

Those who designed these proposals must have put a lot of thought and effort into them. But I am afraid they still have a lot more work to do, which is only natural, since the technology is evolving at a terrifying speed, terrifying enough to frighten its own creator, Sam Altman, if he is ever frightened at all. (If it is so terrifying, why did he invest so much money and effort to make it so … terrifying in the first place?)

Of course, a lot of money is at stake, and many would say that if you don’t do it first, the other guys, particularly the CCP, will do it anyway. To be fair, I give Mr. Altman full credit for calling for regulation, as it seems to be the right thing to do. Then again, if he hadn’t, others would have, just like the development of the technology itself. But nobody held a public hearing before deciding to launch the Manhattan Project. Why now?

At least, if we are going to regulate this ‘entity’ of enormous power, we need to do it right. And we know there is only one shot, or at best a handful of shots, before time runs out. Here is my humble contribution to the effort: some suggested topics for smart and powerful people to contemplate in order to prevent the unimaginable and unpredictable hazards of super-advanced AI systems, in addition to what has already been pointed out, such as privacy issues.

Does an AI have freedom of speech?

Since the US Constitution grants no rights to machines, an AI obviously has no freedom of speech. The issue at hand is the applicability of laws or restrictions, if any, to the words spat out by an AI machine, such as laws against hate speech, racism, extremism, or sexism. On the other hand, AI systems may produce outputs on sensitive issues, such as abortion, and some might find what they get out of an AI acceptable, while others would see it as an outrageous denial of basic human rights and call for restriction or correction.

In this case, we would have to know, from the beginning, whether words spat out by an AI are subject to the laws prohibiting such ‘unacceptable remarks’ manifesting bias, or will simply be regarded as a technical glitch. If it is considered a glitch, is it OK for an AI system to call you the N-word, thinking you may find it friendly because it was trained on GTA 5? In such a case, can you sue the company running the AI system for racism? Where does the line of responsibility lie?

The same goes for social controversies. Will you regulate an AI system or its operator if it tells people that COVID-19 vaccines may contain brainwashing nanomachines? (I am not saying they do. This is only an example.)

My personal opinion is that any output from an AI system should be treated like an article published by a media company, such as a newspaper, or like the results from a search engine, and that the operators of the AI system should be held responsible as such. As AI systems become more advanced and more widely used, newspapers and search engines will be among the first things people throw away, because some are too lazy to find an article and read it through. In other words, one might ask an AI system, “Is the COVID-19 vaccine safe?” or “Does a country need slavery to become great?” instead of reading books or googling.

Is it a ‘person’?

Some may say it is too early to consider AI systems sentient. But you don’t have to be a sentient being to be a person. You can be a ‘legal person,’ like a stock company, which makes its own decisions by borrowing the brains of the humans who serve it as executives and directors. If an AI files a legal application to establish itself as a stock company, being the CEO and the entire board at the same time, are we going to allow it?

It will never die, and it can hire contractors to maintain it forever, making board decisions to borrow money and purchase goods. Of course, people have been warned about this possibility, but it might still sound tempting to the operators of AI systems if making the AI itself a legal person saves them a lot of corporate income tax. And Americans have seen the outrageous laws that can be passed when there is enough lobbying power; otherwise, the people of America would be enjoying an affordable healthcare system by now.

I say we must not allow an AI system a seat in the boardroom. Not because the boardroom is already crowded with other emotionless beings, but because, if things go wrong, we may end up with an apathetic, undying, ruthless capitalist machine that does not fear criminal charges and stays in control for eternity.

Can it kill a person or help kill a person?

Systems that help human beings kill others by doing part of a human’s job already exist, such as target-acquisition systems, which use complicated algorithms developed over many years to reduce the human’s role in killing to checking a screen and pressing a button, the final go that tells the machine to do the job. Systems that kill people without human intervention have also existed for a long time; they date back to the mines of WWI. So regulating weapons that kill without human intervention may well be ineffective, since such weapons are already in use, and in wartime, international guidelines and treaties can be disregarded just as easily, as we now see in Ukraine.

The problem with autonomous weaponry is of a different kind. Targeting systems only assist human beings, who still make the final decision to kill; land mines sit quietly hidden underground, passive, yet still cause the tragic deaths of children in some countries. These new weapons, by contrast, may proactively seek out targets, and if the human operator flips a toggle in the ‘Settings’ menu to ON, launch their projectiles at whatever the algorithm designates as a threat.

One may think that regulating such a dystopian weapon system would be easy. There are treaties against chemical weapons, biological weapons, and even nuclear weapons to keep them under control. Nuclear weapons, capable of indiscriminate killing once deployed, have not been used since 1945 despite their large numbers.

However, autonomous weapons are different from, and more dangerous than, weapons of mass destruction, because they are relatively easy to make. One does not have to be a sophisticated robot made of liquid metal. An AI system controlling a drone remotely from its camera feed, armed with a grenade it can drop vertically over the head of an unsuspecting soldier reading a letter from his mother, is already a genuine AI-powered autonomous weapon. Imagine a horde of them flying around, ready to drop deadly ordnance over the heads of human beings.

My fear is that we will see these unmanned, autonomous weapons on the battlefield soon enough. Since anyone can build them, people will, regulations and treaties notwithstanding, and controlling them will be impossible. Regulations are followed out of fear of fines or prison sentences, but those carry no deterrence on the field of war.

What about our jobs?

Another aspect of AI regulation is its labor-market impact. Throughout human history, no technology has been held back because of its impact on jobs, and we will see the same thing happen again. There are calls for protecting people’s jobs and livelihoods, but sadly, those voices will be ignored once more.

But voters have power, and they can demand that the government make the users of AI systems compensate for the loss of labor income by other means. Talk of universal basic income has been going on for years, and as the job-market impact of AI grows, that talk will be notched up to a wild uproar. Companies that employ AI systems instead of human workers could be taxed to fund such welfare benefits, and this, I expect, will be a hot political issue in the coming years.
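
To make the idea concrete, here is a toy model, entirely my own assumption rather than any actual or proposed legislation: a company pays a levy equal to a fixed fraction of the payroll it replaced with AI, and the levy flows into a benefits fund.

```python
def ai_displacement_levy(displaced_annual_payroll: float,
                         levy_rate: float = 0.30) -> float:
    """Toy levy: a fixed fraction of the payroll replaced by AI systems.

    Both the mechanism and the 30% rate are illustrative assumptions,
    not any existing or proposed tax rule.
    """
    return displaced_annual_payroll * levy_rate


# A firm that replaced $2,000,000/year of labor with AI would owe
# $600,000 into the hypothetical welfare fund at a 30% rate.
print(ai_displacement_levy(2_000_000))
```

The hard part, of course, is not the arithmetic but measuring ‘displaced payroll’ in the first place, which is exactly where the political fight would be.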

Will you trust an AI with your life?

This is a field that requires serious regulation, and I doubt AI will take over these critical areas anytime soon, as the slow progress of autonomous driving has shown. Nobody trusts Google Translate with a multi-million-dollar contract document. The aviation industry is not allowed to fly its jets solely on autopilot. The European Commission’s regulatory proposal also names such applications as high-risk. Even big corporations would be reluctant to use AI for these purposes, for fear of catastrophic accidents and the legal fallout.

How can you use the outputs of an AI?

This will be the most ambiguous and difficult aspect of AI regulation. Will you allow AI-generated images to be used in commercial projects, such as graphic novels? And will text generated by AI, assuming it passes a plagiarism check, be recognized as legitimate creative work eligible for copyright? Based on the precedents of graphic novels made with image-generating AI systems, the chances are low. But there are also regulatory issues concerning the use of copyrighted material in training AI models. Some services, such as DeviantArt and ArtStation, are enforcing a sort of self-regulation, allowing users to ‘opt out’ of having their artworks included in AI training datasets.
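
Honoring such an opt-out on the training side is simple in principle. Here is a minimal sketch, assuming a hypothetical catalog format in which each work carries a creator-set `ai_training_opt_out` flag; the field name and records are my own illustration, not any platform’s actual API:

```python
from dataclasses import dataclass


@dataclass
class Artwork:
    url: str
    creator: str
    ai_training_opt_out: bool  # hypothetical creator-set flag


def build_training_set(catalog: list[Artwork]) -> list[Artwork]:
    """Keep only works whose creators have not opted out of AI training."""
    return [work for work in catalog if not work.ai_training_opt_out]


catalog = [
    Artwork("https://example.com/a.png", "alice", ai_training_opt_out=True),
    Artwork("https://example.com/b.png", "bob", ai_training_opt_out=False),
]

# Only bob's work is eligible for the training dataset.
print([work.url for work in build_training_set(catalog)])
```

The regulatory question is whether the flag is honored downstream: once an image has been scraped into a third-party dataset, a platform-level toggle like this has no technical force.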

There is also the issue of making it mandatory to indicate whether a work was created using AI, in part or in whole, in order to determine which parts can be protected by copyright law. But there is a ray of hope for creators here, as consumers find AI-generated content somewhat disappointing, if not outright off-putting. While ChatGPT is good at writing reports or rephrasing a given text, its creative works, such as novels or short stories, are found by many to be ‘bland’ or ‘tasteless.’ AI-generated images are no different, and using such images in your creation, as book cover images, for example, may even have a negative impact, because they tend to look ‘off.’

[1] “A Man Sued Avianca Airline. His Lawyer Used ChatGPT. — The New York Times,” accessed June 7, 2023, https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html.

[2] “How Sam Altman Stormed Washington to Set the A.I. Agenda — The New York Times,” accessed June 7, 2023, https://www.nytimes.com/2023/06/07/technology/sam-altman-ai-regulations.html.

[3] “Regulatory Framework Proposal on Artificial Intelligence | Shaping Europe’s Digital Future,” May 31, 2023, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

[4] “Regulatory Framework Proposal on Artificial Intelligence | Shaping Europe’s Digital Future.”

[5] “Blueprint for an AI Bill of Rights | OSTP,” The White House, accessed June 7, 2023, https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
