How to Regulate Dangerous Artificial Intelligence

Carlos E. Perez
Published in Intuition Machine · Aug 6, 2017
Credit: https://unsplash.com/@agkdesign

The response by AI experts to Musk’s comments about the need for Artificial Intelligence (AI) regulation has been almost knee-jerk. The prevailing reaction has been that no specific areas requiring regulation can be identified. I suspect that most AI researchers have not made a serious effort to consider the big picture.

I am deliberately avoiding the “Why?” of AI regulation here. Rather, I will discuss the questions of “What?” and “How?”. Please, in your comments, avoid short-circuiting the discussion by asking why AI regulation is needed. You can find opinions on that elsewhere. In fact, if you are able to corner Elon Musk, perhaps he can give you a much better explanation.

Allow me to first broaden the scope of this discussion to the concept of automation. That is, let’s look at the continuum of automation, recognize that AI is a more capable form of automation, and then explore existing regulation that applies to the use of automation.

There is a wide continuum of automation:

Level 0 (Manual Process) — Zero automation.

Level 1 (Attended Process) — Users are aware of the initiation and completion of each automated task. The user may undo a task in the event of incorrect execution. Users are, however, responsible for the correct sequencing of tasks.

Level 2 (Attended Multiple Processes) — Users are aware of the initiation and completion of a composite of tasks. The user, however, is not responsible for the correct sequencing of those tasks. An example would be booking a hotel, car, and flight, where the exact ordering of the bookings may not concern the user. Failure of such a composite task may, however, require more extensive manual remedial action. An unfortunate example of a failed remedial action is United Airlines’ “re-accommodation” of a paying customer.

Level 3 (Unattended Process) — Users are notified only in exceptional situations and are required to intervene in those conditions. An example is a system that continuously monitors the security of a network; practitioners take action depending on the severity of an event.

Level 4 (Intelligent Process) — Users are responsible for defining the end goals of the automation; however, all aspects of executing the process, as well as the handling of in-flight exceptional conditions, are handled by the automation. The automation is capable of performing appropriate compensating actions in the event of in-flight failure. The user is, however, still responsible for identifying the specific contexts in which the automation can safely be applied.

Level 5 (Fully Automated Process) — This is a final, future state in which human involvement in the process is not required. It may not, of course, be the last level, because it does not assume that the process is capable of optimizing itself to make improvements.

Level 6 (Self-Optimizing Process) — This is automation that requires no human involvement and is also capable of improving itself over time.

These levels are inspired by the Society of Automotive Engineers’ SAE J3016 standard (“automotive” here meaning cars, not automation in the computer sense). Level 6, however, goes one step beyond the SAE levels. We can think of it as a level required in certain high-performance competitive environments such as Robocar races and stock trading.
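As a thought experiment, the continuum above can be captured in code. This is only a sketch; the level names are paraphrased from this article, and the numbering follows the article (not SAE J3016, which stops at Level 5):

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Levels of automation as described above (Level 6 extends SAE J3016)."""
    MANUAL = 0              # Level 0: no automation
    ATTENDED = 1            # Level 1: user supervises each task
    ATTENDED_MULTIPLE = 2   # Level 2: user supervises composite tasks
    UNATTENDED = 3          # Level 3: user notified only on exceptions
    INTELLIGENT = 4         # Level 4: user sets goals; automation executes
    FULLY_AUTOMATED = 5     # Level 5: no human involvement required
    SELF_OPTIMIZING = 6     # Level 6: no human involvement, improves itself

# Because IntEnum is ordered, "at level N or above" is a plain comparison:
assert AutomationLevel.SELF_OPTIMIZING > AutomationLevel.UNATTENDED
```

Making the levels ordered integers matters for what follows: it lets “applies at this level and every higher level” be expressed as a simple comparison.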

Let’s now examine laws in various domains and relate them to the levels prescribed above. I think it is safe to assume that a law that applies at a lower level also applies at every higher level.

Here are some laws that are on the books:

Robocalling — FTC rules enacted in 2009 prohibit prerecorded telemarketing calls unless the marketer has the consumer’s prior written authorization. Further FCC regulations on robocalls for political campaigns were enacted in 2016. Level 1 automation.

Spam — The CAN-SPAM Act of 2003 basically says that “e-mails should not mislead recipients over the source or content of them, and that all recipients of such emails have a right to decline them.” Level 2 automation.

Viruses, Trojan Horses, and Worms — The UK’s Computer Misuse Act of 1990 covers unauthorized access and “unauthorised modification of computer material”. Level 3 automation.

Programmed Trading — After the crash of October 19, 1987, also known as “Black Monday”, new rules required exchanges to have “trading curbs” or “circuit breakers” that allow them to halt trading in instances of high volatility. Level 3 automation.

High Frequency Trading — The CFTC has proposed regulations on automated trading (AT) tactics such as “spoofing,” “flash trading,” and “quote stuffing”. HFT leverages computers to exploit market inefficiencies that arise from delays and participant response times. In general, financial organizations make a living by hacking our financial system to find inefficiencies and loopholes where they can legally rob market participants. Level 2 and Level 3 automation.

Drone Regulation — FAA regulations that are now in effect. One regulation general enough to apply to other automation: “Drones have to remain in visual line of sight of the pilot”. In short, strictly Level 2 automation for drones!

Regulation of Genetic Engineering — Genetic engineering of animals is limited to a few use cases. It is mostly legal for experiments and the development of derivative products; however, it is illegal to release these genetically engineered animals into the wild! Level 6 automation.

Biological Weapons — The Biological Weapons Anti-Terrorism Act of 1989 makes it illegal to buy, sell, or manufacture biological agents for use as weapons. This is Level 6 automation. Level 2 weaponized automation is already used in theater in the occasional “drone strike” in the Middle East. Note that cruise missiles are Level 4 automated weapons.

Nuclear Non-Proliferation Treaty — Two aspects may be relevant to AI: states agree “not in any way to assist, encourage, or induce” a non-nuclear-weapon state to acquire nuclear weapons (Article I), and the treaty affirms the “right of all Parties to develop nuclear energy for peaceful purposes and to benefit from international cooperation in this area” (Article IV). In short, if you get the technology first, you can demand laws that let you keep your monopoly!

This is just a sampling of existing laws that regulate either automation or dangerous technologies. What can we now generalize from them?

  1. Automation requires permission to interact with humans.
  2. Automation cannot mislead humans as to its identity.
  3. Automation shall not make unauthorized modification of information.
  4. Automation shall be automatically shut down in anomalous situations.
  5. Automation shall be restricted in the methods it may use to deceive other participants.
  6. Automation shall always be attended by a human.
  7. Level 6 automation shall never be let out into the wild. Level 6 automation shall be available only for experimentation and the creation of non-Level 6 automation.
  8. It shall be illegal to buy, sell, or manufacture weaponized Level 6 automation.
  9. Whoever gets to Level 6 automation first decides the rules for everyone else. Otherwise known as the “Golden Rule for AI”: whoever owns the gold makes the rules!
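Under the assumption stated earlier, that a law attached at some level also binds every higher level, a sampling of these generalized rules could be checked mechanically. A hypothetical sketch (the rule wordings and level assignments paraphrase this article; the data structure and function are invented for illustration):

```python
# Map each generalized rule to the lowest automation level it was derived from.
# A rule derived at level N is assumed to bind every level >= N.
RULES = {
    "requires permission to interact with humans": 1,      # robocalling
    "cannot mislead humans as to its identity": 2,         # spam
    "no unauthorized modification of information": 3,      # computer misuse
    "shut down automatically in anomalous situations": 3,  # circuit breakers
    "must be attended by a human": 2,                      # drone line-of-sight
}

def applicable_rules(level: int) -> list[str]:
    """Return every rule that binds automation operating at `level`."""
    return [rule for rule, min_level in RULES.items() if level >= min_level]

# A Level 3 system inherits the Level 1 and Level 2 obligations as well:
print(applicable_rules(3))
```

The point of the sketch is the inheritance: regulating a Level 3 system does not require new text for the Level 1 and Level 2 concerns, because those obligations carry upward automatically.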

A bit of a disclaimer: I wrote this early on a Sunday morning. It’s just a first cut of some ideas, and I am sure that others have better ones. My motivation for this article is to jump-start the conversation on this pressing topic.

Strategy for Disruptive Artificial Intelligence: https://gumroad.com/products/WRbUs
