We Need Intelligent Government Policy on AI, not an AI Regulatory Agency

Mark MacCarthy
Aug 2, 2017


This first appeared in SIIA’s Digital Discourse Blog.

My recent InfoWorld blog took aim at Elon Musk’s call for regulation of AI research. While a deregulation-minded Washington is unlikely to set up a new federal AI agency to oversee AI applications and research, Musk insists that he wants exactly that.

In remarks after his comments to the National Governors Association meeting, Musk clarified that “the process of seeking the insight required to put in place informed rules about the use and development of AI should start now.” He compared it to the process of establishing other government bodies that regulate the use of technology in industry, including the FCC and the FAA. “I don’t think anyone wants the FAA to go away,” he said.

But this is even more worrisome. He is proposing an agency with full regulatory authority over every use of AI; only after setting up such an omnibus regulatory structure would he have the agency figure out what it should do!

But this misunderstands the nature of the regulatory issues with AI. A panel composed of industry and academic experts on AI issued a report in September 2016 and warned against regulation of AI as such: “…attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”

As every AI researcher knows, AI is not a single thing. It is hard to classify an advanced mathematical technique as AI or not AI. For research and application purposes, this hardly matters. But if this new agency is going to regulate everything called AI, then we need a more precise notion. And no one has a clue how to go about constructing it.

As one clever technologist tweeted: “Replace “AI” with “matrix multiplication & gradient descent” in the calls for “government regulation of AI” to see just how absurd they are.”
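The quip has bite because much of what gets labeled “AI” really is just linear algebra plus an iterative optimizer. As a purely illustrative sketch (the synthetic data, the tiny linear model, and the NumPy dependency are assumptions for illustration, not anything drawn from a system discussed here), here is a model “trained” with nothing more than matrix multiplication and gradient descent:

```python
# A minimal sketch: fitting a linear model with matrix multiplication
# and gradient descent, the two ingredients named in the tweet.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data, for illustration only: y = X @ true_w + noise
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(3)   # parameters to learn
lr = 0.1          # learning rate
for _ in range(500):
    pred = X @ w                      # matrix multiplication
    grad = X.T @ (pred - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                    # gradient descent step

print(w)  # converges to roughly [2.0, -1.0, 0.5]
```

Nothing in that loop could sensibly be the object of an omnibus “AI” rule; the regulatory questions only become concrete once such math is applied in a particular domain.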

Even if we could figure out what AI is for regulatory and legal purposes, the risks and benefits depend on the domain of use. Think of all the different contexts in which AI is or could be used. No single agency could possibly amass the expertise to regulate in all these areas. The better course would be to ensure that expert agencies look at the challenges that AI poses as it is applied in their area of jurisdiction.

There are questions about how to regulate AI in the context of health care. How should regulators deal with clinical decision support software? As Bradley Merrill Thompson reports, the FDA, instructed by Congress in legislation passed late last year, is working on this issue and is struggling with how to treat software that modifies and improves itself over time.

There are questions about how to regulate AI in the context of credit scoring. How should regulators deal with alternative data and new analytic software that takes advantage of the efficiency of machine learning? The Consumer Financial Protection Bureau is wrestling with this issue right now through a notice-and-comment proceeding. SIIA filed its own comments in this proceeding.

How should self-driving automobiles be regulated? Should states and localities set policy, or should there be a uniform national standard? Congress is addressing this issue: the House Energy and Commerce Committee just passed a bipartisan, industry-backed bill setting a uniform national standard and sent it on for consideration by the full House.

The point is not that AI is harmless and the government should keep its hands off. The point is that the risks are domain specific and need to be addressed by regulators with expertise in specific domains.

But Musk is worried about something more basic. In his original comments to the governors, he warned that artificial intelligence is a “fundamental risk to the existence of human civilization,” justifying “proactive regulation” to make sure that we don’t do something very foolish. So Musk thinks we need a new Federal Artificial Intelligence Agency to make sure AI researchers don’t destroy humanity.

Why does he think that? In his follow-up comments, the Tesla CEO “also explained a bit more about why he’s so attuned to the potential threat of AI, using the example of DeepMind’s AlphaGo, and its ability to defeat all human opponents many years faster than most expert observers predicted.”

But this fear that AI poses an existential risk is nothing new for him, and it seems not to depend on the latest advances in narrow AI applications. A few years ago, he compared AI research to “summoning the demon,” in which the confidence of “the guy with the pentagram and the holy water” that he can control the demon “doesn’t work out.”

What’s behind these fears of losing control to machines?

It is not the dramatic improvement in narrow artificial intelligence. Specially designed computing systems are becoming increasingly capable in more and more fields. They can recognize speech, spam, and faces; they can detect fraudulent transactions and make book, music, and educational recommendations; they can pilot airplanes and drive cars; they can select military targets and aid in disease diagnosis; they can beat humans at chess, Go, and poker. And increasingly they can perform specific jobs that were previously reserved for humans.

None of this poses a threat of losing control. But the development of artificial general intelligence might.

The dream of artificial general intelligence is a single system, or linked group of systems, that can not only perform a range of disparate tasks but can reprogram itself autonomously to learn new ones. From there it is easy to imagine a system that improves itself in a self-directed way in any field it chooses. Once it has that capacity, it will turn its learning ability on itself and soon far surpass anything that humans have been able to do.

And then we have a control problem: how do we control these superior machines to ensure that they will be safe for humans?

Musk is not alone in sounding an alarm. In 2014, Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek said “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

This is not a new fear. Back in 1965, computer scientist I. J. Good warned that “…the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

In 1993, mathematician Vernor Vinge warned, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

In 2005, Ray Kurzweil predicted and welcomed the imminent transcendence of human biology in The Singularity Is Near. In contrast, a decade later the philosopher Nick Bostrom, in Superintelligence, wondered how such a machine could be prevented from turning against us.

Of course, speculative predictions of human level artificial intelligence have been around since the dawn of the computer age. In 1965, Herbert Simon said, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1967, Marvin Minsky said, “Within a generation … the problem of creating artificial intelligence will substantially be solved.”

Today’s predictions of imminent human-level intelligence and autonomy are similarly speculative, and need to be taken with a grain of salt. There’s simply no evidence that truly self-directing machines are around the corner.

The biggest mistake is to look at the advances in domain-specific AI applications and to conclude that general AI is near.

Even those who are worried about the control problem recognize the difference between narrowly designed systems that can do one thing better than any human and a general-purpose AI system that can do anything a person can do. For example, Bostrom and Yudkowsky say, “…the missing characteristic is generality. Current AI algorithms with human-equivalent or -superior performance are characterized by a deliberately programmed competence only in a single, restricted domain. Deep Blue became the world champion at chess, but it cannot even play checkers, let alone drive a car or make a scientific discovery.” A Go-playing machine, even Google’s AlphaGo, is no more likely than a chess-playing machine to have a variety of other skills. It is not a machine likely to turn into Skynet.

A second, more conceptual mistake is to confuse advances in computing power, which will let systems process information and learn much more quickly, with advances in autonomy, which would allow systems to choose which goals to pursue and reprogram themselves in a self-directed way to achieve those new purposes. There is no indication that machines programmed to achieve one purpose will soon be able to decide spontaneously that they want to do something else. Machines designed to play Go, and to learn how to improve their Go skills, are not only unable to do other things; they also will not suddenly develop an interest in convincing governments to increase their defense budgets or in planting fake news designed to stimulate hostility among nations.

In my InfoWorld article, I quoted Andrew Moore, Dean of Carnegie Mellon’s School of Computer Science, who throws cold water on the potential for self-directed machines, saying, “…no one has any idea how to do that. It’s real science fiction. It’s like asking researchers to start designing a time machine.”

Creating a regulatory agency to supervise all AI research and applications is a solution in search of a realistic problem. We shouldn’t be driven by speculative fear into creating such an omnibus regulatory structure. The need for attention to AI in specific domains is real and urgent, and specific government agencies ought to be active and vigilant in the areas under their jurisdiction. Beyond that, government should be looking for ways to promote AI rather than creating regulatory roadblocks.


Mark MacCarthy

Senior Fellow and Adjunct Professor, Communication, Culture & Technology Program, Georgetown University