Problems with Precautionary Principle-Minded Tech Regulation & a Federal Robotics Commission

[originally published on The Technology Liberation Front on September 22, 2014.]

If there are two general principles that unify my recent work on technology policy and innovation issues, they would be as follows. To the maximum extent possible:

  1. We should avoid preemptive and precautionary-based regulatory regimes for new innovation. Instead, our policy default should be innovation allowed (or “permissionless innovation”) and innovators should be considered “innocent until proven guilty” (unless, that is, a thorough benefit-cost analysis has been conducted that documents the clear need for immediate preemptive restraints).
  2. We should avoid rigid, “top-down” technology-specific or sector-specific regulatory regimes and/or regulatory agencies and instead opt for a broader array of more flexible, “bottom-up” solutions (education, empowerment, social norms, self-regulation, public pressure, etc.) as well as reliance on existing legal systems and standards (torts, product liability, contracts, property rights, etc.).

I was very interested, therefore, to come across two new essays that make opposing arguments and proposals. The first is this recent Slate oped by John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often.” The second is Ryan Calo’s new Brookings Institution white paper, “The Case for a Federal Robotics Commission.”

Weaver argues that new robot technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” In order to preemptively address concerns about new technologies such as driverless cars or commercial drones, “we need to legislate early and often,” Weaver says. Stated differently, Weaver is proposing “precautionary principle”-based regulation of these technologies. The precautionary principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

Calo argues that we need “the establishment of a new federal agency to deal with the novel experiences and harms robotics enables” since there exist “distinct but related challenges that would benefit from being examined and treated together.” These issues, he says, “require special expertise to understand and may require investment and coordination to thrive.”

I’ll address Weaver’s and Calo’s proposals in turn.

Problems with Precautionary Regulation

Let’s begin with Weaver’s proposed approach to regulating robotics and autonomous systems.

What Weaver seems to ignore — and which I discuss at greater length in my latest book — is that “precautionary” policy-making typically results in technological stasis and lost opportunities for economic and social progress. As I noted in my book, if we spend all our time living in constant fear of worst-case scenarios — and premising public policy upon such fears — it means that best-case scenarios will never come about. Wisdom and progress are born from experience, including experiences that involve risk and the possibility of occasional mistakes and failures. As the old adage goes, “nothing ventured, nothing gained.”

More concretely, the problem with “permissioning” innovation is that traditional regulatory policies and systems tend to be overly rigid, bureaucratic, costly, and slow to adapt to new realities. Precautionary-based policies and regulatory systems focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may never come about. As a result, preemptive bans or highly restrictive regulatory prescriptions can limit innovations that yield new and better ways of doing things.

Weaver doesn’t bother addressing these issues. He instead advocates regulating “early and often” without stopping to think through the potential costs of doing so. Yet, all regulation has trade-offs and opportunity costs. Before we rush to adopt rules based on knee-jerk negative reactions to new technology, we should conduct comprehensive benefit-cost analysis of the proposals and think carefully about what alternative approaches exist to address whatever problems we have identified.

Incidentally, Weaver also does not acknowledge the contradiction inherent in his thinking when he says robotic technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” Well, if robotic technology is truly developing “faster than we can legislate it,” then “getting out ahead of it” would be seemingly impossible! Unless, that is, he envisions regulating robotic technologies so stringently as to effectively bring new innovation to a grinding halt (or banning it altogether).

To be clear, my criticisms should not be read to suggest that zero regulation is the best option. There are plenty of thorny issues that deserve serious policy consideration and perhaps even some preemptive rules. But how potential harms are addressed matters deeply. We should exhaust all other potential nonregulatory remedies first — education, empowerment, transparency, etc. — before resorting to preemptive controls on new forms of innovation. In other words, ex post (or after the fact) solutions should generally trump ex ante (preemptive) controls.

I’ll say more on this point in the conclusion since my response addresses general failings in Ryan Calo’s Federal Robotics Commission proposal, to which we now turn.

Problems with a Federal Robotics Commission

Moving on to Calo, it is important to clarify what he is proposing because he is careful not to overstate his case in favor of a new agency for robotics. He elaborates as follows:

“The institution I have in mind would not “regulate” robotics in the sense of fashioning rules regarding their use, at least not in any initial incarnation. Rather, the agency would advise on issues at all levels — state and federal, domestic and foreign, civil and criminal — that touch upon the unique aspects of robotics and artificial intelligence and the novel human experiences these technologies generate. The alternative, I fear, is that we will continue to address robotics policy questions piecemeal, perhaps indefinitely, with increasingly poor outcomes and slow accrual of knowledge. Meanwhile, other nations that are investing more heavily in robotics and, specifically, in developing a legal and policy infrastructure for emerging technology, will leapfrog the U.S. in innovation for the first time since the creation of steam power.”

Here are some of my concerns with Calo’s proposed Federal Robotics Commission.

Will It Really Just Be an Advisory Body?

First, Calo claims he doesn’t want a formal regulatory agency, but something more akin to a super-advisory body. He does, however, sneak in the disclaimer that he doesn’t envision it to be regulatory “at least not in any initial incarnation.” Perhaps, then, he is suggesting that more formal regulatory controls would be in the cards down the road. It remains unclear.

Regardless, I think it is a bit disingenuous to propose the formation of a new governmental body like this and pretend that it will not someday very soon come to possess sweeping regulatory powers over these technologies. Now, you may well feel that that is a good thing. But I fear that Calo is playing a bit of a game here by asking the reader to imagine his new creation would merely stick to an advisory role.

Regulatory creep is real. There just aren’t many examples of agencies being created solely for their advisory expertise that did not also get into the business of regulating the technology or topic included in that agency’s name. And in light of some of Calo’s past writing and advocacy, I can’t help but think he is actually hoping that the agency comes to take on a greater regulatory role over time. Regardless, I think we can bank on that happening, and there are reasons to worry about it, for reasons noted above and elaborated on below.

Incidentally, if Calo is really more interested in furthering just this expert advisory capacity, there are plenty of other entities (including non-governmental bodies) that could play that role. How about the National Science Foundation, for example? Or how about a multi-stakeholder body consisting of many different experts and institutions? I could go on, but you get the point. A single point of action is also a single point of failure. I don’t want just one big robotics bureaucracy making policy or even advising. I’d prefer a more decentralized approach, and one that doesn’t carry a (potential) big regulatory club in its hand.

Public Choice / Regulatory Capture Problems

Second, Calo underestimates the public choice problems of creating a sector-specific or technology-specific agency just for robotics. To his credit, he does admit that, “agencies have their problems, of course. They can be inefficient and are subject to capture by those they regulate or other special interests.” He also notes he has criticized other agencies for various failings. But he does not say anything more on this point.

Let’s be clear. There exists a long and lamentable history of sector-specific regulators being “captured” by the entities they regulate. To read the ugly reality, see my compendium, “Regulatory Capture: What the Experts Have Found.” That piece documents what leading academics of all political stripes have had to say about this problem over the past century. No one ever summarized the nature and gravity of this problem better than the great Alfred Kahn in his masterpiece, The Economics of Regulation: Principles and Institutions (1971):

“When a commission is responsible for the performance of an industry, it is under never completely escapable pressure to protect the health of the companies it regulates, to assure a desirable performance by relying on those monopolistic chosen instruments and its own controls rather than on the unplanned and unplannable forces of competition. [. . . ] Responsible for the continued provision and improvement of service, [the regulatory commission] comes increasingly and understandably to identify the interest of the public with that of the existing companies on whom it must rely to deliver goods.” (pgs. 12, 46)

The history of the Federal Communications Commission (FCC) is highly instructive in this regard and was documented in a 66-page law review article I penned with Brent Skorup entitled, “A History of Cronyism and Capture in the Information Technology Sector,” (Journal of Technology Law & Policy, Vol. 18, 2013). Again, it doesn’t make for pleasant reading. Time and time again, instead of serving the “public interest,” the FCC served private interests. The entire history of video marketplace regulation is one of the most sickening examples to consider, since there are almost eight decades’ worth of case studies of the broadcast industry using regulation as a club to beat back new entry, competition, and innovation. [Skorup and I have another paper discussing that specific history and how to go about reversing it.] This history is important because, in the early days of the Commission, many proponents thought the FCC would be exactly the sort of “expert” independent agency that Calo envisions his Federal Robotics Commission would be. Needless to say, things did not turn out so well.

But the FCC isn’t the only guilty offender in this regard. Go read the history of how airlines so effectively cartelized their industry following World War II with the help of the Civil Aeronautics Board. Thankfully, President Jimmy Carter appointed Alfred Kahn to clean things up in the 1970s. Kahn, a life-long Democrat, came to realize that the problem of capture was so insidious and inescapable that abolition of the agency was the only realistic solution to make sure consumer welfare would improve. As a result, he and various other Democrats in the Carter Administration and in Congress worked together to sunset the agency and its hideously protectionist, anti-consumer policies. (Also, please read this amazing 1973 law review article on “Economic Regulation vs. Competition,” by Mark Green and Ralph Nader if you need even more proof of why this is such a problem.)

In other words, the problem of regulatory capture is not something one can casually dismiss. The problem is still very real and deserves more consideration before we casually propose creating new agencies, even “advisory” agencies. At a minimum, when proposing new agencies, you need to get serious about what sort of institutional constraints you might consider putting in place to make sure that history does not repeat itself. Because if you don’t, various large, well-heeled, and politically-connected robotics companies could come to capture any new “Federal Robotics Commission” in very short order.

Can We Clean Up Old Messes Before Building More Bureaucracies?

Third, speaking of agencies, if it is the case that the alphabet soup collection of regulatory agencies we already have in place are not capable of handling “robotics policy” right now, can we talk about reforming them (or perhaps even getting rid of a few of them) first? Why must we just pile yet another sector-specific or technology-specific regulator on top of the many that already exist? That’s just a recipe for more red tape and potential regulatory capture. Unless you believe there is value in creating bureaucracy for the sake of creating bureaucracy, there is no excuse for not phasing out agencies that failed in their original mission, or whose mission is now obsolete, for whatever reason. This is a fundamental “good government” issue that politicians and academics of all stripes should agree on.

Calo indirectly addresses this point by noting that “we have agencies devoted to technologies already and it would be odd and anomalous to think we are done creating them.” Curiously, however, he spends no time talking about those agencies or asking whether they have done a good job. Again, the heart of Calo’s argument comes down to the assertion that another specialized, technology-specific “expert” agency is needed because there are “novel” issues associated with robotics. Well, if it is true, as Calo suggests, that we have been down this path before (and we have), and if you believe our economy or society has been made better off for it, then you need to prove it. Because the objection to creating another regulatory bureaucracy is not simply based on distaste for Big Government; it comes down to two simple questions: (1) Do these things work; and (2) Is there a better alternative?

This is where Calo’s proposal falls short. There is no effort to prove that technocratic or “scientific” bureaucracies, on net, are worth their expense (to taxpayers) or cost (to society, innovation, etc.) when compared to alternatives. Of course, I suspect this is where Calo and I might part ways regarding what metrics we would use to gauge success. I’ll save that discussion for another day and shift to what I regard as the far more serious deficiency of Calo’s proposal.

Do We Become Global Innovation Leaders Through Bureaucratic Direction?

Fourth, and most importantly, Calo does not offer any evidence to prove his contention that we need a sector-specific or technology-specific agency for robotics in order to develop or maintain America’s competitive edge in this field. Moreover, he does not acknowledge how his proposal might have the exact opposite result. Let me spend some time on this point because this is what I find most problematic about his proposal.

In his latest Brookings essay and his earlier writing about robotics, Calo keeps suggesting that we need a specialized federal agency for robotics to avoid “poor outcomes” due to the lack of “a legal and policy infrastructure for emerging technology.” He even warns us that other countries that are looking into robotics policy and regulation more seriously “will leapfrog the U.S. in innovation for the first time since the creation of steam power.”

Well, on that point, I must ask: Did America need a Federal Steam Agency to become a leader in that field? Because unless I missed something in history class, steam power developed fairly rapidly in this country without any centralized bureaucratic direction. Or how about a more recent example: Did America need a Federal Computer Commission or Federal Internet Commission to obtain or maintain a global edge in computing, the Internet, or the Digital Economy?

To the contrary, we took the EXACT OPPOSITE approach. It’s not just that no new agencies were formed to guide the development of computing or the Internet in this country. It’s that our government made a clear policy choice to break with the past by rejecting top-down, command-and-control regulation by unelected bureaucrats in some shadowy Beltway agency.

Incidentally, it was Democrats who accomplished this. While many Republicans today love to crack wise-ass comments about Al Gore and the Internet while simultaneously imagining themselves to be the great defenders of Internet freedom, the reality is that we have the Clinton Administration and one of its most liberal members — Ira Magaziner — to thank for the most blessedly “light-touch,” market-oriented innovation policy that the world has ever seen.

What did Magaziner and the Clinton Administration do? They crafted the amazing 1997 Framework for Global Electronic Commerce, a statement of the Administration’s principles and policy objectives toward the Internet and the emerging digital economy. It recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. First, “the private sector should lead. The Internet should develop as a market driven arena not a regulated industry,” the Framework recommended. “Even where collective action is necessary, governments should encourage industry self-regulation and private sector leadership where possible.” Second, “governments should avoid undue restrictions on electronic commerce” and “parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention.”

I’ve argued elsewhere that the Clinton Administration’s Framework “remains the most succinct articulation of a pro-freedom, innovation-oriented vision for cyberspace ever penned.” Of course, this followed the Administration’s earlier move to allow the full commercialization of the Internet, which was even more important. The policy disposition they established with these decisions resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. And to reiterate, they did it without any new bureaucracy.

If You Regulate “Robotics,” You End Up Regulating Computing & Networking

Incidentally, I do not see how we could create a new Federal Robotics Commission without it also becoming a de facto Federal Computing Commission. Robotics — and the many technologies and industries it already includes (driverless cars, commercial drones, the Internet of Things, etc.) — is becoming a hot policy topic, and proposals for regulation are already flying. These robotic technologies are developing on top of the building blocks of the Information Revolution: microprocessors, wireless networks, sensors, “big data,” etc.

Thus, I share Cory Doctorow’s skepticism about how one could logically separate “robotics” from these other technologies and sectors for regulatory purposes:

I am skeptical that “robot law” can be effectively separated from software law in general. … For the life of me, I can’t figure out a legal principle that would apply to the robot that wouldn’t be useful for the computer (and vice versa).

In his Brookings paper, Calo responded to Doctorow’s concern as follows:

the difference between a computer and a robot has largely to do with the latter’s embodiment. Robots do not just sense, process, and relay data. Robots are organized to act upon the world physically, or at least directly. This turns out to have strong repercussions at law, and to pose unique challenges to law and to legal institutions that computers and the Internet did not.

I find this fairly unconvincing. Just because robotic technologies have a physical embodiment does not mean their impact on society is all that much more profound than that of computing, the Internet, and digital technologies. Consider all the hand-wringing going on today in cybersecurity circles about how hacking, malware, or various other types of digital attacks could take down entire systems or economies. I’m not saying I buy all that “technopanic” talk (and here are about three dozen of my essays arguing the contrary), but the theoretical ramifications are nonetheless on par with dystopian scenarios about robotics.

The Alternative Approach

Of course, it certainly may be the case that some worst-case scenarios are worth worrying about in both cases — for robotics and computing, that is. Still, is a Federal Robotics Commission or a Federal Computing Commission really the sensible way to address those issues?

To the contrary, this is why we have a Legislative Branch! So many of the problems of our modern era of dysfunctional government are rooted in an unwise delegation of authority to administrative agencies. Far too often, congressional lawmakers delegate broad, ambiguous authority to agencies instead of facing up to the hard issues themselves. This results in waste, bloat, inefficiencies, and an endless passing of the buck.

There may very well be some serious issues raised by robotics and AI that we cannot ignore, and which may even require a little preemptive, precautionary policy. And the same goes for general computing and the Internet. But that is not a good reason to just create new bureaucracies in the hope that some set of mythical technocratic philosopher kings will ride in to save the day with their supposed greater “expertise” about these matters. Either you believe in democracy or you don’t. Running around calling for agencies and unelected bureaucrats to make all the hard choices means that “the people” have even less of a say in these matters.

Moreover, there are many other methods of dealing with robotics and the potential problems robotics might create than through the creation of new bureaucracy. The common law already handles many of the problems that both Calo and Weaver are worried about. To the extent robotic systems are involved in accidents that harm individuals or their property, product liability law will kick in.

On this point, I strongly recommend another new Brookings publication. John Villasenor’s outstanding April white paper, “Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation,” correctly argues that,

“when confronted with new, often complex, questions involving products liability, courts have generally gotten things right. … Products liability law has been highly adaptive to the many new technologies that have emerged in recent decades, and it will be quite capable of adapting to emerging autonomous vehicle technologies as the need arises.”

Thus, instead of trying to micro-manage the development of robotic technologies in an attempt to plan for every hypothetical risk scenario, policymakers should be patient while the common law evolves and liability norms adjust. Traditionally, the common law has dealt with products liability and accident compensation in an evolutionary way through a variety of mechanisms, including strict liability, negligence, design defects law, failure to warn, breach of warranty, and so on. There is no reason to think the common law will not adapt to new technological realities, including robotic technologies. (I address these and other “bottom-up” solutions in my new book.)

In the meantime, let’s exercise some humility and restraint here and avoid heavy-handed precautionary regulatory regimes or the creation of new technocratic bureaucracies. And let’s not forget that many solutions to the problems created by new robotic technologies will develop spontaneously and organically over time as individuals and institutions learn to cope and “muddle through,” as they have many times before.


Additional Reading