What’s the right amount of regulation for AI?

Zac Hudson
May 3, 2019

Artificial intelligence (AI) has great potential to improve many aspects of our lives, including health, connectedness, and productivity. But that promise can only be realized if AI and the companies that are developing it are allowed sufficient room to experiment and grow.

Serious contemplation of AI-focused regulation is premature. Popular films and the prognostications of some industry titans notwithstanding, artificial intelligence is still very much in its infancy. Consistent with Moore’s law, computing power has exploded in the last few decades. But the algorithmic power and insights at the heart of artificial intelligence and machine learning have developed much more slowly. Regulation, however well intended, may jeopardize or substantially slow the development of socially and economically useful artificial intelligence. And even if regulation does not halt development outright, it will inevitably shape that development in potentially limiting ways. Much as biological systems are extremely sensitive to initial conditions, complex technologies are extremely sensitive to the regulatory conditions in which they are incubated. History is replete with examples of this dynamic. The one most of us are familiar with is the United Kingdom’s Red Flag Act, which required early automobiles to be preceded by a person on foot carrying a red flag and, in doing so, stifled the nascent British auto industry. We may thus see very different AI capabilities evolve in different environments over time, depending on the prevailing regulatory regimes.

Regulation at this early stage could also make it significantly harder for companies to enter or operate in the artificial intelligence space. Lest there be any doubt, the absence of omnibus AI or data privacy laws in the United States does not mean that companies trading in AI are free from regulation. AI companies are subject to the same regulatory thicket and set of background legal norms as any other company, along with the associated compliance and risk-avoidance costs. The marginal costs associated with new AI-focused regulation could make market entry or continued operation untenable for many businesses. This is particularly true for small, growing companies with global ambitions facing a potential patchwork of regulation. Moreover, it is already very difficult to convince enterprise customers to purchase AI-driven products or services because of (often amorphous) regulatory concerns. AI-focused regulation could exacerbate many of the pre-existing difficulties that are part and parcel of operating in the AI space, thereby stifling the technology’s contribution to improving things like cancer detection, disaster impact assessment, and economic efficiency.

Another reason to abstain from regulating at this juncture is the certain uncertainty regarding the beneficiaries and victims of such regulation. The creation of unintended winners and losers is an unavoidable consequence of any effort aimed at regulating AI. That is because artificial intelligence comes in many different forms and is used in a wide variety of ways; what is necessary or advisable when regulating AI in one sector may be ill-advised in another. Put differently, what makes sense for regulating autonomous vehicles likely makes no sense at all for regulating robotic factory workers or AI applications in customer service.

Those in favor of immediate AI-impacting regulation often cite the need to guard against the technology’s abuse. That argument, however, gets things exactly backwards. Law-abiding companies that deal in AI already have substantial incentives to use the technology responsibly. The confluence of industry-specific privacy and other regulations, tort law, industry standards, shareholder expectations, and counterparty demands for guarantees and indemnification is a powerful force in shaping conduct. As a result, very real asymmetry risks could flow from any new AI-focused regulation, particularly in the security context. That combination of factors does not constrain those who are surreptitiously seeking to use AI to do harm, and these bad actors are not going to comply with whatever new AI regulations the government may choose to adopt. The upshot, then, is that to the extent regulation impedes the growth of artificial intelligence, the federal government will be handicapping legitimate users of a growing and critical technology relative to those who would use it to do harm.

To the extent that AI-focused regulation, or general privacy laws that substantially impact the operation of AI, prove unavoidable, legislators and regulators should keep several critical points in mind in shaping the regime to come.

One-size-fits-all approaches will not work. The myriad incarnations and uses of artificial intelligence leave only two workable paths: one that articulates outcome-related values and principles intended to influence behavior at a high level, and one that is granular and nuanced, the product of a substantial expenditure of time and resources. Whichever path is chosen, legislators and regulators should use the unique characteristics of artificial intelligence as their lodestar rather than borrowing from regulations crafted for different technologies. Anything but a newly designed, bespoke set of regulations is doomed to fail.

Minimize commercial impact. Every regulation comes with costs, but there are several ways those costs should be expressly managed in the AI context. Regulators should attempt to harmonize whatever rules they adopt with those that already exist, such as the EU’s General Data Protection Regulation (GDPR). They should also ensure that the U.S. regime is unitary, not a hodgepodge of state-based efforts with conflicting mandates where compliance with one risks noncompliance with another. By the same token, substantial investment must be made upfront to draft clear rules that allow businesses to reliably shape their conduct. Regulatory ambiguity is paralyzing.

Set a short timeline for reconsideration. Regulators should fashion any new set of rules so that it must be reconsidered on a predetermined and relatively short timeline. The rules of today are ill-suited to the facts of tomorrow, and as AI evolves, our regulation of it should evolve as well.

Zac Hudson is the General Counsel and Corporate Secretary at Afiniti.
