The Law as Leveler and Enabler
or: Why the AI community should stop worrying and love the law

It has become a well-worn truism to speak of the tension between regulation and innovation. In Silicon Valley, law and lawmakers are often seen as irrelevant at best, and actively harmful at worst. This view is also common in the debate surrounding Artificial Intelligence (AI). However, as I will argue below, the interplay between law and innovation is far more complex than an either/or trade-off. The law is more than so much ‘red tape’ to be overcome; it is in fact crucial in making innovation possible in the first place. Participants in the AI debate therefore need a healthy appreciation of the law’s crucial role in enabling new innovations. If they remain fixated on its constraining effects, they risk handicapping themselves in addressing one of the most profound technical disruptions of our lifetime.

How law enables innovators

At the outset, we can point towards the ‘generic’ enabling functions of law: the fundamental mechanisms that allow private actors to transact, and meaningful competition to take place. One useful framework for understanding them is provided in the 2010 article ‘The Law as Stimulus: The Role of Law in Fostering Innovative Entrepreneurship’ by Professor Viktor Mayer-Schoenberger of the Oxford Internet Institute. He distinguishes three ways in which private-sector innovators rely on the law to realize their inventions.

Firstly, the law acts as a leveler, creating space for market entry and competition where a ‘free market’ (or rather, an unregulated market) would suffocate them. Without antitrust law putting a damper on harmful cartels, monopolizing mergers and other abusive practices, markets risk sliding back into oligopolistic complacency, with little incentive or opportunity to innovate technically. This aspect may be particularly crucial for AI, given researchers’ reliance on access to large datasets, a market that is increasingly concentrated. Various scholars, including Maurice Stucke and Allen Grunes, have warned against a “rich get richer” effect in which only a small number of incumbents are able to accumulate the data necessary to compete in the AI sphere (the wonk term being “data-driven network effects”). As argued by Inge Graef in her recent PhD thesis, competition law and related policies can play a role in unlocking the potential benefits of AI by ensuring access to this data on equal terms. Another way to level the playing field is through effective open data policies or, in the more radical variant advocated by Evgeny Morozov, the creation of national data funds enabling access for all.

Secondly, the law acts as a protector, ensuring that entrepreneurs receive adequate compensation for their work through intellectual property (“IP”) rights and other mechanisms. Indeed, IP rights are absolutely central to the business models of most AI ventures. For instance, Google’s search algorithm is one of the most valuable and fiercely guarded trade secrets in the world. Of course, IP is a double-edged sword: besides encouraging competition, IP rights can also serve to prop up existing monopolies. These rules therefore require certain limitations and exceptions. Another interesting development is the trend towards open-source development and publication of AI technologies. In 2015, for instance, Google released its machine learning framework ‘TensorFlow’ under an open source license. Amazon has done the same for ‘DSSTNE’, its product recommendation engine, as have the developers of smaller frameworks such as Torch and Theano. These policies raise interesting questions about the nature of innovation in AI technologies, and the incentives that drive it, which may lead us to revisit traditional assumptions about the relationship between IP and innovation. Nevertheless, few would deny that some level of intellectual property protection is necessary to avoid market failures. Indeed, though open source projects subvert the conventional use of IP rights, the licenses used to manage them (in Google’s case, the Apache 2.0 license) still rely on the underlying IP rights for their effect.

Finally, the law acts as an enforcer, granting us the mechanisms necessary to enter into stable, reliable market transactions with relatively unknown third parties. Thanks to the laws of property and contract, and also bankruptcy and corporations, we are no longer forced to trust our business partners at their word. Instead, we can rely on the system of private law, with the state as its ultimate guarantor. The legal system, as ‘enforcer’ of private promises and obligations, is thus what enables far more efficient and flexible dealings than earlier systems based exclusively on trust and reputation. For AI ventures, as for all businesses, the law undergirds their capacity to attract investment, to hire staff and to purchase goods and services — as they are now doing at a rapid pace, with the torrent of investment in AI leading Amazon’s Jeff Bezos to declare a ‘Golden Age’ of AI.

These insights speak to the broader point that there is no such thing as a ‘free market’ in the absence of the state and its legal mechanisms. The dynamics of competition and choice that we associate with the private sector are not somehow immutable or ‘natural’, but are the product of a consciously designed legal system. The precise scope of tort liabilities or intellectual property rights, for instance, is not a universal fact set in stone, but rather a matter of choice; a parameter to be tweaked in light of changing societal and technological circumstances. Seen from this perspective, the common framing of law and regulation as governmental ‘intervention’ starts to lose meaning. Regulation is far closer to maintenance: maintenance of that glitchy, fragile, yet immensely productive machine known as the market. In legal-academic circles, these arguments have been common knowledge since the legal realist school rose to prominence in the 1930s. But it may be news to the tech leaders of Silicon Valley.

How the law adjusts to technologies

Beyond these relatively ‘generic’ mechanisms of law such as contract and property, one might also point towards specific government interventions — regulation, in the more limited sense — that have contributed to the development of AI-based applications and services. As the examples below show, proactive involvement from regulators and lawmakers may be necessary to realize a new technology’s full potential. Simply relying on good ol’ private law may lead to missed opportunities.

A prime example is the United States’ broad liability safe harbours for online intermediaries, laid down in Section 512 of the Digital Millennium Copyright Act (for copyright claims) and Section 230 of the Communications Decency Act (for most other claims). Broadly speaking, these laws exempt online services such as Facebook and Dropbox from liability for their users’ actions, allowing them to handle vast quantities of content without incurring excessive compliance costs. As noted by Anupam Chander, these laws have been crucial to the development of the internet as we know it today: “The story of Silicon Valley is not only a story of brilliant programmers in their garages, but also a legal environment specifically shaped to accommodate their creations.”

By the same token, these laws are also crucial to the development of new AI projects such as YouTube’s ContentID programme — an application that helps the platform automate the process of copyright enforcement. The system relies on deep learning methods to identify copyrighted content — a complicated task, since it should ideally be able to distinguish between non-infringing, transformative uses such as parody and critique on the one hand, and infringing copies on the other. ContentID was developed largely ‘in the wild’, building on YouTube’s vast troves of user data, and then refined iteratively through a combination of automated and human review. YouTube’s safe harbour position is an important part of this story: first, because it is what enabled YouTube’s business model in the first place, and thus its access to this user data; and second, because it allows YouTube to experiment with initially inaccurate enforcement measures without risking liability in the process. As the ContentID system improves, it may even become so effective that the copyright safe harbour is no longer needed; but YouTube would never have been able to reach that point without its current safe harbour protections.
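To make the matching task concrete, here is a minimal sketch of similarity-threshold matching, the basic pattern behind automated content identification systems of this kind. Everything in it is a simplifying assumption for illustration, not YouTube’s actual implementation: real systems fingerprint audio and video with learned models, whereas this sketch treats each work as a plain feature vector, and the function names and thresholds are invented.

```python
import numpy as np

# Hypothetical illustration: automated content identification reduced to its
# simplest form. Each reference work and each upload is represented as a
# feature vector; a learned fingerprinting model would produce these in
# practice.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_upload(upload_vec, reference_catalog,
                    match_threshold=0.95, review_threshold=0.80):
    """Compare an upload against a catalog of reference fingerprints.

    Returns 'match' for near-duplicates, 'human_review' for borderline
    cases (e.g. a parody or critique that reuses protected material), and
    'clear' otherwise. Both thresholds are illustrative assumptions.
    """
    best = max(cosine_similarity(upload_vec, ref) for ref in reference_catalog)
    if best >= match_threshold:
        return "match"          # likely an infringing copy
    if best >= review_threshold:
        return "human_review"   # possibly transformative; escalate to a person
    return "clear"              # no plausible match in the catalog

# Usage: three reference works and one near-copy, as random stand-in vectors.
rng = np.random.default_rng(0)
catalog = [rng.normal(size=128) for _ in range(3)]
upload = catalog[0] + rng.normal(scale=0.05, size=128)
print(classify_upload(upload, catalog))  # -> 'match'
```

The middle band between the two thresholds mirrors the ‘combination of automated and human review’ described above: confident matches are handled automatically, while ambiguous cases, where a parody might resemble the original, are escalated to a person.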

As AI applications expand beyond the internet into the physical world, similar ‘safe harbour’ approaches are now being considered in fields such as robotics, drone flight and self-driving cars. For instance, Matthew Scherer’s recent article in the Harvard Journal of Law & Technology argues for a certification regime, whereby robotics applications are offered immunity from various forms of tort liability provided that they are first registered and approved by a government regulator. Ryan Calo, on the other hand, has proposed an expansion of strict liability for robots as a means to speed up their implementation. Whichever course is chosen, such decisive, clear-minded interventions are often preferable to years of protracted litigation and legal uncertainty.

How the law shapes technology

Not only is the law essential for creating competitive, innovative markets; it can also play a crucial role in creating innovation-friendly technical environments. As argued by Lawrence Lessig in his seminal work Code, the design of our technical systems can be as powerful a means of regulation as conventional laws — a fact that takes on particular salience in the tech-dominated economy of the 21st century. It is no surprise, then, that our technical systems can be designed in ways that either encourage innovation or stifle it. Jonathan Zittrain describes this quality of openness to innovation as ‘generativity’: ‘a technology’s overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences’. In a similar vein, Yochai Benkler developed his ‘layer model of regulation’ to analyze the openness of the digital ecosystem.

Central to all these theories is the basic insight that our technological landscape is not merely a product of clever engineering decisions, but is also shaped by legal interventions (and decisions not to intervene, one might add). Examples abound, from the FCC’s decision to permit customized phone devices (the famous ‘Carterfone’ case), to interoperability exemptions under trademark law, and unbundling requirements for telecommunications networks. Even if such measures constrain the technical activities of a few, they have the net effect of creating a more inclusive ecosystem for a broader set of actors.

As illustrated by the example of ContentID, one of the keys to enabling AI research is to broaden access to relevant data. As Neil Lawrence, head of Machine Learning at Amazon, writes on his blog, ‘progress is driven far more by the availability of data than an improvement in algorithms.’ In this light, one policy that stands to benefit AI researchers is the research exception in the General Data Protection Regulation, which permits the processing of sensitive personal data (such as medical or political data) if done for ‘statistical purposes’. Another is the European Commission’s new proposal for a ‘text and data mining’ exception in copyright law, which would allow researchers to collect and (re-)produce third-party content for purposes of AI research, paving the way for new forms of data collection. As illustrated by the case of the facial dataset scraped from 40,000 Tinder profiles, scraping can be a highly effective means of building AI datasets, yet one that also raises data privacy concerns.

Of course, legal intervention is not an unmitigated good; it can either open systems up or shut them down. The challenge lies in identifying those strategic points of intervention where a difference can be made. In the context of AI, it seems likely that issues of interoperability and standardization will be amongst the first to arise. Much may hinge on the advances made through self-regulatory efforts such as the Partnership on AI and AI Now; depending on their outputs, more proactive state involvement could be either a blessing or a curse.

Towards a proactive regulatory agenda

To sum up: the law is an essential tool for enabling innovation. It provides the basic conditions for market-driven innovation by enabling transactions, leveling competition and protecting investments. It can be adapted to provide certainty and reduce risk for new technologies. And it can shape our technical environment to create innovation-friendly ecosystems and architectures. Understanding these functions is a first step towards an adequate regulatory response. Without being hasty, we should realize that waiting has an opportunity cost of its own; proactive, timely action can be crucial in setting AI-based industries on the right path. To this end, one can already draw on a burgeoning scholarship — including the exciting work of the Berkman Klein Center — outlining reforms for an AI-compatible legal system.

This article was written during my summer internship at the Berkman Klein Center, working with Urs Gasser’s special projects team. Many thanks to Urs, Alba and the Berkterns for their help!
