AI and the Law: Setting the Stage

Urs Gasser
Berkman Klein Center Collection
Jun 26, 2017

While there is reasonable hope that superhuman killer robots won’t catch us anytime soon, narrower types of AI-based technologies have started changing our daily lives: AI applications are being rolled out at an accelerated pace in schools, homes, and hospitals, with digital leaders such as high tech, telecom, and financial services among the early adopters. As anecdotal evidence suggests, AI promises enormous benefits for the social good and can improve human well-being, safety, and productivity. But it also poses significant risks for workers, developers, firms, and governments alike, and we as a society are only beginning to understand the ethical, legal, and regulatory challenges associated with AI and to develop appropriate governance models and responses.

The Revolution by Fonytas, licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

Having the privilege to contribute to some of the conversations and initiatives in this thematic context, I plan to share a series of observations, reflections, and points of view over the course of the summer, with a focus on the governance of AI. In this opening post, I share some initial thoughts on the role of law in the age of AI. Guiding themes and questions I hope to explore, here and over time, include the following: What can we expect from the legal system as we deal with both the risks and benefits of AI-based applications? How can (and should) the law approach the multi-faceted AI phenomenon? How can we prioritize among the many emerging legal and regulatory issues, and what tools are available in the toolbox of lawmakers and regulators? How might the law deal with the (potentially distributed) nature of AI applications? More fundamentally, what is the relevance of the law vis-à-vis a powerful technology such as AI? What can we learn from past cycles of technological innovation as we approach these questions? How does law interact with other forms of governance? How important is the role of law at a time when AI starts to embrace the law itself? And how can we build a learning legal system and measure progress over time?

I hope this Medium series serves as a starting point for a lively debate across disciplines, boundaries, and geographies. To be sure, what I am going to share in these articles is very much in beta and subject to revision and new insight, and I’m looking forward to hearing and learning from all of you. Let’s begin with some initial observations.

Lawmakers and regulators need to look at AI not as a homogeneous technology, but as a set of techniques and methods that will be deployed in specific and increasingly diversified applications. There is currently no generally agreed-upon definition of AI. What is important to understand from a technical perspective is that AI is not a single, homogeneous technology, but a rich set of subdisciplines, methods, and tools that bring together areas such as speech recognition, computer vision, machine translation, reasoning, attention and memory, and robotics and control. These techniques are used in a broad range of applications, spanning areas as diverse as health diagnostics, educational tutoring, autonomous driving, and sentencing in the criminal justice context, to name just a few areas of great societal importance. From a legal and regulatory perspective, the term AI is often used to describe a quality that cuts across some of these applications: the degree of autonomy of systems that impact human behavior and evolve dynamically in ways that are at times surprising even to their developers. Either way, whether one uses the more technical or the more phenomenological definition, the justification and timing of any legal or regulatory intervention, as well as the selection of governance instruments, will require careful contextual analysis in order to be technically workable and to avoid both overgeneralization and unintended consequences.

Given the breadth and scope of application, AI-based technologies are expected to trigger a myriad of legal and regulatory issues, not only at the intersection of data and algorithms, but also of infrastructures and humans. As a growing number of increasingly impactful AI technologies make their way out of research labs and turn into industry applications, legal and regulatory systems will be confronted with a multitude of issues of varying complexity that need to be addressed. Lawmakers and regulators, as well as other actors, will be affected by the pressure that AI-based applications place on the legal system (here understood as a response system), including courts, law enforcement, and lawyers, which highlights the importance of knowledge transfer and education (more on this point below). Given the (relative) speed, scale, and potential impact of AI development and deployment, lawmakers and regulators will have to prioritize among the issues to be addressed in order to ensure the quality of legal processes and outcomes, and to avoid unintended consequences of interventions. Trending issues that seem to have a relatively high priority include bias and discrimination in AI-based applications, security vulnerabilities, the privacy implications of such highly interconnected systems, conceptions of ownership and intellectual property rights over AI-created works, and questions of liability for AI systems, with intermediary liability perhaps at the forefront. While an analytical framework to categorize these legal questions is currently lacking, one might consider a layered model such as a version of the interop “cake model” developed elsewhere in order to map and cluster these emerging issues.

Gesture Recognition by Comixboy, licensed under the Creative Commons Attribution 2.5 Generic license.

When considering (or anticipating) possible responses by the law vis-à-vis AI innovation, it might be helpful to differentiate between application-specific and cross-cutting legal and regulatory issues. As noted, AI-based technologies will affect almost all areas of society. From a legal and regulatory perspective, it is important to understand that new applications and systems driven by AI will not evolve and be deployed in a vacuum. In fact, many areas where AI is expected to have the biggest impact are already heavily regulated industries; consider the transportation, health, and finance sectors. Many of the emerging legal issues around specific AI applications will need to be explored in these “sectoral” contexts. In these areas, the legal system is likely to follow traditional response patterns when dealing with technological innovation, defaulting to the application of existing norms to the new phenomenon and, where necessary, gradually reforming existing laws. Take the recently approved German regulation of self-driving cars as an example, which came in the form of an amendment to the existing Road Traffic Act. In parallel, a set of cross-cutting issues is emerging that will likely be more challenging to deal with and might require more substantive innovation within the legal system itself. Consider, for instance, questions about appropriate levels of interoperability in the AI ecosystem at the technical, data, and platform layers and among many different players, issues related to diversity and inclusion, and evolving notions of the transparency, accountability, explainability, and fairness of AI systems.

Information asymmetries and high degrees of uncertainty pose particular difficulties for the design of appropriate legal and regulatory responses to AI innovations, and they require learning systems. AI-based applications, which are typically perceived as “black boxes,” affect a significant number of people, yet relatively few people develop and understand the underlying technologies. The same information asymmetry exists between technical AI experts on the one hand and actors in the legal and regulatory systems on the other, both of whom are involved in the design of appropriate legal and regulatory regimes; this points to a significant educational and translational challenge. Further, even technical experts may disagree on certain issues the law will need to address, for instance, to what extent a given AI system can or should be explained with respect to the individual decisions it makes. These conditions of uncertainty about the available knowledge of AI technology are amplified by normative uncertainties: people and societies will need time to build consensus on the baseline values, ethics, and social norms that can guide future legislation and regulation, both of which also have to manage value trade-offs. Taken together, lawmakers and regulators have to deal with a tech environment characterized by uncertainty and complexity, paired with business dynamics that seem to reward time-to-market at all costs, which highlights the importance of creating highly adaptive and responsive legal systems that can be adjusted as new insights become available. This is not a trivial institutional challenge for the legal system and will likely require new instruments for learning and feedback loops, beyond traditional sunset clauses and periodic reviews. Approaches such as regulation 2.0, which relies on dynamic, real-time, and data-driven accountability models, might provide interesting starting points.

The responses to a variety of legal and regulatory issues across different areas of distributed applications will likely result in a complex set of sector-specific norms that vary across jurisdictions. Different legal and regulatory regimes aimed at governing the same phenomenon are of course not new and are closely linked to the idea of jurisdiction. In fact, the competition among jurisdictions and their respective regimes is often said to have positive effects by serving as a source of learning and potentially a force for a “race to the top.” However, discrepancies among legal regimes can also create barriers to harnessing the full benefits of a new technology. Examples include not only differences in law across nation states or federal and state jurisdictions, but also normative differences among different sectors. Consider, for example, the different approaches to privacy and data protection in the US versus Europe and their implications for data transfers, an autonomous vehicle crossing state boundaries, or barriers to sharing data for public health research across sectors due to diverging privacy standards. These differences might affect the application as well as the development of AI technology itself. For instance, it has been argued that relatively lax privacy standards in China have contributed to its role as a leader in facial recognition technology. In the age of AI, the creation of appropriate levels of legal interoperability (the working together of legal norms across different bodies of law, across levels in the hierarchy of norms, and among jurisdictions) is likely to become a key topic when designing next-generation laws and regulations.

Law and regulation may constrain behavior yet also act as enablers and levelers, and they are powerful tools as we aim for the development of AI for social good. In debates about the relationship between digital technology and the law, the legal system and regulation are often characterized as impediments to innovation, as a body of norms that tells people what not to do. Such a characterization of law is inadequate and unhelpful, as some of my previous research argues. In fact, law serves several different functions, among them the roles of enabler and leveler. The emerging debate about the “regulation of AI” will benefit from a more nuanced understanding of the functions of law and its interplay with innovation. Not only has the law already played an enabling role in the development of a growing AI ecosystem (consider the role of IP, such as patents and trade secrets, and of contract law in the business models of the big AI companies, or the importance of immigration law in the quest for talent), but law will also set the stage for the market entry of many AI-based applications, including autonomous vehicles and the use of AI-based technology in schools, the health sector, smart cities, and the like. Similarly, law’s performance in the AI context is not only about managing risks, but also about principled ways to unleash AI’s full benefits, particularly for the social good, which might require managing adequate levels of openness in the AI ecosystem over time. In order to serve these functions, law needs to overcome its negative reputation in large parts of the tech community, and legal scholars and practitioners play an important educational and translational role in this respect.

Innovation by Boegh, licensed under the Creative Commons Attribution 2.0 Generic license.

Law is one important approach to the governance of AI-based technologies. But lawmakers and regulators have to consider the full potential of available instruments in the governance toolbox. Over the past two decades of debate about the regulation of distributed technologies with global impact, a rough consensus has emerged in the scholarly community that a governance approach is often the most promising conceptual starting point when looking for appropriate “rules of the game” for a new technology, spanning the diverse set of norms, control mechanisms, and distributed actors that characterize the post-regulatory state. At a fundamental level, a governance approach to AI-based technologies embraces and activates a variety of modes of regulation, including technology, social norms, markets, and law, and combines these instruments within a blended governance framework. (The idea of combining different forms of regulation beyond law is not new; as applied to the information environment, it is deeply anchored in the Chicago school and was popularized by Lawrence Lessig.) From this “blended governance” perspective, the main challenge is to identify and activate the most efficient, effective, and legitimate modalities for any given issue, and to successfully orchestrate the interplay among them. A series of advanced regulatory models developed over the past decades (such as active matrix theory, polycentric governance, hybrid regulation, and mesh regulation, among others) can provide conceptual guidance on how such blended approaches might be designed and applied across multiple layers of governance. From a process perspective, AI governance will require distributed multi-stakeholder involvement, typically bringing together civil society, government, the private sector, and the technical and academic community, collaborating across the different phases of a governance lifecycle. Again, lessons regarding the promise and limitations of multi-stakeholder approaches can be drawn from other areas, including Internet governance, nanotechnology regulation, and gene drive governance, to name just a few.

In a world of advanced AI technologies and new governance approaches towards them, the law, the rule of law, and human rights remain critical bodies of norms. The previous paragraph introduced a broader governance perspective on the “regulation” (broadly defined) of issues associated with AI-based applications, characterizing the law as only one, albeit important, instrument among others. Critics argue that in such a “regulatory paradigm,” law is typically reduced to a neutral instrument for social engineering in view of certain policy goals and can be replaced or mixed with other tools depending on its effectiveness and efficiency. A relational conception of law, however, sees it as neither instrumentalist nor autonomous. Rather, such a conception highlights the normativity of law as an institutional order that guides individuals, corporations, governments, and other actors in society, ultimately aiming (according to one prominent school of thought) for justice, legal certainty, and purposiveness. Such a normative conception of law (or at least a version of it), which takes seriously the autonomy of the individual human actor, seems particularly relevant and valuable in the age of AI, where technology starts to make decisions that were previously left to the individual human driver, news reader, voter, judge, etc. A relational conception of law also sees the interaction of law and technology as co-constitutive, both in terms of design and usage, opening the door to a more productive and forward-looking conversation about the governance of AI systems. As one starting point for such a dialogue, consider the notion of society-in-the-loop. Recent initiatives such as the IEEE Global Initiative on Ethically Aligned Design further illustrate how fundamental norms embedded in law might guide the creation and design of AI in the future, and how human rights might serve as a source of AI ethics when aiming for the social good, at least in the Western hemisphere.

As AI is applied to the legal system itself, however, the rule of law might have to be re-imagined and the law re-coded in the longer run. The rise of AI leads not only to questions about the ways in which the legal system can or should regulate it in its various manifestations, but also to questions about the application of AI-based technologies to law itself. Examples include the use of AI to support the (human) application of law, for instance to improve governmental efficiency and effectiveness in the allocation of resources, or to aid auditing and law enforcement functions. More than simply offering support, emerging AI systems may also increasingly guide decisions regarding the application of law. “Adjudication by algorithms” is likely to play a role in areas where risk-based forecasts are central to the application of law. The future relationship between AI and the law is likely to become even more deeply intertwined, as demonstrated by the idea of embedding legal norms (and even human rights, see above) into AI systems by design. Implementations of such approaches might take different forms, including “hardwiring” autonomous systems so that they obey the law, or creating AI oversight programs (“AI guardians”) to watch over operational ones, as the sketch below illustrates. Finally, AI-based technologies are likely to be involved in the future creation of law, for instance through “rule-making by robots,” where machine learning meets agent-based modeling, or the vision of an AI-based “legal singularity.” At least some of these scenarios might eventually require novel approaches and a reimagination of the role of law in its many formal and procedural aspects in order to translate them into the world of AI, and as such, some of today’s laws will need to be re-coded.
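To make the “compliance by design” and “AI guardian” ideas a bit more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is a hypothetical simplification rather than a description of any real system: the names, the single encoded speed-limit rule, and the blocking behavior are all assumptions chosen for illustration. The point is simply the architecture, in which a guardian wraps an operational component and refuses to execute proposed actions that violate an encoded legal norm.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # e.g., "set_speed"
    value: float   # e.g., proposed speed in km/h

# Stand-in for a norm taken from road traffic law (hypothetical value).
SPEED_LIMIT_KMH = 50.0

def is_legal(action: Action) -> bool:
    """Return True if the proposed action complies with the encoded norm."""
    if action.kind == "set_speed":
        return action.value <= SPEED_LIMIT_KMH
    # Norms not modeled here are simply not enforced by this toy guardian.
    return True

def guardian(propose):
    """Wrap an operational system so that only compliant actions get through."""
    def supervised(*args, **kwargs):
        action = propose(*args, **kwargs)
        if not is_legal(action):
            # Block instead of executing; a real system would need far richer
            # escalation, audit, and appeal mechanisms than an exception.
            raise PermissionError(f"Blocked non-compliant action: {action}")
        return action
    return supervised

@guardian
def plan_speed(target_kmh: float) -> Action:
    """A (hypothetical) operational component proposing an action."""
    return Action(kind="set_speed", value=target_kmh)

if __name__ == "__main__":
    print(plan_speed(45.0))   # compliant: passes through the guardian
    try:
        plan_speed(80.0)      # non-compliant: blocked by the guardian
    except PermissionError as e:
        print(e)
```

Even this toy version surfaces the hard questions discussed above: someone must decide which norms get encoded, how open-textured legal language is translated into checks, and what happens when the guardian and the operational system disagree.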

Thanks to the Special Projects Berkman Klein Center summer interns for research assistance and support.
