Highlights from #WeRobot2017

This past weekend the latest iteration of the We Robot conference, We Robot 2017, was hosted by the Information Society Project at Yale Law School in New Haven, CT. While I was not able to attend, I participated remotely by tuning into the live video stream (now archived and available) alongside the active #WeRobot2017 Twitter stream.

With a focus on robotics law and policy, the papers presented raised a wide range of pertinent questions about the role of robots and automation in our society, and the impact this will have on law, regulation, and public policy.

What’s great about conferences like this is that you get a preview of what these researchers are currently working on. With that said, the papers noted below have not yet been published, and the authors generally ask that you request their permission before citing or circulating them. These are therefore just highlights — notes on work currently being done in the field — and should be regarded as such.

I’m also grateful for the fantastic Twitter coverage of the event by participants, in particular Amanda Levendowski. I’ll be highlighting some of her (and others’) insights below.

Robot Referees and the Automation of Enforcement

Sport is an area in which the automation of enforcement is already happening, and it provides an accessible example for exploring how this development affects the broader system and society.

Sports offer us a focused reflection of social and political life: the issues of the day play out in the context of the football field, the tennis court, and the golf course. Though we often take note of how social issues — particularly political conflicts — inflect play and affect players, we less commonly use sports as a lens through which to explore broader political debates. Technological officiating offers us the opportunity to do so: by examining contention around the use of robot refs, we may obtain greater purchase on the social and cultural issues impacting automation policy more broadly. — Sporting Chances: Robot Referees & the Automation of Enforcement (DRAFT)

Taking Explanation Seriously in Law and Machine Learning

When examining the role and responsibility of robots and automated systems we depend upon the ability to understand and explain what these systems do. The danger of our current “Black Box Society” is that we often cannot explain or understand why these systems come to the conclusions that they do. This raises the challenge of how to regulate inscrutable systems:

From scholars seeking to “unlock the black box” to regulations requiring “meaningful information about the logic” of automated decisions, recent discussions of machine learning — and algorithms more generally — have turned toward a call for explanation. Champions of explanation charge that algorithms must reveal their basis for decision-making and account for their determinations. But by focusing on explanation as an end in itself, rather than a means to a particular end, critics risk demanding the wrong thing. What one wants to understand when dealing with algorithms — and why one would want to understand it — can vary widely. Often, the goals that motivate calls for explanation could be better served by other means. Worse, ensuring that algorithms can be explained in ways that are understandable to humans may come at the cost of another value that scholars, regulators, and critics hold dear: accuracy. Reducing the complexity of a model, for example, may render it more readily intelligible, but also hamper its performance. For explanation to serve its intended purpose and to find its appropriate place among a number of competing values, its champions need to consider what they hope it to achieve and what explanations actually offer. — Taking Explanation Seriously in Law and Machine Learning (DRAFT)
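The trade-off the excerpt describes — a simpler model is easier to explain but may perform worse — can be made concrete with a toy sketch. Everything below is illustrative and not from the paper: a one-line threshold rule that anyone can state in a sentence, versus a nearest-neighbour model whose only "explanation" for a decision is "this case resembles that past case", evaluated on a made-up dataset whose true class boundary is diagonal.

```python
import math

# Hypothetical training/test data: ((feature_0, feature_1), label).
# The underlying concept is roughly "label 1 when feature_0 + feature_1 > 1",
# a diagonal boundary no single-feature threshold can capture exactly.
train = [((0.9, 0.1), 0), ((0.8, 0.1), 0), ((0.2, 0.3), 0),
         ((0.9, 0.6), 1), ((0.4, 0.9), 1), ((0.7, 0.8), 1), ((0.9, 0.4), 1)]
test  = [((0.3, 0.6), 0), ((0.6, 0.7), 1), ((0.85, 0.3), 1), ((0.1, 0.2), 0)]

def simple_rule(x):
    """'Explainable' model: one threshold on one feature."""
    return 1 if x[1] > 0.5 else 0

def nearest_neighbour(x):
    """Less explainable model: copy the label of the closest training point."""
    return min(train, key=lambda p: math.dist(p[0], x))[1]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print("simple rule:", accuracy(simple_rule, test))        # 0.5
print("1-NN       :", accuracy(nearest_neighbour, test))  # 1.0
```

The point is not that nearest-neighbour models are good, only that on this toy data the model you can summarise in a sentence loses accuracy that the harder-to-explain model retains — the tension the paper asks regulators to weigh.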

Robot Criminals, Judges, and Lawyers: A Discussion of Robotics and the Justice System

This panel was a whirlwind of profound and provocative concepts and arguments, ranging from how to deal with robots that break the law, to whether robot lawyers will make the law more accessible, to the impact automated law enforcement may have on society. Here are a few excerpts from the papers, followed by some of the Twitter coverage:

On Robot Criminals

The question of whether we should recognize robots as legal persons for the purpose of criminal law is likely to attract greater academic attention as smart robots become more integrated in our daily lives. This article seeks to show that smart robots can be treated as moral agents pursuant to several theories of moral responsibility. Moreover, the utility of imposing criminal liability on robots might outweigh its disutility under appropriate circumstances, in particular, where a robot is entrusted with the task of making moral decisions and where such tasks are sometimes performed by human beings. This conclusion raises many interesting issues for further research. For example, what types of decisionmaking powers can or should be delegated to robots? Which persons should be allowed to influence the algorithms of a smart robot? What types of duties (or rights) should be borne by robots that serve different social functions? — Robot Criminals (DRAFT)

On Robot Lawyers

Overturning parking tickets, improving lawyer efficiency, and reducing costs for law firm clients is just the beginning of AI’s potential in the legal profession. AI has the ability to expand access to legal services to parts of society that have historically been shut out. The demand for AI in the law is great, and the potential benefits are undeniable. However, AI’s transformation of the legal profession will not be without practical, moral, and ethical challenges. Because the future of legal services is one in which lawyers, AI services, and third parties will all likely be involved at some point in a large majority of cases, the legal profession must take a comprehensive approach to ensuring that AI is integrated responsibly, morally, and ethically into all forms of legal services. — Robot Lawyers (DRAFT)

On Automated Law Enforcement

In this piece, we step back and look at how automated enforcement might affect the law and its administration, apart from concerns about bias. We identify three potential problems. The first — which we call the “ontological hurdle” — has to do with the effects of automation on the law’s character as law. The second and third — which we call the “evolutionary hurdle” and “distributive hurdle” — point to its effects on the normative quality of the law (i.e., on whether our laws are good or just). — Three Normative Hurdles for Automated Law Enforcement (DRAFT)

The Effects of AI on Labour Markets

There’s considerable anxiety in society, as expressed via the news media, regarding the impact automation and artificial intelligence will have upon the workforce. This paper provides a great overview of both the discussion around what this may look like and, more importantly, possible policy responses such as a universal basic income (UBI) and a “robot tax”.

Discussions on how to face problems regarding the changes within labor markets due to technology have so far been led by economists, who have mostly suggested a change within the social security system via the introduction of an unconditional basic income (UBI). Since Finland launched an experiment which examines the introduction of a UBI as a possible solution to these changes at the beginning of 2017, the first part of this paper analyses positive as well as negative aspects of the Finnish trial. Furthermore, certain legal aspects will be pointed out that generally have to be considered by national governments when introducing a UBI. The second part of the paper focuses on possible different designs of the so-called “Robot-tax” which recently has been one of the central point of debate in relation to the overtaking of typically human jobs by AI and their impact on state finances. — The effects of artificial intelligence on labor markets — A critical analysis of solution models from a tax law and social security law perspective (DRAFT)

Life, Liberty, and Trade Secrets

A growing issue when it comes to the role of automation in the criminal justice system is transparency: whether participants in the system will retain the right to understand how it works and why its decisions are what they are. This particular paper addresses the issue head on:

From policing to evidence to parole, data-driven algorithmic systems and other automated software programs are being adopted throughout the criminal justice system. The developers of these technologies often claim that the details about how the programs work are trade secrets and, as a result, cannot be disclosed in criminal cases. This Article turns to evidence law to examine the conflict between transparency and trade secrecy in the criminal justice system. It is the first comprehensive account of trade secret evidence in criminal cases. I argue that recognizing a trade secrets evidentiary privilege in criminal proceedings is harmful, ahistorical, and unnecessary. Withholding information from the accused because it is a trade secret mischaracterizes due process as a business competition. — Life, Liberty, and Trade Secrets: Intellectual Property In The Criminal Justice System (DRAFT)

Robots’ Place in the World

The final panel of day one of the conference was a great discussion and summary of some of the issues and impact that automation, robots, and AI may have on society, especially the issue of human rights:

Drawing together the increasing inadequacies of the contemporary human rights regime, the advantages of retaining and reorienting rights-based mechanisms, and the growing gap in control and responsibility introduced by robotics and AI, the need for developing a new human rights regime to ensure continued and sustained protections to the human being cannot be made more clearly. — A New Human Rights Regime to Address Robotics and Artificial Intelligence (DRAFT)

Feminist Perspectives on Drone Regulation

In recent years, the impact of drones on women, and in particular, women’s privacy, has sometimes gained sensational attention in popular discussions, from spying on sunbathing women, to delivering abortion pills to women who otherwise lack access. Nevertheless, the ways in which drone technology can enhance or undermine women’s privacy, and more specifically, the role the law might play in influencing this dynamic, has not yet received significant academic attention. This paper proposes to do just that. It takes a step back from sensationalized media stories to consider drone privacy issues through a gendered lens, permitting further analysis of the current North American approach to drone regulation. — Beyond Airspace Safety: Feminist Perspectives On Drone Regulation And Privacy In Public (DRAFT)

Automation and Regulation

Automation is placing stress on traditional regulators and will continue to do so. This panel addressed some of the issues that arise when contemplating how to regulate robots.

Haptic Passwords

During lunch on the second day of the conference there was a demonstration by Howard Chizeck of research he’s been conducting on the concept of haptic passwords. A haptic password is a unique identifier derived from the way in which you write your signature (or tap on a screen). It was presented as an alternative to password-based authentication, although I suspect it has far more applications than just passwords; examples cited included online voting, transactions, and other exchanges that involve trust.
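A rough sketch of the general idea — with entirely hypothetical feature choices and numbers, not Chizeck’s actual method — is to reduce each signing attempt to a small feature vector (say, per-stroke duration and average pressure) and accept an attempt only when it falls close to the template captured at enrollment:

```python
import math

def extract_features(strokes):
    """Flatten per-stroke (duration, pressure samples) into one feature vector.

    Each stroke is (duration_seconds, [pressure samples]); we keep the
    duration and the mean pressure per stroke. Purely illustrative features.
    """
    feats = []
    for duration, pressures in strokes:
        feats.append(duration)
        feats.append(sum(pressures) / len(pressures))
    return feats

def matches(enrolled, attempt, threshold=0.15):
    """Accept when the attempt is within a Euclidean distance of the template."""
    return math.dist(enrolled, attempt) <= threshold

# Hypothetical two-stroke signatures: an enrolled template, a genuine retry,
# and a forgery that gets the shape right but not the timing or pressure.
enrolled = extract_features([(0.42, [0.60, 0.70, 0.65]), (0.31, [0.80, 0.75])])
genuine  = extract_features([(0.44, [0.62, 0.68, 0.66]), (0.30, [0.78, 0.76])])
forgery  = extract_features([(0.80, [0.30, 0.35, 0.40]), (0.55, [0.50, 0.45])])

print(matches(enrolled, genuine))  # True  -- close in timing and pressure
print(matches(enrolled, forgery))  # False -- far from the template
```

The design choice doing the work here is the same one behind any biometric: the “secret” is not what you write but how you write it, so a forger who copies the visual signature still misses the dynamics.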

The Governance of Autonomous Weapons Systems

While automation and robots evoke anxiety for many people in general, the automation of weapons and warfare may be one of the largest areas of concern. That made this session of particular interest, given the challenges of regulating the rise of automated warfare.

In this paper, I analyzed the role of private technology companies in the governance of autonomous weapon systems. By highlighting the dual-use character of technologies that enable increasing degrees of autonomy in weapon systems, I demonstrated the limitations of established arms control regimes and the regulatory capacity of states. These limitations and the progressing technology development, create a pressing need to consider the potential of other actors to contribute to the regulation of autonomous weapon systems. — The role of the private sector in the governance of autonomous weapon systems (DRAFT)

Robots and Regulation

This was another lightning panel that combined a range of different papers, each touching on how robots are regulated, algorithmic contracts, or how law shapes the development and use of robots.

Robots and Privacy

Robots and Liability

Robots, Copyright and Bias

Innovative Solutions to Regulating AI

The final panel focused both on how to regulate artificial intelligence and on how to assign fault in autonomous systems. It also ended up stitching together threads from the conference as a whole. The paper on innovative solutions to regulating AI provided a comprehensive framework within which to discuss various regulatory scenarios. The conclusion also offered a great summary of the challenge ahead:

Because regulators do not have the expertise, if we are ever to ensure AI is developed in a way that is beneficial for humanity, developers must acknowledge both their social obligation to share information (be transparent and accountable) with others, and critical importance of collaborations with thinkers from other disciplines. This is where we can go back to DeepMind — this is such a great example of developers building accountability into the system — we need to encourage this. This cannot be done only by regulators, but must be multidisciplinary and multi-stakeholder: often, developers themselves don’t know the right questions to ask. We need to empower civil society and researchers to raise new questions. We also need to empower research and black-box testing for the times when we know we are not going to get straight answers from developers for a variety of commercial and other reasons. We can and probably should also get better at regulating in the absence of perfect information. Risk-based approaches can help regulators identify where to spend their energy. — Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence (DRAFT)
