Some starting questions around pervasive autonomous systems

Jonathan Zittrain
Berkman Klein Center Collection
May 15, 2017

As part of the new Ethics and Governance in Artificial Intelligence project anchored by the Berkman Klein Center for Internet & Society and the Media Lab, I’ve been working on defining some of the cross-cutting questions arising from the mainstreaming of applied artificial intelligence. When does being able to offload thinking and decisionmaking to an automated process enhance our freedoms, and when does it constrain them? Under what circumstances could something be autonomy-enhancing for individuals, while constraining for society at large, and vice-versa? Are there ways to return to the states of uncertainty of a previous era, or does computability now mean that we’re compelled to allocate responsibility, either individually or as a society, for that which was previously rightly seen as a matter of fate?

I’ve been thinking in particular about these three areas:

1. Predictive engines are getting better, thanks both to more data on which to cogitate and to algorithms refined for more accurate cogitation. Those predictions might create a new basis for legal responsibilities, fitting comfortably within venerable doctrines of secondary liability, such as the liability landlords may bear for the use of their properties. It’s one step from “ought implies can” to “can implies ought”: with reasonably decent knowledge of the consequences of various actions, to what extent will those possessing it, such as platform intermediaries, be said to have a responsibility to act to avoid bad consequences and bring about good ones? We see this in the realm of CVE, or “countering violent extremism,” which governments are asking services like Facebook to take up. What impact would such responsibilities have on individual autonomy? Further, predictions from AI might come without any explanation beneath them, not simply because the AI isn’t in a position to convey a theory, but because, in an important sense, there is no theory to convey. David Weinberger has recently and compellingly summarized this possibility.

2. Sensitive governmental decisions affecting individuals are often vested in a single human decisionmaker, responsible for that decision and for providing some measure of due process, including hearing pleas from affected stakeholders and explaining the outcome. Parole, probation, and bail are readily considered examples. We know those decisionmakers can engage in or fall prey to bias, and our accountability and appeal mechanisms are necessarily limited. AI, properly trained, offers the prospect of more systematically identifying bias in particular and unfairness in general. But any success may prove fertile ground for further issues, as there will be natural pressure to shift from AI as a check on the human abuse of discretion to AI as a simple substitute for its exercise. Should AI decisionmaking be resisted, and if so, on what grounds? Is there something dignity-depriving about being judged by a machine, even if the judgment is fairer or more accurate? How could AI substitution be reviewed over time, lest it permanently embed the transient sensibilities of the era and context in which it was trained?

3. Much has been made of the moral puzzles that can arise with self-driving cars, such as when and how to avoid an accident if doing so occasions its own harm. Where such tough (if rare) decisions were previously, of necessity, left to individual human drivers, subject only to after-the-fact sanction through licensure or liability rules, there can now be rules or standards loaded in advance, with answers (however unsatisfying) to such things as the classic Trolley Problem, e.g. http://moralmachine.mit.edu/. Moreover, those rules can be updated or overridden remotely by the car makers and by any who can influence or compel them, and can be made to vary by jurisdiction. When matters can less readily be left to chance, are we obliged to collectively seek and apply an ethical framework to address them? And to what extent should such a framework rest on a utilitarian basis, a democratic-process basis, or one of individual autonomy? To whom or what should our respective technologies be loyal?

These are issues that really aren’t limited to AI, despite the title of our project. They’ve been with us all along as polities have tried to create systems to channel and restrain the power of any one individual or group (think of a functioning institutional environment as an “autonomous system”), and in the creation of the corporate form, for which it makes sense to say that “Company X isn’t happy about the latest news,” even as no one person is Company X. These systems can provide continuity and facilitate ambitious long-term projects, and they can also create a sense of disempowerment and resignation among those who want to understand and change them.
