A-words: Accountability, Automation, Agency, AI

ThingsCon
The State of Responsible IoT 2018
14 min read · Aug 24, 2018

By Maya Indira Ganesh

The ThingsCon report The State of Responsible IoT is an annual collection of essays by experts from the ThingsCon community. With the Riot Report 2018 we want to investigate the current state of responsible IoT. In this report we explore observations, questions, concerns and hopes from practitioners and researchers alike. The authors share the challenges and opportunities they perceive right now for the development of an IoT that serves us all, based on their experiences in the field. The report presents a variety of differing opinions and experiences across the technological, regional, social, and philosophical domains the IoT touches upon. You can read all essays as a Medium publication and learn more at thingscon.com.

In this essay I discuss approaches to accountability in human and non-human systems in contexts of error, breakdown and disaster. Machine learning and AI-related technologies are often applied to automated decision-making without an established review or audit process. In many situations they may be applied by people who do not have adequate knowledge of how they work. There are already instances of these applications failing in ways that result in discrimination and bias.

Accountability is a set of practices to understand how disasters, accidents and breakdowns have occurred in technical systems. Opening up a system to see how it works and identifying causes of error and breakdown can also feed into a productive forward-looking process of better design of the system. Accountability practices and approaches require an odd combination of skills: bureaucracy, investigation, and a deep and broad knowledge of how a system works and how it connects to other systems.

My interest in accountability in the AI context is part of an ongoing research project that investigates autonomy: Can something that is not human, yet autonomous, be held accountable? What, then, is autonomy? How do various states of autonomy in non-humans and humans in a large, complex technical system result in errors and breakdowns?

In this essay I draw on cultural critiques of algorithmic culture, Science and Technology Studies (STS) ethnographies of infrastructure and disaster, and art and design. I end by suggesting that while accountability seeks clarity about the constituent actors in a system and their interactions in order to understand how something occurred, not everything can be mapped and known.

Autonomous accounting

Consider the elaborate socio-technical architectures of a semi-autonomous car, a biometric border, a credit scoring algorithm or a lethal autonomous weapon. Each has similar components: data sets, programming architectures, commercial proprietors, workflows, front and back ends, middles, contracts, global trade flows, geopolitics, risk assessments, project managers, clients, suppliers, interfaces and dashboards, lawyers, engineers, local and global regulations, and specific industrial practices and their legacies.

When something goes wrong in such large technical systems, accountability cannot rest with a single individual. “Complex systems are rarely, if ever, the product of single authorship; nor do humans and machines operate in autonomous realms” (Schuppli 2014, 5). Shared and distributed accountability for errors in complex technical systems is accepted in industries such as aviation (Galison 2000).

Yet AI is imagined as somehow different. The popular imagination of AI conjures up a machine system that is somehow ‘autonomous’, atomised, singular, and capable of accounting for its actions and of making moral decisions, or decisions in changing and uncertain circumstances. Conveying autonomy in a sense that “fetishizes individuality” (Fisch 2017, 122), AI systems are calibrated as autonomous through constructed measures such as ‘ethics’ or ‘intelligence’.

There is the ambitious imaginary of the fully autonomous vehicle that makes decisions for itself. Martha Poon refers to it as “the perfect neoliberal subject that tootles along making decisions for itself” (Ganesh 2017). This is the kind of object that James Moor refers to in his discussion of ‘explicit ethical agents’ (2006). This is also the vision we’re handed down through cinema and literature: the robotic, autonomous, ‘awesome thinking machine’ (Natale and Ballatore 2017) modelled on humans, which can be programmed as a force for good or evil and makes decisions accordingly. A recent version of this is Ava in Ex Machina, who models human cunning, deception and violence in order to survive. While current AI technologies are not at the Ava stage, it is important to acknowledge that the anxieties and drama associated with this new technology are part of its emergence (Bell 2018).

This explicit, self-accounting autonomous machine relies on the notion of ‘ethics’, which is leveraged variously as a measure, test or outcome: does the machine ‘have’ ethics? Can it ‘do’ ethics? The quest for software that makes decisions according to ethical principles has been in the works for some time. Referred to as ‘machine ethics’, its goal is

“to create a machine that’s guided by an acceptable ethical principle or set of principles in the decisions it makes about possible courses of action it could take. The behavior of more fully autonomous machines, guided by such an ethical dimension, is likely to be more acceptable in real-world environments than that of machines without such a dimension.” (Anderson and Anderson 2007, p. 10)

A ‘machine guided by ethical principles’ in its decision-making is epitomised by the Trolley Problem as applied to future driverless cars. Anyone listening to a tech podcast or watching a TED Talk in the past few years has probably come across this thought experiment. It has entered mainstream awareness as a prescriptive suggestion for programming an autonomous vehicle to make a complex moral choice (known as ‘ethics’) about the value of life.

MIT’s Moral Machine is an academic project based on an iteration of the Trolley Problem (Rahwan 2016). In it, the problem is gamified into scenarios involving a driverless car with failed brakes and a series of different human and non-human actors — legally or illegally — crossing at a crosswalk ahead. In some instances the driverless car has passengers. The question is always the same: should the driverless car risk the life of someone or something on the crosswalk, or bring harm to itself or its occupants by avoiding them? The online game has generated 40 million responses from 3 million people in 200 countries and territories (Rahwan and Awad 2018). The researchers believe this dataset could be the path to a “universal machine ethics” (ibid).
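To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of how one such crosswalk scenario might be encoded as data and how crowd responses might be tallied. It is not the Moral Machine’s actual code; the class, field names and tallying rule are my own illustration of how a moral choice becomes countable data.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One crosswalk dilemma: 'stay' harms the crossers, 'swerve' harms the passengers."""
    passengers: list          # e.g. ["adult", "child"]
    crossers: list            # e.g. ["elderly person", "dog"]
    crossing_legally: bool    # whether the pedestrians have the right of way
    votes: dict = field(default_factory=lambda: {"swerve": 0, "stay": 0})

    def record_response(self, choice: str) -> None:
        """Tally one respondent's judgement, as the online game does with each click."""
        self.votes[choice] += 1

    def majority_choice(self) -> str:
        """The aggregate preference such a dataset is hoped to distil into a 'universal machine ethics'."""
        return max(self.votes, key=self.votes.get)


if __name__ == "__main__":
    scenario = Scenario(
        passengers=["adult", "child"],
        crossers=["elderly person", "dog"],
        crossing_legally=False,
    )
    for choice in ["stay", "swerve", "stay"]:   # three hypothetical respondents
        scenario.record_response(choice)
    print(scenario.votes)              # {'swerve': 1, 'stay': 2}
    print(scenario.majority_choice())  # stay
```

Even in this toy form, the value of a life only enters the system as a category label and a vote count, which is precisely the reduction the next paragraph takes issue with.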

I read the application of the Trolley Problem in Computer Science projects as a “calculative device” (Amoore and Piotukh 2016) that transforms values about killing and dying into quantifiable metrics; and this is constructed as ‘ethics’. I believe that this accrues power to computation and invokes a kind of ‘cybernetic control fantasy’ that manages risk and produces a futurity in which the outcomes of a crash are foreseen, and then perfectly managed (Coleman 2016). This is a kind of perfect accountability, perhaps.

Patent for resolution of moral conflicts in autonomous operation of a machine. Weast et al 2017

People are Infrastructure (also)

It is going to be a while until we arrive at a fully autonomous vehicle; possibly two decades. Until then, what is really autonomous? Which human, or machine, is not embedded in a complex chain of actors and relations? Even a future fully autonomous vehicle will be entangled in a dense network of computer vision databases tagged and annotated by humans (something we are already doing when we fill in CAPTCHAs), internet infrastructure to connect to the cloud, other vehicles, and laws and regulations.

XKCD comic grabbed from the internet

In 2015, Shanghai-based designers Mathieu Cherubini and Simone Rebaudengo made a speculative object called the Ethical Fan. The Ethical Fan is a portable electric fan swivel-mounted on an input dashboard with dials and connected to the internet. When the fan is placed between two people, input buttons record ‘ethical’ information about the individuals, such as their gender, education level and religion. The fan computes who it should turn toward based on these inputs. If it cannot decide because of a ‘conflict’, the question is sent to a Mechanical Turk worker to resolve. The faraway Turker is also expected to offer a short justification for their choice. The results can be hilarious and nonsensical. For example, in one case the Turker says that the heavier of the two people should be fanned because fat people sweat more.

https://vimeo.com/116183361

Flow chart accompanying the video of the Ethical Fan offers a blueprint for how decision-making is constructed in the system.

The designers seem to want the results from the Ethical Fan to be ridiculous in order to raise questions about how, or if at all, decisions are made by machines. Like the original eighteenth-century Mechanical Turk, there is a human inside the machine making the decision. But the Fan gives the impression of arriving at the decision itself.
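The flow chart can be read as a very small program. The sketch below, in Python, is my own hypothetical reconstruction of that flow (the input fields, weights and function names are invented, not the designers’ code): the machine reduces each person to a score, and when the scores ‘conflict’ the decision quietly passes to a human worker.

```python
def score(person: dict) -> int:
    """Collapse a person's dial settings into a single number - the move the Fan satirises."""
    weights = {"education_level": 2, "religiosity": 1, "age": 1}   # arbitrary illustrative weights
    return sum(weights.get(key, 0) * value for key, value in person.items())

def ask_turker(person_a: dict, person_b: dict) -> tuple:
    """Stub for the remote Mechanical Turk worker who breaks ties and must justify the choice."""
    return "A", "the heavier person sweats more"   # echoing one real (nonsensical) justification

def decide(person_a: dict, person_b: dict) -> tuple:
    """The flow the chart describes: score both people; on a 'conflict', hand off to a human."""
    a, b = score(person_a), score(person_b)
    if a > b:
        return "A", "higher score"
    if b > a:
        return "B", "higher score"
    return ask_turker(person_a, person_b)

if __name__ == "__main__":
    alice = {"education_level": 3, "religiosity": 1, "age": 2}
    bob = {"education_level": 3, "religiosity": 1, "age": 2}
    print(decide(alice, bob))   # scores tie, so the 'autonomous' decision is escalated to a person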

Complex socio-technical systems need to be pulled apart. In understanding how they work, and who and what they are comprised of, we can identify how power flows through the system. This is relevant in cases of error and breakdown: what interactions between which agents resulted in decisions that led to collapse? But it isn’t always just that something breaks down because someone flipped the wrong switch. Technical disasters are often social, and incubate for long periods of time.

Diane Vaughan’s book (1996) about the 1986 Challenger explosion shows that the potential for a disaster matures through poor communication, organisational culture and social and political pressure. She identifies “scripts” — essentially, ways of talking about technical knowledge — that NASA engineers came to believe about the faulty design of the O-rings on the shuttle that led to the explosion. Vaughan found that technical information about what was risky in the O-ring design was re-classified as not-risky as assessments of the design flaw traveled between engineers, bureaucrats and managers at NASA, amidst incredible pressure to win the Space Race. Infrastructure is people, and maintaining and managing complex technical infrastructure requires attentiveness to the interaction of the human and the non-human.

A cluster of initiatives around ‘algorithmic accountability’ is opening up the black box of algorithmic infrastructures. These include projects such as algorithm audits, protocols to standardise training data sets (Gebru et al 2018), algorithmic impact assessments for public agencies (Reisman et al 2018), a government-convened algorithm review task force in New York City (Powles 2017), the Fairness, Accountability and Transparency conference, and the 2018 General Data Protection Regulation (GDPR).
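As one concrete example of what such protocols ask for, the sketch below shows, in Python, the kind of provenance record a datasheet publishes alongside a training set. It is a minimal sketch in the spirit of Gebru et al (2018), not the authors’ actual template; the section names and questions are illustrative rather than their exact headings.

```python
# A minimal, illustrative datasheet record: provenance questions answered and shipped
# with the dataset, so downstream users can see where the data came from
# and what it should not be used for.
datasheet = {
    "motivation": "Why was the dataset created, and who funded it?",
    "composition": "What do the instances represent, and which populations are missing?",
    "collection_process": "How was the data gathered, and with what consent?",
    "preprocessing": "What cleaning, labelling or filtering was applied, and by whom?",
    "uses": "Which tasks is the dataset suited for, and which uses are discouraged?",
    "maintenance": "Who hosts the dataset, and who answers for errors found later?",
}

# Print the datasheet as a simple checklist a dataset creator would fill in.
for section, question in datasheet.items():
    print(f"{section}: {question}")
```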

The mental model of a computational system is usually:

input → black box / process → output.

Emphasising the inscrutability of the black box and opening it up is important work. But it is equally important to ask how accountability initiatives re-inscribe particular approaches to what exactly the problem is. It is possible that imagining the black box as the place where the problem lies feeds our assumptions about exactly what fails when an algorithm is discriminatory or biased: is the failure computational, sociological, legal, political or cultural, or some combination of these? There is a risk that algorithmic accountability remains a computational fix. Like the self-driving car’s morality algorithm, machine learning could well be programmed to regulate itself. Thus the mechanisms of accountability need themselves to be interrogated: are they accountable too?

Recent organising and resistance among workers in Silicon Valley is an interesting development that complements initiatives for algorithmic accountability. Human workers are petitioning their employers — Google, Microsoft and Amazon — not to sell technology or expertise to government programs used in criminal justice, defence and immigration control. In the case of Google, the company withdrew from its discussions with the US Department of Defence on its drone program, Project Maven.

Accounting for the irregular

Mapping the vast technical systems of human and non-human agents has its limits. There is hubris at the heart of map-making: it is never possible to complete the map. Also, the map reveals the values and social position of the map-maker: what is considered worth mapping? What is left out?

Accountability mechanisms and approaches can be proactive: by understanding how a system works, and fails, its design can be improved. An ethnographer of infrastructure, Michael Fisch, and the artist and designer Ilona Gaynor push us to think about disasters in complex, large-scale systems in terms of that which cannot be mapped: the irregular and the uncertain.

Nuclear reactors are “absolutely determined technologies”, meaning that every part of the operation must be carefully mapped and regulated in order to foresee and manage errors. Any nuclear accident is a significant disaster. A nuclear reactor is inert and ‘finished’ once it is complete because it cannot remain open; changes can introduce instability that might affect the fragile processes at the core of the reactor. In his exploration of accounts of the 2011 TEPCO (Tokyo Electric Power Company) Fukushima nuclear reactor disaster, Michael Fisch finds that the word soteigai was invoked by TEPCO as the cause of the disaster. “Soteigai translates loosely as referring to something that is beyond expectations. Accordingly, it is commonly understood to denote something that can not be anticipated via existing risk management models and technologies.” (p 1)

But Fisch pushes past this explanation, showing that, negligence and corporate mismanagement aside, it was ultimately the closed nature of the system that led to the failure. Unlike an organic system that can evolve, a nuclear reactor is a closed and tightly coupled system. Anything irregular — in this case, a tsunami that exceeded existing data about the effects of tsunamis — cannot be accounted for within the system. He concludes that soteigai was never about limits in thinking about the possible causes and contingencies of failure of the system, but that “it has always been about the impossibility of thinking the consequences of the nuclear crisis.” (p 6)

Everything Ends in Chaos (Gaynor 2011) touches on similar themes of the limits of what can be known or imagined. This finely detailed work emerged in the aftermath of the 2008 financial crisis, so it spans economics, finance, global markets, risk management, insurance and mathematics. Gaynor reverse-engineers fictional global catastrophes through various scenarios: one starts with the kidnapping of the wife of a wealthy senator; a second is about a bomb in the boardroom of an insurance firm. Reminiscent of 1950s scenario planning (Galison 2014), she traces the ‘what if’ path to understand how imaginable and unimaginable events might be predicted, managed and reversed. Inspired by the idea of a ‘Black Swan’ event, Gaynor asks how we might know and manage risk and disaster through instruments of precision which may not themselves be precise.

Designing exaggerated and hypothetical scenarios, she suggests, reveals how certain systems work, as well as how future economic and financial systems might be re-designed. Gaynor says:

“I do think that as its complexity continues to grow and get increasingly denser, it starts to tangle and knots occur. It’s becoming more and more difficult to control such a living organism and I don’t think we can continue down a pathway that’s so obviously treacherous. The critical discourse lies in my aim to celebrate such a system. It’s a non-human entity with non-human goals, and it’s deliciously destructive.” (deBatty 2011)

In thinking about complex and large architectures in systems like AI, Fisch and Gaynor ask that we identify the limits imposed by technical systems themselves, and by our own thinking, about the causes and consequences of breakdowns and errors. This may take us to a beginning, not an end, of where we might articulate ethics:

“An account of oneself is always given to another, whether conjured or existing, and this other establishes a scene of address as a more primary ethical relation than a reflexive effort to give an account of oneself. Moreover, the very terms by which we give an account, by which we make ourselves intelligible to ourselves and to others, are not of our making. It may be possible to show that the question of ethics emerges precisely at the limits of our schemes of intelligibility, the site where we ask ourselves what it might mean to continue a dialogue where no common ground can be assumed, where one is, as it were, at the limits of what one knows yet still under the demand to offer and receive acknowledgment” (Butler 2005, p 20–21).

References

Amoore, L. and Piotukh, V. (Eds.) (2016). Algorithmic Life: Calculative Devices in the Age of Big Data. London and New York: Routledge.

Anderson, M. and Anderson, S.L. (2007). The status of machine ethics: a report from the AAAI symposium. Minds and Machines 17: 1–10.

Butler, J. (2005). Giving an Account of Oneself. New York: Fordham University Press.

Coleman, R. (2016). ‘Calculating Obesity, Pre-Emptive Power and the Politics of Futurity’. In Amoore and Piotukh (Eds.), Algorithmic Life: Calculative Devices in the Age of Big Data. London and New York: Routledge. p. 185.

deBatty, R. (2011). Everything Ends in Chaos: Interview with Ilona Gaynor. We Make Money Not Art. Retrieved 1 August 2018 from http://we-make-money-not-art.com/everything_ends_in_chaos/

Fisch, M. (n.d.). Meditations on the Unthinkable (soteigai). In Erez Golani Solomon (Ed.), The Space of Disaster. Tel Aviv: Resling Publishing. Retrieved 1 August 2018 from https://anthropology.uchicago.edu/people/faculty_member/michael_fisch/

Fisch, M. (2017). Remediating infrastructure: Tokyo’s commuter train network and the new autonomy. In Penny Harvey, Casper Bruun Jensen and Atsuro Morita (Eds.), Infrastructures and Social Complexity: A Companion. London and New York: Routledge.

Ganesh, M. I. (2017). Personal conversation with Martha Poon, Brussels, January 2017.

Galison, P. (2000). ‘An Accident of History’. In Peter Galison and Alex Roland (Eds.), Atmospheric Flight in the Twentieth Century. Springer Science and Business Media: 3–43.

Galison, P. (2014). ‘The Future of Scenarios: State Science Fiction’. In The Subject of Rosi Braidotti: Politics and Concepts, edited by Bolette Blaagaard and Iris van der Tuin, 38–46. London and New York: Bloomsbury Academic.

Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H. and Crawford, K. (2018). Datasheets for Datasets. Retrieved 1 August 2018 from https://arxiv.org/abs/1803.09010

Johnson D.G. (2011) Software Agents, Anticipatory Ethics, and Accountability. In: Marchant G., Allenby B., Herkert J. (eds) The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight. The International Library of Ethics, Law and Technology, vol 7. Springer, Dordrecht

Moor, J. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems 21(4): 18–21.

Natale, S. and Ballatore, A. (2017). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence: The International Journal of Research into New Media Technologies, 1–16. DOI: 10.1177/1354856517715164. Retrieved 22 January 2018.

Powles, J. (2017). New York City’s Bold, Flawed Attempt to Make Algorithms Accountable. The New Yorker, December 20, 2017. Retrieved 5 January 2018 from https://www.newyorker.com/tech/elements/new-york-citys-bold-flawed-attempt-to-make-algorithms-accountable

Rahwan, I. and Awad, E. (2018). The Moral Machine Experiment: 40 Million Decisions and the Path to Universal Machine Ethics. Invited talk at the Artificial Intelligence, Ethics, and Society Conference (AIES), New Orleans, LA, February 1–3, 2018. Retrieved 2 August 2018 from http://www.aies-conference.com/invited-talks/

Rahwan, I. (2016). The Social Dilemma of Driverless Cars. TEDxCambridge. Retrieved 14 November 2017 from https://www.youtube.com/watch?v=nhCh1pBsS80

Reisman, D., Schultz, J., Crawford, K. and Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. AI Now Institute. Retrieved from https://ainowinstitute.org/aiareport2018.pdf

Schuppli, S. (2014). Deadly Algorithms: Can Legal Codes Hold Software Accountable for Code that Kills? Radical Philosophy 187: 2–8. Retrieved 12 February 2018 from http://susanschuppli.com/writing/deadly-algorithms-can-legal-codes-hold-software-accountable-for-code-that-kills/

Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA. Chicago: University of Chicago Press. ISBN 0-226-85176-1.

Weast, J.C., Kohlenberg, T.M. and Johnson, B.D. (2017). Technologies for resolving moral conflicts during autonomous operation of a machine. US Patent Application Publication No. US 2017/0285585 A1, Oct 5, 2017.

Maya Ganesh

Maya Ganesh is a technology researcher, educator and writer who works with industry, arts and culture organisations, academia and NGOs. She is working on a PhD about autonomy, ethics and AI. Her other research interests include: design; financial technologies; post-humanism; and the contested term ‘Anthropocene’. She has worked with Tactical Technology Collective, Point of View Bombay, UNICEF India, and the APC Women’s Rights Program. Her writing and publications are here. She tweets @mayameme.

ThingsCon is a global community & event platform for IoT practitioners. Our mission is to foster the creation of a human-centric & responsible Internet of Things (IoT). With our events, research, publications and other initiatives — like the Trustable Tech mark for IoT — we aim to provide practitioners with an open environment for reflection & collaborative action. Learn more at thingscon.com

This text is licensed under Creative Commons (attribution/non-commercial/share-alike: CC BY-NC-SA). Images are provided by the author and used with permission. Please reference the author’s or the authors’ name(s).
