Risk and Delusion in the Rational World of Machine Learning Engineers (5/7)

Katrin Fritsch
7 min read · Mar 17, 2019


Fifth part of the research project on machine learning engineers’ social imaginaries of artificial intelligence.


Artificial Intelligence is a Tool

One social imaginary that emerged in this research is that machine learning engineers envision AI as a tool. Seeing a technology as a tool is a relatively common-sense view in the history of technology; it supposes that technology is a neutral object able to serve any end (Feenberg, 1999; Winner, 1980). AI then simply becomes an enhancement of whoever decides to use it, as one interviewee made clear:

“I guess, like I’ve said, it’s going to be more like a tool that you can either use for good or for bad. Like, it’s going to enhance what you’re capable of, and it’s up to the people who are using it to actually decide on the impact, right?”
(Interviewee 10)

Contrary to theories of the social construction of technology, which aim to socialise and politicise a technology at every phase of development and through every social actor involved, the social imaginary of AI as a tool amounts to an anti-politicisation of AI in its engineering process.

The idea that AI is inherently neutral attempts to escape the subjective aspects of AI, and therefore, by extension, its politics.

It is not the machine learning engineers who politicise AI, because they only do “science for the sake of science” (Interviewee 6); it is the intentions of its users. Hence, the social imaginary of AI as a tool rejects any prior social responsibility of the machine learning engineers in the development of the technology.

Neatly interlinked with the notion that technology is neutral are arguments about the industrial revolution and the inevitability of progress. This linking of historical events to present conceptions of technology is a central function of the social imaginary, connecting the past with future mobilisations of technological innovation (Jasanoff, 2015). Through this historical argument, interviewees emphasised the similarities between AI today and the industrial revolution of the eighteenth and nineteenth centuries:

“Industrial revolution, they were probably saying as well, we’re losing jobs to machines and we’re making cars now and, yeah, we’ve survived, right? So, I don’t think that there is ever a negative side to technological progress. Because all it can ever do is to augment our experience, right?” (Interviewee 9)

And the interviewee continued:

“It’s a natural progression. And I think if you look back to today in fifty years and look at people complaining about digital healthcare assistants, they’re gonna laugh at the people who were complaining about it. In my opinion.”
(Interviewee 9)

Comparing AI today with the industrial revolution, and framing it under the normativity of progress, is what Andrew Feenberg (1999) terms “technological determinism” (p. 77). He sees it as rooted in two premises: first, progress is conceptualised as unilinear, meaning that however technological developments unfold, progress itself is never in question. Second, progress is seen as a technological imperative that social actors have to accept and adapt to.

From this notion, born in the nineteenth century, follows the assumption that whatever can be done technologically should be done (Postman, 1993). The faith in progress becomes a means to legitimise the engineering of AI, because machine learning engineers can present themselves merely as “instances of ‘technological progress’” (van Lente, 2000, p. 60), rather than as individuals deciding whether the technology should be developed in the first place. By serving progress they take part in what is inevitable anyway.

However, machine learning engineers also find themselves in a dilemma. Technological futures become “two-headed monsters. That is, they cannot be avoided because progress cannot be stopped, and they require efforts to be made because progress should not be stopped” (van Lente, 2000, p. 60). As a consequence, machine learning engineers who emphasise their faith in technological progress cannot help but work to self-fulfil their own technological promises of the future.

The social imaginary of AI as a tool, embedded in the inevitability of progress and rooted in the historical narrative of the industrial revolution, rejects any social responsibility and legitimises machine learning engineers in the innovation of AI. Political intentions are attributed only to the users, never to the engineers themselves.

Progress is not put in question — AI not only becomes desirable, but in fact inevitable.


Artificial Intelligence is an Assistant

The machine learning engineers interviewed also envision AI as an assistant rather than a replacement of human beings. They see the empowering aspect of AI in the combination of humans and machines, as one interviewee clarified:

“But I think at the more immediate level, what is required, is to build machines that can assist humans. Not just completely go away with humans, but keep humans for the properties and for the capabilities they are good for, and keep machines to complement them in their work. So, yeah, in that sense a mix of both of them.” (Interviewee 6)

The social imaginary of AI as an assistant makes it possible to diminish any social question about humans being replaced by machines. An assistant does not take the place of a human being, but rather helps with tasks and augments experiences. It also implicitly frames the envisioned power structures of AI. If AI is only an assistant, it is located beneath human beings, performing tasks humans do not want to do. AI becomes a servant, an artefact not worth questioning, as it is the humans who are in control of it.

The social imaginary of AI as an assistant emerged within optimistic accounts of the technological future the engineers see themselves contributing to. These contexts included self-driving cars and healthcare, which promise human freedom by saving time and money, allowing people to focus, and providing more comfort.

David Nye (2004) places this framing of a technology as an assistant within a utopian vision of technology, one that emphasises the ameliorative power of new machines to improve everyday life. Imagining AI as an assistant, one machine learning engineer said:

“So, you know, hopefully these things are gonna help us work less so we can focus on developing ourselves more. Learning more, progressing, growing more rather than doing a dumb, dull, yeah, to an extent dumb job.” (Interviewee 1)

The idea of offloading tasks and relieving humans is a relatively common assumption in discussions about automation (Elish, 2016). Throughout the history of technology, new machines have often been assumed to bring human freedom and relief (Feenberg, 1999). Evgeny Morozov (2013) terms this “solutionism” (p. 6), a rhetoric that promises social solutions such as freedom or democracy through technologies and thereby obfuscates the actual social questions lying beneath them. Interestingly, some researchers have shown that these solutions often do not even materialise, and that new technologies rather increase work time (Bainbridge, 1982; Parasuraman & Riley, 1997; Tenner, 1997).

In the realm of healthcare specifically, AI was considered not only an assistant that allows doctors to save time, but also a system able to evaluate diseases more accurately. Image recognition for MRI scans in particular was used as an example to underline this argument. Within this context, AI was envisioned as a machine that could save lives, because it does not entail human error but judges with more precision which diseases people have and how they should be treated. Neatly intertwined with the notion of AI as an assistant is the argument that humans should be kept in the loop of AI systems in order to make sure the machines work properly:

“But what I believe is good is if the machine is in the loop with the human. I don’t think they should be used independently at the moment.” (Interviewee 12)

Keeping humans in the loop is another prevailing rhetoric in current discussions about humans and automation (Elish, 2016). It is particularly worth investigating because it carries assumptions about whom to hold accountable if automated systems fail. In her research Moral Crumple Zones, Madeleine Clare Elish (2016) shows, based on a case study of the crash of Air France Flight 447, how humans often serve as the bearers of moral blame and face legal penalties for mistakes made by automated systems. She argues that keeping humans in the loop assumes that humans are able to supplement automation if needed, and that if they fail to do so, they are the ones who, on a societal level, are held accountable.

The emerging theme that humans should be kept in the loop of the systems that machine learning engineers build therefore again shifts responsibility away from the engineers and towards the human beings who are supposed to monitor and control these systems. Yet in a highly “computerized society” (Nissenbaum, 1996, p. 25), it is impossible to hold only one human being accountable, because of what Helen Nissenbaum (1996) calls “the problem of the many hands” (p. 25). Many people are involved in creating and implementing technological systems, and all of them should be held accountable if these systems fail.

The machine learning engineers’ social imaginary of AI as an assistant is another technological vision with its own implications: it not only promises human freedom and safety, but also legitimises the innovation of AI without questioning it, because AI would only serve humans, not replace them. Additionally, it is the human beings in the loop who should be held accountable if the system fails, not the AI itself or the machine learning engineers who built it. It is the power of technological promises that enables the social action needed (van Lente, 2000). Consequently, a critical stance towards these promises is necessary, which leads to another social imaginary: AI as a delusion.

This is the fifth part of the research project Risk and Delusion in the Rational World of Machine Learning Engineers. To get an overview and more information about the publication please visit Part 1. If you wish to continue reading, go to Part 6.

Full citation: Katrin Fritsch, “Risk and Delusion in the Rational World of Machine Learning Engineers”, MOTIF Institute for Digital Culture, (March 16, 2019), https://medium.com/@katrinfritsch/risk-and-delusion-in-the-rational-world-of-machine-learning-engineers-1-10-e739df39056a

