does automation free us or enslave us?

Amy J. Ko
Published in Bits and Behavior
Jan 20, 2011 · 4 min read

In his new book Shop Class as Soulcraft, Matthew Crawford shares a number of fascinating insights about the nature of work, its economic history, and its role in the maintenance of our individual moral character. I found it a captivating read: it encouraged me to think about the distant forces of tenure and reputation that shape my judgments as a teacher and researcher, and to reconsider to what extent I let them intrude upon what I know my work demands.

Buried throughout his enlightening discourse, however, is a strike at the heart of computing — and in particular, automation — as a tool for human good.

His argument is as follows:

“Representing states of the world in a merely formal way, as “information” of the sort that can be coded, allows them to be entered into a logical syllogism of the sort that computerized diagnostics can solve. But this is to treat states of the world in isolation from the context in which their meaning arises, so such representations are especially liable to nonsense.”

This nonsense often gives the machine, rather than the person, the authority:

“Consider the angry feeling that bubbles up in this person when, in a public bathroom, he finds himself waving his hands under the faucet, trying to elicit a few seconds of water from it in a futile rain dance of guessed-at mudras. This man would like to know: Why should there not be a handle? Instead he is asked to supplicate invisible powers. It’s true, some people fail to turn off a manual faucet. With its blanket presumption of irresponsibility, the infrared faucet doesn’t merely respond to this fact, it installs it, giving it the status of normalcy. There is a kind of infantilization at work, and it offends the spirited personality.”

It’s not just accurate contextual information, however, that is missing from the infrared faucet, which steals our control in order to save water. Crawford argues that there is something uniquely human that is critical to sound judgment but inimitable by machines:

“… in the real world, problems don’t present themselves in this predigested way; usually there is too much information, and it is difficult to know what is pertinent and what isn’t. Knowing what kind of problem you have on hand means knowing what features of the situation can be ignored. Even the boundaries of what counts as “the situation” can be ambiguous; making discriminations of pertinence cannot be achieved by the application of rules, and requires the kind of judgment that comes with experience.”

Crawford goes on to assert that this human experience, and more specifically, human expertise, is something that must be acquired through situated engagement in work. He describes his work as a motorcycle mechanic, articulating the role of mentorship and failure in acquiring this situated experience, and argues that “the degradation of work is often based on efforts to replace the intuitive judgments of practitioners with rule following, and codify knowledge into abstract systems of symbols that then stand in for situated knowledge.”

The point I found most damning was the designer’s role in all of this:

“Those who belong to a certain order of society — people who make big decisions that affect all of us — don’t seem to have much sense of their own fallibility. Being unacquainted with failure, the kind that can’t be interpreted away, may have something to do with the lack of caution that business and political leaders often display in the actions they undertake on behalf of other people.”

Or software designers, perhaps. Because designers and policy makers are so far removed from the contexts in which their decisions will manifest, it is often impossible to know when software might fail, or even what failure might mean to the idiosyncratic concerns of the individuals who use it.

Crawford’s claim that software degrades human agency is difficult to contest, and yet it is at odds with many core endeavors in HCI. As with the faucet, deficient models of the world are often at the root of usability problems, and yet we persist in believing that we can rid ourselves of them with the right tools and methods. Context-aware computing, as much as we try, is still in its infancy, nowhere near producing systems that come remotely close to facsimiles of human judgment. Our efforts to bring machine learning into the fold may help us reason about problems that were previously beyond reason, but in doing so, will we inadvertently compel people, as Crawford puts it, “to be that of a cog … rather than a thinking person”? Even information systems, with their focus on representation rather than reasoning, frame and fix data in ways we never intended (as in Facebook’s recent release of phone numbers to marketers).

As HCI researchers, we also have some role to play in Crawford’s paradox about technology and consumerism:

“There seems to be an ideology of freedom at the heart of consumerist material culture; a promise to disburden us of mental and bodily involvement with our own stuff so we can pursue ends we have freely chosen. Yet this disburdening gives us fewer occasions for the experience of direct responsibility… It points to a paradox in our experience of agency: to be master of your own stuff entails also being mastered by it.”

Are there types of software technology that enhance human agency, rather than degrade it? And to what extent are we, as HCI researchers, furthering or fighting this trend by trying to make computing more accessible, ubiquitous, and context-aware? These are moral questions that we should all consider, as they are at the core of our community’s values and our impact on society.
