Contextualizing User Action
A Review of Plans and Situated Actions: The Problem of Human-Machine Communication
Every year or so I get asked to give a talk about User Experience to computer scientists. I don’t start the talk by defining UX. Rather, I start by explaining how UX grew out of AI; how the whole field started as a reaction. I begin with history because so much of the current zeitgeist within computer science is the belief that if we just throw machine learning at a problem, suddenly all the world’s problems will be solved. For instance, I can’t go to a single work presentation these days without someone claiming that we don’t really need users anymore because we’re just going to train a neural network on a pile of images and VOILA!, everything is the betters. I like to ask: how did that work out for John Connor?
This kind of reductionist thinking is very reminiscent of the 1970s and early 1980s, and the parallels are worth reviewing. At that time the power of computers was really starting to astonish people. We were past the space race and applying computer technology to new domains, and Computer Science was building out second- and third-generation programming languages. This, paired with the rise of personal computing, gave rise to the belief that we could program computers to do all tasks with little consideration for how users would actually interact with the machines. User interaction was a hand wave; by that I mean UIs were the lowest item on the list of priorities.
But in the 1980s a bunch of psychologists, anthropologists, and sociologists got together to form an anti-AI cabal. There were many key figures in that cabal, but one work that has a special place in my book-shaped heart is Plans and Situated Actions, by Lucy Suchman. This book deeply shook the status quo and was part of the basis for the creation of the whole field of User Experience (also known as Human-Computer Interaction). You can hear Dr. Suchman speak about her own start here.
In Plans and Situated Actions, Suchman looks at how we program software, and robots in particular. She says that when we program a robot, we tell it to go left, then right, then left again, while traversing a maze. If we give the robot specific instructions about how the maze is laid out, it will get to the other end of the maze without fail. Yay robot! The problem is that life isn’t a simple maze. How we tackle problems, Suchman argues, is inherently tied to the context we find ourselves in moment by moment, and the actions we take in those moments are situated in what is possible and useful for getting the desired outcomes. The key is that context is never the same. Like a river, each time you dip your toe in, it touches different water. Essentially, she argues that we can never enumerate our plans, or our software, with enough granularity to account for every shift in context.
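The maze argument can be sketched in a few lines of code. This toy example is my own illustration, not anything from the book; the maze layout, the plan, and the helper names are all invented. A scripted plan succeeds only in the exact world it was written for, and breaks the moment the world differs:

```python
# Toy sketch (not from Suchman's book): a hard-coded "plan" of moves
# works only in the exact maze it was scripted against.

MAZE = [
    "S.#",
    "#..",
    "#.E",
]

# A fixed plan, written with full knowledge of MAZE above.
PLAN = ["right", "down", "down", "right"]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def run_plan(maze, plan):
    """Follow the plan step by step; fail on any wall or edge."""
    rows, cols = len(maze), len(maze[0])
    r, c = next((i, row.index("S")) for i, row in enumerate(maze) if "S" in row)
    for step in plan:
        dr, dc = MOVES[step]
        r, c = r + dr, c + dc
        if not (0 <= r < rows and 0 <= c < cols) or maze[r][c] == "#":
            return False  # the plan breaks as soon as the world differs
    return maze[r][c] == "E"

print(run_plan(MAZE, PLAN))      # the maze matches the plan: success
shifted = ["S.#", "##.", "#.E"]  # one wall moved; the same plan now fails
print(run_plan(shifted, PLAN))
```

The plan has no way to notice or recover from the changed wall; only something that senses and responds to its context at each step (a situated actor) would make it through both mazes.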
This book is foundational to the field and should be read by every UXer. It is relevant for the historical context, but also because its argument is so painfully relevant to the arguments being made today for machine learning. Machine learning is only one part of any solution. A cool part, but it is only one tool in a vast toolbox; one part of the overall system. Here Suchman argues that when creating new designs we must think beyond the narrow confines of the technology to the whole environment and system:
“[The] dynamics of computational artifacts extend beyond the interface narrowly defined, to relations of people with each other and to the place of computing in their ongoing activities. System design, it follows, must include not only the design of innovative technologies, but their artful integration with the rest of the social and material world.”
It would be nearly impossible these days to get a developer who is designing a new machine learning model to think about the ‘social and material world’ it would sit within; yet, as Suchman argues throughout the book, failing to do so means the output is unlikely to be fruitful or useful.
I’m not saying that we are getting ready for another pendulum swing back. After all, UX is at least a recognized (if underappreciated, with too many people thinking it is just ‘common sense’) field. But I would be remiss if I didn’t gently raise this book up as a reminder of what happens when the user is forgotten or set aside as unimportant. Here is perhaps the most famous quote from the book to foot-stomp that point:
“The efficiency of plans as representations comes precisely from the fact that they do not represent those practices and circumstances in all of their concrete detail. So, for example, in planning to run a series of rapids in a canoe, one is very likely to sit for a while above the falls and plan one’s descent. The plan might go something like “I’ll get as far over to the left as possible, try to make it between those two large rocks, then backferry hard to the right to make it around that next bunch.” A great deal of deliberation, discussion, simulation, and reconstruction may go into such a plan. But however detailed, the plan stops short of the actual business of getting your canoe through the falls. When it really comes down to the details of responding to currents and handling a canoe, you effectively abandon the plan and fall back on whatever embodied skills are available to you… the purpose of the plan in this case is not to get your canoe through the rapids, but rather to orient you in such a way that you can obtain the best possible position from which to use those embodied skills on which, in the final analysis, your success depends.”
Plans represent ideals. Plans represent best cases. Plans are representations. They are not the same as the situated action that we take in response to the context around us. As long as we continue to design software and solutions according to plans without understanding and accounting for contextual differences, software will fail — with or without machine learning.
On a different note, in many ways I had a very privileged upbringing; my parents always supported me learning about programming and building computers. When I got to college, I didn’t want to study computers because I found them so easy. Instead, I wanted to study and understand humans. For me, the really tricky part of computers was knowing what was the right thing to create. I understood intrinsically that there was a gap between being able to create software and being able to create usable software. I ended up majoring in computer science and psychology not even knowing that there was a whole field dedicated to the combination.
When I first discovered UX, and soon after read this book, I felt whole worlds opening up to me. It was like looking into a mirror for the first time: once you have seen yourself, you never look at things the same way again. This book shifted my foundation by giving voice to thoughts and feelings I’d had for years but had never been able to express. It didn’t matter that the book was almost as old as I was. I was just so pleased to have found a well-written argument for why computer science must account for not just the user, but all of their contextual needs. Reading the book was life changing for me.