The Action of Inaction: Design for Situated Cognition

As designers of interactions and experiences, we are concerned with the interface between the human and the technological (however those terms are construed). When we talk about “Computer/Human Interaction” we are considering the interface and interaction between two separate entities, the human and the technological. We go to great lengths to understand the technological aspects of our work, such as programming, system architecture, and networks. Yet we are deeply limited in understanding the other half of the equation: the person ultimately using our designs. Sure, we do user research involving methods like mental models, user stories, use cases, and flows. These tools help us understand and delve into the psychology of our audience, but even user research is tactical in nature; we are less concerned with a user’s internal, cognitive processes than with their more observable, and hence quantifiable, behavior.

While some designers would likely quibble with this assertion, there are many valid reasons to work this way. Primary among them is that most of us work within strict guidelines and deadlines. Our prototypes have empirically testable attributes and must meet a given set of criteria. Considering these real-world constraints, we are concerned with a user’s cognitive processes only insofar as they impact her ability to interact with our prototype and accomplish a set list of user tasks. But what if the experiences we create are far more instrumental in the lives of our users than we have traditionally considered? How would we change our approach if our products were coupled to a user’s cognitive processes the way a pacemaker is coupled to her physiological processes?

Consider this: Otto has Alzheimer’s disease. Because of his failing memory, he carries a notebook in which he writes down any fact, detail, or other information he finds relevant. He always has the notebook with him and trusts its contents implicitly. One day, Otto decides to go to the Museum of Modern Art (MoMA) in New York City. He consults his notebook and finds that MoMA is located on 53rd Street (Clark and Chalmers, 1998). Based on this example, could it be said that Otto’s cognitive system does not reside entirely within his brain? That is, could the pairing of Otto and his notebook comprise a cognitive system? The debate about where cognition occurs and what comprises it is the hallmark of the situated cognition movement. While there are many flavors of situated cognition, the Otto example illustrates a school called extended cognition.

Very briefly, there are three primary schools of thought within situated cognition: extended cognition (the most radical), embedded cognition, and embodied cognition. To oversimplify, proponents of extended cognition hold that there is no meaningful boundary between the agent and its environment; in other words, cognition can extend beyond the bounds of the individual and into the user’s environment. Proponents of embedded cognition hold that the cognitive process is embedded in the user’s environment, but that the actual process of cognition does not extend outside the bounds of the individual. The embodied view is agnostic on where cognition actually occurs, but holds that the environment is instrumental in cognitive processes (think of linguistic concepts that rely on physicality, e.g. the desktop metaphor or the phrase “putting your best foot forward”). For an in-depth examination, check out Rob Rupert’s Cognitive Systems and the Extended Mind (Rupert, 2009).

But so what? What significance does situated cognition have for design? If we pragmatically accept that extended cognition exists, then the devices, interactions, and experiences we design are far more than products distinct from the end user; they are an instrumental part of that user’s cognitive system. I do not personally believe that extended cognition of the sort the Otto example describes exists. However, my point is not to argue about the locus or nature of cognition, but to examine its possibilities from a design perspective.

Despite the misgivings of a few (mine included), our culture has tacitly accepted that technology constitutes a critical addition to cognition. It is critical in the sense that if it were to suddenly cease to exist, we could be considered cognitively impaired, because we would have to learn (or relearn) certain functions in its absence. For example, if pencils (and analogues like pens), which are instrumental in doing long division, were to disappear, we would need either to learn to do long division in our heads or to invent another tool capable of externalizing long division.
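To make “externalizing” a little more concrete, here is a minimal sketch (in Python, and purely illustrative, not part of the original argument) of what such a replacement tool might look like: a hypothetical long_division function that carries the intermediate working in an external record rather than in the head, much as pencil and paper, or Otto’s notebook, would.

    # A toy "externalizing tool" for long division (hypothetical example).
    # The intermediate work is written to an external record (the 'steps' list),
    # playing roughly the role that pencil and paper play for a human.

    def long_division(dividend: int, divisor: int) -> tuple[int, int, list[str]]:
        """Divide a non-negative integer digit by digit; return quotient,
        remainder, and the worked steps as an external record."""
        if divisor == 0:
            raise ValueError("divisor must be non-zero")

        steps = []        # the "paper" the working lives on
        quotient = 0
        remainder = 0

        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)   # bring down the next digit
            q = remainder // divisor                  # how many times the divisor fits
            steps.append(f"bring down {digit}: {remainder} / {divisor} = {q}, "
                         f"remainder {remainder - q * divisor}")
            quotient = quotient * 10 + q
            remainder -= q * divisor

        return quotient, remainder, steps

    if __name__ == "__main__":
        q, r, worked = long_division(9345, 7)
        print(q, r)          # 1335 0
        for line in worked:
            print(line)

The point of the sketch is not the arithmetic itself but where the working resides: delete the external record and the function (like the person without a pencil) must hold every intermediate remainder internally.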

As I noted above, it’s disingenuous to suggest that we don’t consider the cognitive processes of our users (that is the main function of mental models, after all). What I’m suggesting is that mental models are only the tip of the iceberg. Methods like contextual inquiry are great for observing interaction, but how well do they measure concepts that could demonstrate cognitive coupling? (More on this later.) In short, our methods are great for designing systems that “satisfice”, but not as useful for designing to fully engage a user at the level of cognitive coupling. Given limited resources, this approach may be sufficient for systems that don’t require a deep level of engagement. But wouldn’t an experience that was engaging at the level of cognitive coupling be superior for certain applications? “Don’t make me think” might be a good credo for design, but it shouldn’t be construed as “don’t make me engage”. Again, there are likely to be legions of cognitive scientists, philosophers, psychologists, and others lining up to dispute this.