Designing for the Human Scale

Andy Fitzgerald
Mar 15, 2017

This article was first presented as the opening keynote at World IA Day Zürich in February 2017.

In 1958, mathematics genius and digital computing pioneer John Von Neumann published a short book called The Computer and the Brain. Von Neumann makes the argument that the human nervous system is fundamentally digital, drawing on exhaustive parallels between the computers of the day and the structures of the human brain.

Today, I would like to talk with you about the profound influence this metaphor has had, not only on the way we think about how our brains operate, but also on how we design for and interact with the technology that is increasingly intertwined with our lives. These factors combined ultimately influence how we design for and interact with each other.

The Human Scale

As information architects, we’re responsible for building shared information environments for people to use. Untangling the needs of our users from the requirements of technology is the first step in this process, and something that must always be on our minds.

In The Way We Think, Gilles Fauconnier writes that

“we divide the world up into entities at the human scale so that we can manipulate them in human lives.”

This is what I mean when I say, “the human scale.” This is also what models like the “mind as computer” metaphor work to accomplish. As complex as they have become, the basic operating methods of computers (known as “the Von Neumann architecture”) give us simplified ways to conceptualize human perception, comprehension, and decision-making.

As obvious as this may appear to us now, this metaphor is one in a long line of similar “brain as _____” metaphors. With the invention of hydraulic engineering in the third century BCE, for example, a hydraulic model of human intelligence emerged. This is one where the flow of different fluids in the body, “the humors,” accounted for both our physical and our mental functioning. This model persisted for more than 1,600 years — and surely caused the early and agonizing deaths of countless millions.

In the 1500s, automata powered by springs and gears were devised. This inspired thinkers such as René Descartes to assert that humans are complex machines. In the 18th and 19th centuries, electricity and communication technology provided the dominant metaphor. German physicist Hermann von Helmholtz, for instance, compared the brain to a telegraph.

Each model, unsurprisingly, is a reflection of the new and revolutionary technology of its day. Technology designed, ironically, precisely to do something that humans are intrinsically bad at: the force of hydraulics, the precision and regularity of automata, the long-distance fidelity of telephony. Each one, however, within its limits, allowed people to conceptualize and discuss the complex processes of cognition more clearly.

Today these models feel naive, quaint, or silly, of course — just as our computational model of the mind will feel to future generations. And just like these earlier models, while the computational model offers some truly innovative ways to discuss human perception, cognition, and decision-making, it likewise has its limits. Computers, after all, are designed precisely to do the things that we’re not good at doing.

Just like these earlier models, when we try to extend the mind-as-computer model beyond its effective scale, we end up with results that are sometimes merely ineffective, sometimes ridiculous, and sometimes dangerous.

My goal today is to tease apart the impact this computational metaphor of the mind has on how we create shared information spaces for people, and how we as information architects can bring a better understanding of this model’s insights and limitations to our design work.

I’ll do this by focusing on three key topics:

  • Embodiment
  • Magnitude
  • Belief

These are areas where, in my study of perception, cognitive psychology, and complexity and systems theory, I’ve seen the work we do as designers all too often favor the needs of machines at the expense of the people for whom we design. I’ll also, along the way, offer some tips and guidelines for how to recognize gaps in scale and how to create and advocate for better design decisions.

Embodiment

One major impact of the brain-as-computer metaphor is the minimization or elimination of the importance of the body to thinking. This is already present in the Cartesian tradition of the separation of body and mind as distinct elements and is reinforced by the transferability of “knowledge” from computer to computer (i.e. “programs”).

In its simplest form, according to psychologists Andrew Wilson and Sabrina Golonka, embodiment is the idea that

“cognition spans the brain, body, and the environment. [It] is an extended system assembled from a broad array of resources.”

The computational metaphor encourages us to think of the brain and the body as a workstation: a CPU and its peripherals. And it encourages us to think of these as separate, independent components. For machines, of course, this is true: these components are largely interchangeable. For humans, however, these elements are not functionally distinct. Thinking doesn’t stop at our brain stem: our bodies and environment are deeply involved in the process.

This is where fantasies of uploading one’s mind to a computer are, from a cognitive psychology perspective, incredibly naive. Even if we accept that the brain is a pattern, and that that pattern can be replicated elsewhere (i.e. without a body), this pattern would mean something very different — or nothing at all. As research psychologist Robert Epstein puts it, “The vast pattern of the brain would mean nothing outside of the body of the brain that produced it.”

The reason for this is that our bodies provide the initial foundational structures upon which the rest of our thought processes are built. Architecture provides an easy example of how our bodies structure the way in which we operate in the world. Kitchens, for example, are designed to accommodate the human scale in a very direct way.

There’s a concept called the “kitchen triangle,” wherein the refrigerator, the stove, and the sink should be no farther than 9 feet apart, but no closer than 4 feet. Additionally, the interior of this triangle has to be unencumbered; we need to be able to move freely through this space.

These measurements aren’t dictated by building codes or safety considerations. They’re required by the length of our limbs, our ability to twist and move, the measure of our stride. You can still make fondue in a kitchen built to the wrong proportions, but it will be harder. And you probably won’t notice that the environment is to blame. You’ll more likely feel like you’re doing something wrong, or that it’s simply just “not working.”

When we design for information spaces, the way that our bodies operate in the world is equally important. For more abstract concepts, however, this idea can be a bit harder to tease apart.

In his seminal work on perception and cognition, The Ecological Approach to Visual Perception, JJ Gibson writes that “to perceive the world is to co-perceive oneself.” The awareness of the world and of one’s complementary relations to the world are not separable.

Gibson means this in a holistic sense. For Gibson, perception is based on perception of and movement in the world, proprioception (the awareness of our bodies in space), and sensation (i.e. the feedback we get from the environment).

The wholeness that we perceive in Gestalt visual phenomena, for example, is co-perception of the self. We see wholes in collections of objects that are perceived to be at the human scale. This is what Kurt Koffka calls “the tendency to see the whole as other than the sum of its parts.” Gibson and others argue that this isn’t a cognitive process that happens in our brains, but rather a perceptive process tied to embodiment that filters information in our environment as it is observed.

Cognitive linguist George Lakoff argues that this embodied perception and awareness of our complementary relation to the world results in a set of pre-conceptual structures that operate as the building blocks for how we make sense of everything else. Gestalt phenomena are one set of such preconceptual structures.

A second and equally important set of preconceptual structures are what Lakoff calls “image schemas.” These are the relatively simple structures that occur regularly in our everyday lives and that we use to structure the rest of our thought.

The container schema, for instance, stems from how we understand our own bodies. The most basic things we do are ingest and excrete, inhale and exhale, keep the insides in, keep the outsides out. This allows us to define a clear embodied sense of interior, exterior, and the boundary between them.

This foundational schema also provides the pre-conceptual framework that structures our thoughts, even around the most basic daily experiences. Lakoff’s co-author, Mark Johnson, gives a brief example in the first few minutes of an ordinary day:

“You wake out of a deep sleep and peer out from beneath the covers into your room. You gradually emerge out of your stupor, pull yourself out from under the covers, climb into your robe, stretch out your limbs, and walk in a daze out of your bedroom and into the bathroom.”

… and it goes on from here. Our days proceed like this: we figure out, work out, dial in, incorporate, come into, go out of sight.

Containers also structure the logic of classes. Putting things or concepts into categories is using the container schema. This is the structure behind how we think, how we formulate simple and complex ideas. It’s deeply intertwined with our personal experience in our own bodies.

Another critical pre-conceptual structure is the part-whole schema. Our whole lives are spent with an awareness of both our wholeness and our parts. Consider the noun “individual,” or the pronoun “one.” Or the way that we refer to our limbs as our “body parts.”

Lakoff specifies that the relationship between part and whole is asymmetrical. A whole implies the existence of the parts, but all the parts alone don’t equal a whole. Here we can see a deeper root to the gestalt phenomena we looked at just a minute ago: the part-whole structure is something that we understand because of our bodies.

This understanding runs so deep that any unintended violation of the whole tends to be pretty troubling. This is actually a critical piece of the embodiment puzzle. Our preconceptual structures resonate on a physical and an emotional level. Though I’m analyzing them with you now, none of us needs an explanation to understand them.

Machines, of course, are also subject to these rules of part-whole operations: a pile of parts is not the same thing as a working iPhone. While container and part-whole schemas are cornerstones of our pre-conceptual processes, however, viscerally understood through our embodied experience, digital systems use a very different foundational process based on binary operations. Everything in a digital system ultimately evaluates to true or false. As James Gleick puts it, “No matter which language you use, they all are reducible to the language of a universal Turing machine.”

With this knowledge of the foundational differences of pre-cognitive structures of humans and machines, we can start to see where matters that we might otherwise simply dismiss as usability problems are actually the result of structural biases at the heart of our designs.

Consider, for example, the Starbucks marketing website as it appeared this most recent holiday season. We can see a very standard horizontal menu with a logo, navigation, shopping cart, and a store locator. In the menu we’ve got coffee, tea, menu, coffeehouse, blog, and shop. If we open up the shop menu, we also see coffee and tea in this menu (Starbucks, after all), and then gifts, equipment, and drinkware, which, if you’re in a Starbucks coffee house, are also on the shelves all around you.

If you click on any one of these links, you’ll end up on a similar page — the hero image has changed, but it’s got the same navigation and wayfinding elements. And it’s still got coffee, tea, drinkware, and equipment in the nav bar. But here, if we click on coffee, we’re now in a wholly different place. All these menu items are different, and where I can go has changed. We’ve been transported, and moved in a way that is inconsistent with our fundamental models of navigating space in the world.

Our sense of what is part and what is whole on the Starbucks site is confounded. We initially understand a whole composed of component parts. But one of those parts has become a new, sibling whole, on the same hierarchical level as its origin. The effect is that what is presented as a single space is experienced at the human scale as multiple and fragmented. This is a marketing site, an in-store experience, and an online shop. But shop, coffeehouse, store, coffee, tea, and product categories and labels are used without discrimination across the experience. Parts and wholes are mixed, contained, and ruptured in a half-dozen clicks.

Digitally, there’s nothing wrong with this. Polyhierarchy scales to digital systems just fine. Computers aren’t bothered by needing to keep track of multiple locations of items — in fact, it’s one of the things that they do best. Unsurprisingly, this isn’t the case for humans.
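To see why this poses no problem for a machine, here is a minimal sketch (the labels are hypothetical, not Starbucks’ actual information architecture) of a polyhierarchy as a simple mapping. Finding every location of an item is a one-line operation; nothing in the structure expects a thing to live in only one place.

```python
# A minimal, hypothetical polyhierarchy: a parent -> children mapping in which
# "Coffee" and "Tea" appear in several places at once. The machine doesn't mind.
site = {
    "Home":        ["Coffee", "Tea", "Coffeehouse", "Shop"],
    "Shop":        ["Coffee", "Tea", "Drinkware", "Equipment"],
    "Coffeehouse": ["Coffee", "Tea"],
}

def locations(item, tree):
    """Return every parent under which `item` appears."""
    return [parent for parent, children in tree.items() if item in children]

print(locations("Coffee", site))  # ['Home', 'Shop', 'Coffeehouse']
```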

I’ve spent some time figuring this site out, and my guess is that this is the result of a collision between a marketing team and an online store, and that the solution was just to make the site work. In the same way that you can still cook in a bad kitchen, you can still learn about and shop at Starbucks here.

The issue is not, however, that we can’t figure this out: the issue is that the site is poorly scaled to the way that we fundamentally understand the world. This makes it unpleasant to use and makes us feel like strangers; it makes us feel like we’re doing something wrong. It doesn’t, at its root, take into account our embodied, preconceptual understanding of the world.

To be fair, someone at Starbucks probably did see this problem. It’s also likely that whatever tools they had at their disposal were insufficient to advocate for the change that they wanted to see.

One of the things that we can do to bring a consideration of embodiment into our design decisions is to factor it into the tools we already use. When assembling user journeys, for example, we can integrate questions related to embodiment into the elements we’re already tracking:

  • What are the embodied metaphors at work in our user’s journey?
  • What might the experiences we design look and feel like in the physical world?
  • Does our digital experience break the rules of physical space?
  • If the digital experience does break the rules of physical space, are the new rules clear and comfortable for our users?

Sometimes, of course, we intend to create new, revolutionary, even out-of-body experiences. By keeping in mind the role our bodies play in structuring our thoughts, we can do a better job of empowering our users in those new spaces.

Magnitude

In addition to giving us the foundational building blocks of conceptual thinking, our bodies also provide us with a handy basic unit of measurement. This helps us evaluate concepts at the human scale with astounding speed and accuracy.

The further we stray from that human scale, however, the less astounding the results tend to be.

Magnitude, in its simplest definition, is the relative size of a measurement, object, or quantity. Our associative powers, stemming in large part from our innate ability to categorize and recognize wholes, are, in general, far superior to those of machines. But they’re also tied to the human scale. We make comparisons of novel situations based on our understanding of past situations. What this means is that the orders of magnitude for which we have the best heuristics, the best rules of thumb, exist on the human scale.

The concepts and relationships that we retain and which we use to guide our judgments stick because we can relate to them. For example, base-10 numbers and powers of ten form the foundation of our number system because we have 10 fingers. It’s how we learn to count.

Also, consider the prevalence of dozens in our world: clocks, calendars — the English measurement system, kooky as it is. All of these give us easy, intuitive ways to split wholes into halves, thirds, and quarters. If you’re not especially literate in math, or not very good at calculating sums with multiple decimal places, this is an easier way to split a recipe or reframe a measurement.

In order to get our heads around orders of magnitude beyond the familiar numbers of our fingers and toes, we translate. Now, in America, for some reason, we often make these translations in the North American unit of the football field.

This is an illustration of income inequality created by David Chandler. Here we have our football field. If we zoom in, we can see that this red line that runs the length of the football field, this is a measure of US household income distribution, based on census data. The height of the red line at any point is income, measured as a stack of $100 bills.

At the 50 yard line, we see the income of a median American family, about $55k. Half of Americans earn less than this; half of Americans earn more. As we zoom out, we can see a long, slow ramp from zero at the far left of our field to about $150k at the 90 yard line. That’s a stack of bills about 43 cm high.

Once we pass the 99 yard line, the graph turns dramatically upward; that’s that vertical spike that you see. This is the region occupied by the infamous 1%. About ten inches from the goal line — ten inches from the end of our football field — the graph hits the one million dollar mark. A one million dollar stack of $100 bills is about a meter high, about the height of a toddler.

As we zoom out further, we can see what happens in the last few inches of our football field. That tree is a giant sequoia. Our stack of $100 bills here is roughly a kilometer high. This is one billion dollars. As we zoom out even further still, we see Mt Everest come into view. And then further still, and our graph stops at 50 billion dollars. Chandler stops here because this is the greatest estimated one-year increase in Bill Gates’s net worth.
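The arithmetic behind these stacks is simple. Here is a rough sketch, assuming a US bill is about 0.011 cm thick (the commonly cited figure); the exact numbers matter less than the jumps in scale.

```python
# Back-of-the-envelope heights for stacks of $100 bills.
# Assumes a bill is roughly 0.011 cm thick.
BILL_THICKNESS_CM = 0.011

def stack_height_m(dollars, denomination=100):
    """Approximate height, in meters, of `dollars` stacked in `denomination` bills."""
    return dollars / denomination * BILL_THICKNESS_CM / 100

print(stack_height_m(55_000))          # ~0.06 m: median household income
print(stack_height_m(1_000_000))       # ~1.1 m: about the height of a toddler
print(stack_height_m(1_000_000_000))   # ~1,100 m: roughly a kilometer
print(stack_height_m(50_000_000_000))  # ~55,000 m: many times the height of Everest
```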

The frames in the diagram increment in orders of magnitude: each square is ten times larger than the previous frame. We understand this mathematically; we get the math. But without an illustration there’s no reference point tied to our embodied existence that makes us care — or even allows us to care. Without these kinds of aids, numbers this large don’t mean anything to us; we don’t have any way to put them in perspective. Not in a human-scaled way.

Examples like this are made to take us by surprise, to outrage us. That’s their point. It turns out, however, that we do this kind of scaling of magnitude naturally all the time. So much so that we often don’t recognize that it’s happening.

Consider, for example, what a stack of 6000 sheets of paper would look like. Could you fit 6000 sheets of paper in your book bag? If they were on the chair next to you, would the stack rise above your head? Would they fill your car?

If you’ve ever put paper in a copy machine, you’ll probably solve the question like this: You know a ream of paper is 500 sheets, which means 6000 sheets would be a dozen reams. Which would come up to about your waist. This gives you a pretty good approximation — because you were able to translate it into a human scale.
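The same back-of-the-envelope reasoning, written out (assuming a 500-sheet ream is roughly 5 cm thick):

```python
# Estimating the height of 6000 sheets of paper by translating to reams.
SHEETS_PER_REAM = 500
REAM_THICKNESS_CM = 5                   # a rough figure for ordinary copy paper

sheets = 6000
reams = sheets / SHEETS_PER_REAM        # 12 reams
height_cm = reams * REAM_THICKNESS_CM   # ~60 cm: a stack you can picture beside
print(reams, height_cm)                 # your chair, nowhere near filling a car
```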

Digital systems, of course, don’t have this problem. Your phone can accurately calculate the volume of 6000 sheets of paper just as easily as it can calculate the volume of 6 million. But, once again, one of those numbers will mean something to us, whereas for the other we’ll need to translate again in order to put it in perspective.

As our uses for digital tools extend beyond simply calculating sums and volumes, as they undoubtedly already have, we’re beginning to see the impact of being exposed in personal ways to orders of magnitude beyond the human scale. Social media provides a salient and, given recent events on the global political scene, an urgent example.

The kinds of network platforms organizations like Facebook and Twitter build are perfectly tuned to the capabilities of digital network protocols. Hypertext, for instance, is simply a pointer from one node to another. It’s the backbone of the world wide web. Computers can keep track of these connections, for all practical purposes, infinitely.
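A minimal sketch of what this looks like from the machine’s side (the names are hypothetical): a digital social graph is a flat collection of identical, untyped pointers. Nothing in the structure itself distinguishes a sibling from a coworker from a stranger.

```python
# A social graph as a machine stores it: a flat set of identical edges.
# Every connection is the same kind of pointer; nothing here encodes *how*
# two people are related.
edges = {
    ("ana", "ben"),    # spouse? coworker? someone met once? the graph can't say
    ("ana", "chris"),
    ("ben", "dana"),
}

def connections(person):
    """Everyone linked to `person` -- all "friends," all equivalent."""
    return {b for a, b in edges if a == person} | {a for a, b in edges if b == person}

print(connections("ana"))  # {'ben', 'chris'}
```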

When we look at how people build networks in traditional social groups, however, we can see that just like in processing numbers outside of the human scale, our method of network building is quite different.

When we’re children, for instance, we form particular kinds of connections with our families. When we go to school we learn, sometimes painfully, different ways of connecting with schoolmates and teachers. This happens again when we enter the workforce. And again when we marry into a new family or become parents, or become active in politics in our community.

At the human scale, when we increase the number of connections in a network we, by necessity, also increase the kinds of connections. Indeed, if we didn’t, and instead related to our coworkers the same way we relate to our siblings or our spouses, we would run into problems pretty quickly.

As it turns out, this is something that people in general are really good at, in the same way that we’re good at identifying wholes and sorting things into categories. The ability to differentiate members of our social network is an innate capability.

It’s not, however, core to the operational language of digital networks. Mapping kinds of relationships onto digital networks is a painstaking process, often done manually. By people. What this means for all practical purposes is that the most popular social networks grow in ways that are good for machines — but which ultimately undermine the most human qualities of relationship building.

Everyone you add on Facebook is a “friend”; everything they post is “news.” That doesn’t mean that you forget who’s your family and who’s your coworker. It does mean, however, that digital social networks drain the nuance out of how we build and sustain relationships.

It’s not surprising then that, despite being “more connected,” we’ve seen an increase in individuals’ sense of isolation and in not being able to relate to other groups. What’s been termed in the media as “being in a bubble.” In some cases researchers have also demonstrated an increase in unhappiness while engaged in digital social networks.

Like the Starbucks example earlier, with enough effort we can learn how to navigate these spaces. As we’ve seen here, however, this is a case where technology, because of its fundamental differences, gives the impression of extending connectivity while actually making human connections more difficult. The net effect is that such social networks are functionally anti-social networks.

I don’t mention Facebook or Twitter merely to throw stones at a familiar scapegoat. This isn’t a trait of one particular platform. It’s a symptom of the difference between how digital networks grow and how human networks grow. It’s this which, left unchecked, leaves us as users feeling like we’re doing something wrong.

In order to bring these networks and others back to the human scale, we need to keep the differences in their foundational structures in mind as we make information design decisions. There are a few ways we can do this:

Pay Attention to Quantities
Did any of you have 450 friends before Facebook? That’s a scale problem. When we’re looking at quantities, we should look for tens, dozens, halves, and quarters. Even Dunbar’s number, which suggests that people can maintain about 150 meaningful relationships at any given time, is still divided into human-sized chunks that we can manage.

Look for Thresholds Where Tactics Change
As we saw with the income inequality and paper examples, we deal with different kinds of increases in magnitude in different ways. The number of possible connections in a network of nodes grows predictably (roughly with the square of the number of nodes), but a network of human relationships hits thresholds of familiarity before it simply tops out at “how we treat strangers” (i.e. everyone else).

Test with Your Target Audience
In many cases this is the only place where you’ll be able to see how people react. Users who have built expertise in a particular area may have no problem with related stats, or with very large or very small numbers. But that same order of magnitude outside of their area of expertise can leave them not knowing which way to turn and, crucially, not knowing what to believe.

Belief

It’s difficult to engage the current state of information — and misinformation — without talking about belief. For all intents and purposes, most if not all of us here have access to the accumulated sum of human knowledge in our pockets.

And yet, as columnist Joe Keohane writes in the Boston Globe,

“It’s never been easier for people to be wrong, and at the same time feel more certain that they’re right.”

Sadly enough, this was written in 2010. And there hasn’t been much indication that the situation has improved since then.

To substantiate his claim, Keohane cites a body of psychological research around a concept known as “motivated reasoning.” The gist of this argument is that people tend to seek consistency and tend to interpret new information with an eye toward reinforcing preexisting views.

In effect, once a “fact” (whether true or not) is internalized, it is very difficult to dislodge. The Internet, then, appears to present us with the mother of all scale problems: If we have unfettered access to a world’s worth of information, both that which is grounded in verifiable evidence and that which is “speculative” at best, and we must choose between competing views, the easiest route is to go with what we already know — or what we think we know.

This is consistent with everything we’ve seen so far. Our relationships with space and categories are based on what we know from experience in our own bodies. Our relationships with magnitude are based on the physical forms around us that we experience in a tangible way.

As designers of information spaces — as information architects — this is, of all problems, our problem. How do we bridge the gap between the information we’re asked to structure and the influence that information has on our users’ understanding of the world?

Though this problem may feel as intractable as the Internet itself, we can collectively start tackling it more effectively by cultivating a better understanding of how beliefs are formed, and how the metaphors we use to understand how we perceive, assess, and decide help us achieve our goals — and where they don’t.

In his new book Liminal Thinking, author Dave Gray defines belief as “the story in your head that serves as a recipe or rule for action.” If we were to interpret this with the computational metaphor we might think, “oh, it’s like a program or an algorithm,” a series of steps a processor follows to reach a defined and repeatable conclusion. This, of course, isn’t just an oversimplification; it actually points us in the wrong direction.

Gray offers a model of how belief is formed, and what we can do to align beliefs to the goals and outcomes to which we aspire for our lives.

His model starts with reality, in the abstract. Gray writes that reality is essentially unknowable. There’s something “real” out there, but it’s always filtered through our experience which is, in turn, filtered through our perception, so any access we have to it is necessarily subjective.

Based on that experience, we form theories, judgments, and beliefs. These become the foundation for what we consider, ultimately, obvious. We can think of this as the motivation behind our motivated reasoning: the pre-existing views that we’re trying to be consistent with.

The key insight Gray brings to this model is the link between experience and our interpretation of that experience, or theories. This connection is made with attention. To highlight the importance of attention, Gray cites neuroscientist Manfred Zimmermann, who estimates that our capacity for perceiving information is about 11 million bits per second — but that our conscious attention has a capacity of about 40 bits per second.

Attention is essentially the bottleneck to our understanding of the world. Attention is the scarce commodity. It’s also an indicator of where our beliefs lie. I’ve come to think that attention is the bellwether of belief.

A “wether,” incidentally, is the castrated ram that leads a flock of sheep. A bell is placed around the ram’s neck to announce the arrival of the flock before it comes into view, hence a “bell wether.” The term is commonly used to refer to a trend indicator, and, in our case, it’s a fitting metaphor for the role of attention. Whether we like it or not, the things we pay attention to shape our sense of what is obvious and, by extension, what we accept as true.

As we’ve seen in the media lately, there are some common but still very effective ways to shape attention. In the weeks leading up to the recent American Presidential Inauguration (2017), cognitive linguist George Lakoff directed his own attention to one particularly flagrant example of this manipulation of attention and assembled a “taxonomy of tweets” authored by the 45th President of the United States.

The first item in Lakoff’s taxonomy is preemptive framing, or being the first to frame an idea. The example Lakoff provides is the assertion that “the hacking of the Democratic National Convention (DNC) was the DNC’s fault and that the Democrats lost by a wide margin.” The fact, of course, is that the margin was historically small, arguably because of hacking that targeted both parties but whose results were released only in the case of the Democrats.

Lakoff’s second entry is diversion, or diverting attention away from real issues. By way of example, he cites tweets that “diverted attention away from political conflicts of interest and Russian hacking and towards Meryl Streep’s speech at the Golden Globe awards.”

Lakoff also identifies deflection, or attacking the messenger and changing direction. The example he gives here is the President’s “attacking the media in an attempt to erode public trust and reframing credible stories as fake news.”

Lakoff further argues that it doesn’t matter whether you believe any of what is said. He writes that

“language that fits a world view activates that world view, strengthening it, while turning off the other world view and weakening it.”

The more the President’s views are discussed in the media, he continues, the more they are activated and the stronger they get, “both in the minds of hardcore conservatives and in the minds of moderate progressives.”

The worst part, according to Lakoff, is that all of this applies even if you’re attacking the President’s views. “It doesn’t matter if you’re promoting him or attacking him; you’re helping him.”

At first glance, this looks pretty damning: If even attempting to reframe an issue reinforces it as a thing when it shouldn’t be, how can we combat misinformation? And what does this have to do with us?

Luckily, there is more to attention than framing, diversion, and deflection. This is where people who structure information environments (i.e. information architects) come into play.

In his book Thinking, Fast and Slow, psychologist Daniel Kahneman writes about an experiment that tests how individuals allocate attention in information-rich situations. This experiment provides an illustration of the importance of structured information to the formation of belief.

The scenario is this: A cab is involved in a hit and run accident. There are two cab companies that operate in this city: Green Cab Company and Blue Cab Company. 85% of the cabs are green; 15% of the cabs are blue. A witness identified the cab in the hit and run accident as blue. But the court determines that under the conditions that night, the witness was capable of correctly identifying the two colors only 80% of the time.

The question for Kahneman’s test participants is this: “What’s the probability that the cab was blue?” Mathematically, there is a correct answer. It’s a simple Bayesian analysis: there’s a 41% probability that the cab was blue.
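For reference, here is that calculation written out, applying Bayes’ rule to the numbers in the story:

```python
# P(cab was blue | witness says "blue"), via Bayes' rule.
p_blue, p_green = 0.15, 0.85   # base rates of cabs in the city
p_correct = 0.80               # witness identifies colors correctly 80% of the time

# The witness says "blue" either because the cab was blue and they were right,
# or because it was green and they were wrong.
p_says_blue = p_correct * p_blue + (1 - p_correct) * p_green   # 0.12 + 0.17 = 0.29
p_blue_given_says_blue = (p_correct * p_blue) / p_says_blue

print(round(p_blue_given_says_blue, 2))  # 0.41 -- about a 41% chance the cab was blue
```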

The most common answer the participants gave, however, was that there is an 80% probability that the cab was blue. The base rate (the proportion of cabs in the city) is either overwhelmingly ignored, or undervalued.

Kahneman then reframed the information about the base rate, leaving all the details of the story the same, but listing the base rate as accident percentages. With this information, participants came much closer to the mathematical probability: a 41% chance that the cab was blue.

Kahneman argues that the reason for this is that in the first instance, where we are only told about the number of the cabs in the city, there is no way for an observer to fit that fact into the narrative of a hit and run. The information was seen, but it was ignored.

In the second telling, the information was attended to because it fits a causal story. Though each story is mathematically identical, psychologically, they’re quite different. There are only two pieces of data here: the proportion of cabs and the accuracy of the observer. Until both pieces come together in a cohesive narrative, however, we tend to cling to the simpler answer — the 80% probability that the cab was blue.

In contrast to the identical calculation a machine makes in both instances, our limited attention can effectively blind us to the complexity inherent in the world. The difference in these results is what Lakoff describes as direct vs. systemic causation.

Direct causation explains and deals with a problem via direct action. Lakoff writes that this is easy for us to pick out because it’s directly represented in grammar. Lakoff further notes that this is true for the grammars of all languages; they all have direct causation built in.

Systemic causation recognizes that many problems arise from the systems they are in and must be dealt with systematically. This is more difficult for us to pick out because it’s not represented in grammar — not in any grammar, as Lakoff notes.

As we can see from the cab example, a computer has no problem with either scenario. They’re mathematically identical. A human reading of the problem, however, is driven by attention. Attention, in turn, is driven less by what the facts are than by how they fit together in a cohesive narrative.

The current state of our networked information systems is largely in opposition to this characteristic of attention. Just like we saw when we looked at social networks, the links that send us from place to place are generic. They’re pointers. And they’re almost always empty of the connective tissue of narrative that our brains need in order to fit facts into our subjective realities and, crucially, to evaluate them as a result of systemic causation.

Many of the social issues we’re seeing now are at least in part a result of the way our information spaces are designed. Equipped with a better sense of where our information solutions scale to human needs and where they don’t, there are a few things we can do as designers of these information spaces to help our users better negotiate these disjointed and often overwhelming environments:

Remember that Attention, Belief, and Action Are Linked
To talk about the heuristics we use for action, or beliefs, as strictly true or false is, for humans, to force them into a computational mapping and often into direct causation. Beliefs either lead to actions that meet a desired outcome, or they don’t. When actions taken because of belief don’t lead to desired outcomes, there’s a crisis.

Watch for Cognitive Bias Exploitation
These are things like framing, diversion, and deflection. This also includes biases like loss aversion and anchoring. These kinds of biases, because of recent events, have been getting more attention in news media. Keep in mind, however, that these first order biases, while easier to pick out, are not any less effective for our having recognized them.

Be Aware of Which Conceptual Frames Are Being Activated
These are harder to spot. Sometimes we activate a frame by attacking it. As Lakoff and Kahneman note, the key is in being aware of how we’re directing attention, and what information we’re making available to the decision-making process. In our role as IAs, this can be as simple as paying attention to the systems created by the labels we use and how we categorize concepts.

As designers, we should also consider how we’re framing our users. Are we making assumptions about them based on metaphors involving processors, bandwidth, RAM, and hard drives? If so, are these frames helpful? Or are we just taking them for granted? As designers of information spaces it’s up to us to ensure that the models we use meet our and our users’ goals.

Designing for the Human Scale

So. I’m sure all of this feels like a pretty huge task. Probably much more than you thought you were signing up for as an information architect.

Our work as IAs, however, is essentially the work of modeling. Models allow us to interact with degrees of scale well beyond our natural abilities. Indeed, until we’re able to translate concepts into scales we understand, acting on them in a reasoned way is virtually impossible. As statisticians George Box and Norman Draper famously put it, “All models are wrong, but some are useful.”

As professionals designing information environments for human consumption, it’s up to us to ensure that the models we employ remain useful, and that when they’re not we suggest something new. When a model is as ingrained into our theories, judgments, and beliefs about what is obvious as the computational model of the mind, it can be particularly hard to shake.

For this reason, I’ve shown you some examples today that range from the relatively concrete to the fairly abstract. My goal has been to help you recognize places where the computational metaphor of the mind favors the aptitudes of the digital systems we’ve built over the aptitudes of our users, and in so doing fails to scale appropriately to the natural, innate, and powerful abilities we all possess.

I’ve also given you some tips on how to address each of these areas: from keeping in mind that our bodies form the core of how we experience, conceptualize, and communicate about our world; to realizing that we are extremely adept at situating quantities and concepts relative to what we already know; and, finally, to understanding that true and false mean very different things for digital and human agents.

We’ll likely always have some kind of metaphor to explain how our brains actually work. The way we conceptualize and design information spaces for humans must take into account both that metaphor’s insights and its limitations if we’re going to be effective for our clients and our users, and, ultimately, responsible to each other.

Thank you.

This article was originally presented as the opening keynote at World IA Day Zürich and was first published at andyfitzgerald.org, where I blog about how cognitive science, language, and meaning-making fit in with the practice of user experience architecture & design.

Andy Fitzgerald

Independent UX Architecture & Design Consultant, amateur cognitive linguist, occasional marathoner. www.andyfitzgeraldconsulting.com