Cognitive biases are not the same as mental models

Chris Atherton
Published in Netlife
Nov 1, 2017

I’m teaching on an interaction design course, and thought it would be good to suggest some reading on the importance of mental models, and how we can discover and accommodate these in the design process. I was not prepared for how many articles there are out there which are ostensibly about mental models, but which seem to describe them as interchangeable with cognitive biases. I think these are different beasts; here’s why.

My favourite example of a mental model comes from working at Skype.

We ran a couple of usability testing days with people who were deliberately a bit older and less experienced in using technology than most of the people we typically tested with. We asked them to visit the Skype website and download the software to the desktop computer.

Then we asked them, Now what?

And several of them said, Now I use Skype on my phone.

We probed a bit more, and it made perfect sense: their mental model of Skype was software that you download to make phone calls with. (Perhaps the water was also muddied by the existence of smartphone apps, the understanding and downloading of which comes with its own set of challenges, mental models-wise.)

Mental models kind of have to exist.

We can’t absorb totally new concepts without leaning on established ones, to help us make sense of the new information:

Skype = computer + phone

It’s like Uber for [thing]

“Hey Siri …”

(Sidebar: I don’t mean to imply that all mental models exist in such simple terms. The more you know about something, the more sophisticated your model of how it works. If you’re interested, cognitive load theory has some things to say about our information-processing capacity, depending on whether we are novices or experts.)

Suggestion 1: Mental models are recognisable to the holder, and may even be explicitly described by them

Mental models are not always something we know we have; they may be held tacitly. We can infer the existence of someone else’s mental model by observing their behaviour, for example during usability testing. But mental models can also be interrogated or made explicit — for example by asking people questions, and getting them to perform sorting tasks to establish information groupings and hierarchy.

Sometimes, mental models are consciously accessible. People may volunteer their own mental models of how stuff works: a few participants at Skype said things like “Well, it’s like having a telephone on your computer, isn’t it”.

Mental models are explicitly experience-dependent, an artifact of the user’s knowledge and life experiences to date. Encountering a description of your own mental model should feel, in a way, like recognition.

Suggestion 2: Cognitive biases are distinct from mental models in that they cannot be perceived by the holder — at best, they can be theoretically understood as existing.

We think of ourselves as rational beings with full control over our actions and an accurate picture of events around us. This, sadly, is bullshit. Cognitive biases are systematic (and reproducible) failures in information processing. There are probably good reasons that we make these errors: maybe they reflect useful cognitive shortcuts that make evolutionary sense, saving us from the experiential equivalent of having to derive a statistical test from first principles every time we want to analyse some data. Unfortunately, an undesirable side-effect of such shortcuts is that we occasionally make stupid choices and do X thing instead of Y thing — and then we either don’t notice that we did it, or we justify our decision on fairly flimsy grounds.

As with mental models, we can infer the existence of cognitive biases by observing and measuring how people behave. But unlike mental models, I would argue that a fundamental characteristic of a cognitive bias is that, when confronted with a description of their own bias, the holder does not recognise it, but actively rejects it. Nobody likes the implication that their behaviour or cognition might be in some way skewed or inaccurate.

The good news is that awareness of cognitive biases, for example in hiring practices, makes it possible to design better processes that eliminate possible sources of bias. But even knowing about bias does not stop us from exhibiting it.

Suggestion 3: Confusion between mental models and cognitive biases might occur because some cognitive biases are experience-dependent

You can’t really have a mental model that isn’t based on prior experience. And as our knowledge of a thing develops, so do our mental models about it. We can acquire that experience vicariously: for instance, if I’ve never used Trello but am a big fan of agile software development, and you tell me that Trello is like an online agile post-it wall, I’m probably going to get the idea. I mean, the whole “It’s like Uber but for X” meme is based on this. But generally, mental models seem like an artifact of our experiences, however skewed.

And maybe some cognitive biases originate the same way, through our unreliable perceptions of our own experiences. I’m sure it’s possible to hold mental models about the world that are (ill-)informed by biased perceptions. Maybe this is where confusion and imprecise use of these terms occurs.

But what if some cognitive biases are entirely experience-invariant? That is, what if some biases only occur because of our historically-useful-but-occasionally-unhelpful-in-the-modern-world brain wiring? The idea has intuitive appeal, though that’s about as good a scientific red flag as I can think of (see Chip and Dan Heath’s Made To Stick for some good reasons why ‘sticky’ is easy to misinterpret as ‘true’).

And even if there are two distinct types of cognitive bias, how would we know? How would we tell them apart, scientifically and objectively? I chatted with Even Westvang about this, and he was like, How do I even google for research in this area without finding exactly what I went looking for, with all the problematic bias that implies? I mean.

So to sum up:

Mental models of how things work are inherently experience-dependent (though experiences can be shared with others), and are recognisable, if not always consciously accessible.

Cognitive biases may be experience-dependent, though some may be purely hard-wired. Regardless, cognitively-biased behaviour isn’t something we seem keen to acknowledge about ourselves.

I hope this makes sense. I’m still — science is still — figuring out how it all slots together. But that’s what I’ve got so far. Feel free to discuss in the comments.

Thanks to Adam Houser and Even W for input and discussion. Thanks also to everyone who replied to my Twitter thread, where I worked out some of these ideas a bit.
