Consilient Design
Habits versus Mental Models

Mental models become necessary when designs fail us.

Although it may seem like our conscious self is always in control, much of our behavior is automatic. When we repeat actions frequently in similar situations, they come to be invoked independently of conscious thought. These automatic behaviors, or habits, are an important feature of our cognitive machinery because they consume less mental (and metabolic) energy than deliberate conscious thought.

This mental feature explains why we can sometimes drive a familiar route and later be unable to recall details of the trip. It also explains how musical artists can play guitar and sing at the same time, and why at least some of us can walk and chew gum simultaneously. It is a superpower.

In contrast to habits, we also behave consciously, deliberately and intentionally. This mode gives us the power to solve problems and think creatively. It also gives us the power to modify or break habits. The conscious mode of thought is also a superpower.

The Nobel Prize-winning psychologist Daniel Kahneman and his colleague Amos Tversky, along with others, made huge advances in cognitive psychology through their research and theorizing about the unconscious, automatic and the conscious, deliberative modes of cognition. Although he didn’t coin the terms, Kahneman popularized “System 1” as the standard way of referring to fast, low-cost, automatic behavior and decision making, and “System 2” for slow, effortful, deliberate behavior and decision making, in his book Thinking, Fast and Slow.

When we operate devices, we use both System 1 and System 2. Yet when mental models are accessed, we are mostly talking about System 2 behavior.

I’ll illustrate why with an example that I discovered in my backyard.

An overly complicated system for supplying water to a garden

To connect a water faucet to both a garden hose and a DIY drip irrigation timer valve, I attached a hose splitter to the faucet. One side of the splitter goes to the hose and the other to the timer valve for the system that waters my flowers. Figure 1 is a conceptual model of this system.

The splitter has a valve on each side. For this system to work, the faucet valve and the splitter valve to the drip system both need to be left open, and the splitter valve for the hose needs to be closed until the hose is used.

An issue with this system is that my fast, unthinking System 1 causes me to make mistakes with this simple arrangement. When I want to supply water to the hose, my first impulse (a System 1 impulse) is to turn the faucet valve. I then notice that it is already open. The mild surprise causes System 2 to kick in, and I consult my mental model of the two valves on the splitter. So I open valve 1.

When I finish using the hose, I need to turn off the water. Again my first, and often only, impulse is to shut off the faucet valve. If I do this, it shuts off the hose but also the water supply to my drip system. If I don’t catch my error, my flowers soon wither in the 100-degree Texas heat.

The faucet looks like any other external water faucet that I turn on or off without much thought. Because I have used conventional faucet spigots over untold years, the act of getting water to flow has become habitual. It is under the control of System 1. The valves on the splitter are less familiar and less salient. They require the operation of System 2 to suppress my System 1 tendency to use the faucet valve. I need to consciously consult my mental model of the system to achieve the desired results.

The optimal solution to this problem is simple: split the water supply upstream of both the hose valve and the irrigation timer so that each operates independently. This would leave the faucet valve for the hose, and the timer would work on its own supply. Unfortunately, in my case that would require a plumber. Once again cost defeats usability.
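The difference between the two plumbing layouts can be sketched as a toy model (the names and structure here are my own illustration, not from the original): water reaches an outlet only if every valve on its path is open, so in the shared-supply layout the faucet valve is a single point of failure for both branches.

```python
def flow(*valves_open):
    """Water reaches an outlet only if all valves on its path are open."""
    return all(valves_open)

# Layout 1 (the problem case): one faucet feeding a splitter.
faucet, hose_valve, drip_valve = True, True, True
faucet = False  # shutting the faucet to stop the hose...
hose_gets_water = flow(faucet, hose_valve)  # False, as intended
drip_gets_water = flow(faucet, drip_valve)  # False -- the flowers wither

# Layout 2 (the fix): supply split upstream, each branch has its own valve.
hose_supply_valve, drip_supply_valve = True, True
hose_supply_valve = False                    # stop the hose...
drip_still_waters = flow(drip_supply_valve)  # True -- drip is unaffected
```

The sketch makes the design point concrete: in the second layout, the habitual "turn one valve" action maps onto exactly one outcome, so System 1 can be trusted with it.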

I have two other examples that show how design that ignores habit can cause errors, and how procedural knowledge is bound up in mental models.

Figure 2 shows the gas gauge in a rental van. It has two design flaws. First, the orientation of the Full and Empty markers is flipped from the normal convention of Empty on the left and Full on the right. Second, the needle covers the bottom of the E in Empty, making it appear to read “Full”. It is easy for a hapless driver to hop in, glance at the gauge and surmise that the tank is full when in fact it is empty. Repeated experiences with gas gauges produce the decision habit:

if the needle points to the right, I’m good to go.

If the driver doesn’t engage in the System 2 behavior of carefully studying the instrument panel before driving off, he or she will get a rude surprise when the gas soon runs out.

Figure 2. Gas gauge in a rental van.

Figure 3 illustrates yet another case of poor design in which habits conflict with a user’s expectations. Many elevator controls (and controls in general) now integrate labels with buttons, using a principle known as stimulus-response contiguity. This convention relies on an evolutionarily fundamental aspect of behavior called stimulus substitution. The stimulus (the symbol for the number 2) comes to stand for the behavior (telling the machine to take me to the second floor):

locate the control representing your goal.
activate that control.

Stimulus substitution is a very natural way to interact with the world. It is a bit unnatural to direct actions to a stimulus that is separated from the goal. This is why babies learn to eat with their fingers before they learn to use cutlery. Indirect action requires the additional cognitive machinery of System 2 until a habit is formed and System 1 can take over, saving a bit of mental effort.

This control panel is in an elevator in the medical clinic that I visit once a year for my physical. To get to the second floor, I invariably press the high-contrast, round label “2” to the left of the lower-contrast, round actual buttons. The round label doesn’t depress, I dope-slap myself to activate System 2, and then I press the button to the right. When habit fails, I must fall back on a set of mental model rules:

locate the label representing your goal.
locate the control nearest to the label.
activate that control.

If I took the elevator daily, I might learn to press the silver button first. However, without repetition, my more primitive stimulus-substitution behavior continues to dominate, so once a year I repeat my mistake.

These and other examples lead me to think of mental models as stopgap measures that are used when intuition and habit fail us. You don’t need much of a mental model for controls that follow pervasive conventions and map goals directly to stimuli. You don’t need to build much of a mental model when there is a high degree of stimulus-response contiguity.

These are admittedly trivial examples. Of course, absolute simplicity is rarely achievable, and designers will continue to need to anticipate the formation of user mental models and design to accommodate them. On the other hand, how often do we as designers intentionally strive for the development of habits, lessening the need for users to build and rely on mental models of how things work?


Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.



Jim Lentz


UX research and design psychologist with interests in the relationship between humans and society, decision making, creativity and philosophy.