People make unpredictable decisions... But we should design for their best interests anyway.

Kaila Snyder (Manca) · Published in NYC Design · Oct 7, 2018 · 13 min read

Note: This piece is based on a talk Jay McCormick and I gave at the 2018 IA (Information Architecture) Summit in Chicago. Please skip to the bottom to view the slides or check them out on LinkedIn.

Understanding relativity, salience and heuristics will help you understand choice architecture.

The practice of influencing choice by changing the manner in which options are presented is called “choice architecture,” a term coined by Thaler and Sunstein (2008).

When we (humans) are presented with a problem, the way in which that problem is presented influences the choice we make — therefore, as user experience professionals, the way in which we design has a direct impact on the behavior of our users and consumers.

Details such as the manner in which attributes are described, the number of choices presented, and the presence of a “default,” can all have a subtle influence over behavior and decision-making — not to mention the layout, content, context, and range of choices offered.

We can use relativity, salience, and heuristics as a frame of reference to better understand behavior and choice architecture.

Context shapes behavior

The classical school of thought says that people are consistent. The idea is this: if you have a bubbly personality, or a penchant for thrill, that will come through in your behavior, no matter whether you are in the office or in the pub with some buddies.

This theory is the opposite of what choice architects believe: they say that behavior is dependent on context, and that people are systematically biased. Choice architects believe that your context directly shapes your behavior — so you may feel and act differently in the office than you do at the pub. There are three relevant components to creating context: relativity, heuristics, and salience.

Relativity

Dan Ariely (a behavioral economics researcher) asked a class of students, “How much would you pay me to read poetry? Would you pay me $10?” He then asked another class of students, “How much would I have to pay you to listen to me read poetry? Would you accept $10?”

The first class said they would pay him $10. The second class said they would accept $10 in payment. It was the exact same experience he was offering to each class — reading poetry — but the way in which he framed that experience was what indicated its value, convincing one class that it was worth paying for, and the other that they needed to be paid to sit through it.

When we say that choices are relative, this is what we mean. Choices are relative to the way they are framed.

Another example: two similar European countries have extremely different rates of citizens who choose to be organ donors. Germany uses an opt-in system for organ donation, meaning that people must check a box saying “Yes, I will be a donor,” and 12% of German citizens are donors. Austria requires people to “opt out” of organ donation: if you choose not to check the box that says “opt out,” then you are automatically a donor. Due to this policy, 99% of Austrians are donors.

How could something as trivial as a checkbox make such a large statistical impact?

Presence of a default

Well, the presence of a default has an outsized bearing on user decision-making, because it is the choice of lowest effort, essentially making the decision for the user. This is one of four frames that have a significant impact on relativity when we talk about user behavior.
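To make the mechanics concrete, here is a minimal sketch (React with TypeScript; the component name, prop, and copy are hypothetical, not taken from any real donor registry) in which a single default value decides what the lowest-effort path produces:

```tsx
import React, { useState } from "react";

// Germany-style form: defaultDonor = false (the user must opt in).
// Austria-style form:  defaultDonor = true  (the user must opt out).
type DonorCheckboxProps = { defaultDonor: boolean };

export function DonorCheckbox({ defaultDonor }: DonorCheckboxProps) {
  const [isDonor, setIsDonor] = useState(defaultDonor);

  return (
    <label>
      <input
        type="checkbox"
        checked={isDonor}
        onChange={(e) => setIsDonor(e.target.checked)}
      />
      I am willing to be an organ donor
    </label>
  );
}
```

A user who never touches the checkbox ends up with whatever `defaultDonor` was set to, which is exactly why the German and Austrian numbers diverge so sharply.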

Besides the presence of a default, the other frames that impact relativity are: whether something is free, whether you provide a weak alternative, and whether there is social/individual pressure.

We will break each of these frames down individually to talk about how they disproportionately influence user behavior.

We love free stuff

Let’s talk about the “free” frame first: people generally over-value free items. Think about the last time you were offered something free-of-charge. Did you accept it? Did you actually need or want the item offered?

Think about the last “National Pancake Day,” when thousands of people waited in line at IHOPs across the country… just to get a single free pancake. Would they do this on any other day?

In Ariely’s book, Predictably Irrational: The Hidden Forces That Shape Our Decisions, he says: “Most transactions have an upside and a downside, but when something is FREE! we forget the downside. FREE! gives us such an emotional charge that we perceive what is being offered as immensely more valuable than it really is.”

He goes on to describe a study he ran with 398 MIT students, intended to measure people’s reaction to two different products: Hershey’s Kisses and Lindt truffles. Lindt truffles are significantly more expensive than Hershey’s Kisses under normal circumstances. However, when the students were asked to choose between a free Hershey’s Kiss and a steeply discounted Lindt truffle (the total “savings” being much greater for the truffle), most students opted for the free Kiss, even though it was not objectively the better deal. Compared to “not free” (the Lindt truffle), the Hershey’s Kiss seemed like the better option.

My cheaper breadmaker must be better than your expensive breadmaker

Another frame that exerts a strong pull over our objectivity is the “weak alternative.” The best way to understand this frame is through a real-world example: in the 1990s, Williams Sonoma came out with an “innovative” brand-new product, the bread machine, priced at a hefty $275. Sales were virtually nonexistent. Clearly, customers didn’t know what to make of the bread machine. They had no context for comparison; they apparently didn’t know 1) why they would want to make bread or 2) why they would pay $275 to make their own bread. However, instead of completely scrapping the product, what do you think Williams Sonoma decided to do?

They introduced ANOTHER bread machine, priced significantly higher than the first!

And what happened? The original (now lower-priced) bread machine started selling like crazy. This decoy effect, as it’s referred to in the consumer behavior field, caused people to anchor their perceptions of one model of bread machine on the other. Even when they didn’t know much about bread machines, they were attracted by the prospect of a good deal. Since they were choosing between two bread machines, they thought, “I can buy this $275 bread machine, and it’s almost equal to the $415 bread machine, therefore I must be getting a deal.” The more expensive, weaker-alternative bread machine showed them the value of the original.

Attorneys and volunteer work

The last frame to discuss is the frame of social/individual pressure. This frame can also be explained through an example.

In a study, attorneys were asked, “Will you offer discount prices to senior citizens in the community?” and the vast majority said no. When they were asked, “Will you donate your time to assist senior citizens in the community?” the vast majority said yes (even though this would make them less money overall). Why? Because they didn’t like the perception that they were lowering their value. Although they were willing to help out seniors, they preferred to frame that help as a donation of their time rather than as a discount that devalued their work.

So how can we, as researchers, find out people’s preferences if they are shaped by our asking that very question? How do we design, knowing that choices are relative?

Since we know that user preferences are variable, as UX designers we should keep in mind that the way we present information (and when we present it) will have a significant impact on user decision-making.

When we design, it pays to keep these four frames of relativity in mind. Example: if we are designing for Dropbox, and the company offers a “free” plan alongside two “paid” alternatives, we should be aware that users are highly likely to opt for the free plan above all else, and then think about how to design for optimal sales. Or, if we want users to successfully sign up for our service, perhaps it makes sense to leverage the “presence of a default” and autofill as much information as possible.
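To illustrate that last point, here is a rough sketch (React with TypeScript again; the `KnownProfile` type and the field names are hypothetical) of a sign-up form that pre-fills everything we already know, so the lowest-effort path is a completed sign-up:

```tsx
import React, { useState } from "react";

// Hypothetical shape of whatever we already know about the visitor.
type KnownProfile = { name?: string; email?: string; country?: string };

export function SignUpForm({ known }: { known: KnownProfile }) {
  // Pre-fill the form with everything we already know, and fall back to a
  // sensible default where we don't: the default does the work for the user.
  const [form, setForm] = useState({
    name: known.name ?? "",
    email: known.email ?? "",
    country: known.country ?? "United States",
  });

  return (
    <form>
      <input
        value={form.name}
        onChange={(e) => setForm({ ...form, name: e.target.value })}
        placeholder="Name"
      />
      <input
        value={form.email}
        onChange={(e) => setForm({ ...form, email: e.target.value })}
        placeholder="Email"
      />
      <input
        value={form.country}
        onChange={(e) => setForm({ ...form, country: e.target.value })}
        placeholder="Country"
      />
      <button type="submit">Sign up</button>
    </form>
  );
}
```

The fewer fields users have to fill in themselves, the less effort stands between them and finishing the sign-up.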

Designing for relativity

With this knowledge in mind, when I think about a new design problem, I take the following steps:

  1. Identify the challenges and “anchors.” (Then do what you can to ease them.)

2. Provide other reference points.

What else might the user want to know? What can I use to show the value of my “breadmaker?”

3. Highlight points of contrast.

Why is my more expensive Dropbox plan worth it?

Heuristics

The second concept that creates context in our framework was popularized in UX by Jakob Nielsen and explored in depth by Daniel Kahneman in Thinking, Fast and Slow.

Heuristics are essentially shortcuts. They are frames of reference that humans use to make quick and efficient decisions.

We make so many decisions in a day (“What will I wear?” “What should I eat?” “What route should I drive to work?” and on and on…), that our brain needs to find faster, more efficient ways to come to conclusions. Enter heuristics.

Oftentimes, heuristics are just shortcuts to an easier question. For instance, instead of interviewing top applicants for a job and asking “Who has the best set of qualifications?” we might substitute the question “Who do I like the best?” As with many heuristics, this substitution can be unconscious: we may not even realize that we are performing it.

Case Study at Google

One example I like of how to use and design for humans’ propensity for heuristics is some research done in Google’s data-driven cafeteria.

Google noticed that employees were consuming a LOT of calories in unhealthy (free!) food at its cafeteria, and that many employees subsequently reported weight gain and/or a desire to eat more healthily. While working out how to help employees achieve this goal, Google identified the following heuristics around food consumption and used them to change the cafeteria environment:

1. Location: People tend to fill up on the first thing they see when they enter the cafeteria.

So: Google swapped out the dessert table (one of the closest tables to the door) with the salad bar.

2. Size: People tend to finish what is on their plate, regardless of the size of the plate. People tend not to want to take more than one plate, or ‘go back for seconds.’

So: Google replaced their large plates with portion-sized plates.

3. Color: People don’t always read nutritional information, so they are not always aware of the calories and overall health impact of what they are eating. However, people are likely to understand the implicit meaning of a color scale (green, yellow, red) because it is familiar (we see it at traffic lights, in intersections, etc.).

So: Google labelled all of the foods in the cafeteria according to an easy, three-color scale: Green=healthy, Yellow=moderate, Red=eat in small quantities. (A small code sketch of this labeling logic follows this list.)

4. Ease of Access: People are more likely to eat (more) food if it is easily accessed.

So: Google took candy and other “red” foods and re-homed them. Instead of putting them in gravity dispensers, they put them in tight-lidded jars, so people had to reach in to get the candy rather than pull a lever and have it drop into their bowls.

All of these changes made a HUGE impact.
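Here is what that color-labeling heuristic might look like in code, as a minimal TypeScript sketch; the calorie thresholds are invented purely for illustration and are not Google's actual criteria:

```ts
type TrafficLightLabel = "green" | "yellow" | "red";

// Map a food's calorie density to a familiar traffic-light label.
// The cutoffs are hypothetical, chosen only to illustrate the idea.
function labelFood(caloriesPer100g: number): TrafficLightLabel {
  if (caloriesPer100g < 150) return "green";  // healthy: eat freely
  if (caloriesPer100g < 300) return "yellow"; // moderate
  return "red";                               // eat in small quantities
}

console.log(labelFood(120)); // "green" (e.g., a salad)
console.log(labelFood(450)); // "red"   (e.g., candy)
```

The design work is not the function itself; it is choosing a scale people already know how to read.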

By understanding heuristics in decision making, we can use design to make complex problems (like this one) simpler.

Designing with heuristics in mind

  1. Understanding what heuristics people are using

What shortcuts have they taken to reach their decisions? Understanding these shortcuts helps us predict their decisions before they make them.

2. How can I think for my user so they don’t have to?

Predicting the decisions that users are likely to make allows us to do work to either support or challenge that decision.

3. Make it clear [and scannable].

Deliver your solution in a consumable way.

Salience

Salience is the last element of this context system, and simply refers to what is obvious or what we pay attention to. When we are given a slew of information, what pieces do we remember, pay attention to, and hang on to?

Other ways I like to describe salience are as “the pattern that pops out,” or “WYSIATI: what you see is all there is.”

Here’s an example of how humans mistakenly jump to conclusions by unconsciously prioritizing salient information:

If you are walking around in downtown NYC, feeling a little hungry, and you see tons of people lined up outside a pizza restaurant, then you might assume: that pizza is good, everyone is getting pizza. In fact, not everyone is getting pizza; you only see the people who are.

A better example: after disasters, like the September 11 terrorist attacks, people think, “The Red Cross needs blood donations,” and turn out in record numbers to donate blood (far more than they typically do during blood drives). Due to this turnout, the Red Cross often has to discard some of that blood: it’s actually too much. Red blood cells have a shelf life of 42 days, and the Red Cross can’t possibly do enough transfusions to merit the amount donated. In the days after September 11, people donated 3–5X more than normal, but blood needs were minimal — meaning that the salient information to people was “there was a disaster, people are injured, they probably need blood transfusions,” while the actual data and blood requirements were ignored.

This is referred to as pattern mismatching.

In design, we need to spend time thinking about the problems and solutions we present, and how salience plays into those decisions. There was a study where people were handed two dictionaries, one tattered and the other brand-new, and asked what their value was. Those in the study overwhelmingly valued the brand-new dictionary more than the tattered one. However, when those same people were given additional information about the dictionaries (that the tattered one had 20,000 entries and the brand-new one only 5,000), their answers changed. The salient information when evaluating a dictionary’s value is clearly how much information it holds (how many entries it has), but without that information people could not make an informed decision. This is why, in the design of your products and in your usability sessions, if, how, and when you present salient information can change your results.

Ethics

This topic brings up some more complicated moral questions about how we inform users. I could write another article entirely about that; however, assuming we want users to have as much salient information as possible, we can:

Designing for salience of information

1. Facilitate easy comparison and provide other reference points.

What information about this dictionary should I be evaluating? “The average number of entries in a dictionary is 1,500.” (See the sketch after this list.)

2. Give a sense of control when possible.

Crosswalk buttons don’t always do anything to make the light change. However, as a user of the crosswalk, even if I am not actually getting across the street faster, being able to press the button and feel in control makes me feel involved in the process.

3. Show what is salient [and not more]

4. Highlight points of contrast

This dictionary has 2000 entries, not 1000!
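To make that first point about reference points concrete, here is a minimal sketch in plain TypeScript; the `describeWithReference` helper and its attribute shape are hypothetical, but the numbers come from the dictionary example above.

```ts
// Hypothetical attribute: a value plus a reference point for comparison.
type Attribute = { label: string; value: number; referenceValue: number };

// Present an attribute next to a reference point so users can evaluate it
// in context rather than in isolation.
function describeWithReference({ label, value, referenceValue }: Attribute): string {
  const comparison = value >= referenceValue ? "above" : "below";
  return `${label}: ${value.toLocaleString()} (${comparison} the average of ${referenceValue.toLocaleString()})`;
}

// The tattered dictionary: 20,000 entries vs. an average of 1,500.
console.log(describeWithReference({ label: "Entries", value: 20000, referenceValue: 1500 }));
// "Entries: 20,000 (above the average of 1,500)"
```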

Summary & Conclusion

We have shown that decision-making is a highly malleable process, and that users’ preferences are not as firm as we may have believed. Decision outcomes are actually largely contingent on the environment or context in which those decisions are made.

  • People’s choices are relative.
  • People use heuristics to make those choices.
  • By understanding heuristics, we can guess at and influence choice.
  • If you know what the best choice is, highlighting what is salient helps people make that choice.

Design has a critical impact on decision-making. Decisions lead to actions, and user actions drive everything.

As designers, we’ve gotten really good at making processes efficient, perhaps without considering what other information users need to complete that process. We may have made it easy to buy, but is it easy to choose?

So, what do we do with this information? Let’s break it down.

  • User goal: get best decision outcome, ideally with the least amount of effort.
  • Inherent conflict: typically, the best outcome comes from putting in more effort, not less.
  • Designer shortcut: try to do the thinking for your user, whenever possible. This promotes simpler and more positive experiences as users accomplish their goals, and allows them to use these shortcuts to their advantage rather than to their detriment.
  • Since we know that choices are relative to context, be mindful about setting that context. Try to design to support what users SHOULD choose, not just what they COULD choose. (There is a ton of responsibility involved here!)

Translating this to Design

Here’s an example of providing users with more and better information, or “choice architecting” your design.

Here’s our starting point.

Original “Not Architected” Design

How do we make it better? Let’s remember some of the core principles we discussed above.

Relativity

  1. Identify the challenges and “anchors.” (Then do what you can to ease them.)

2. Provide other reference points.

3. Highlight points of contrast.

Heuristics

  1. Understanding what heuristics people are using

2. How can I think for my user so they don’t have to?

3. Make it clear [and scannable].

Salience

  1. Facilitate easy comparison and provide other reference points.

2. Give a sense of control when possible.

3. Show what is salient [and not more]

4. Again — highlight points of contrast

Here’s our finished and “choice architected” solution, arrived at by going through the steps we just discussed:

Information Architected Design

Slides?

This talk is summarized on my website here. The presentation slides are also hosted on LinkedIn.

Thanks!

Thanks for reading! I’d love to hear your thoughts about this framework in the comments. Or, if you’d rather reach out to me and start a conversation, you can send me a message or check me out on LinkedIn.
