The human mind is a wonderfully complex thinking machine. We’ve developed written language, built skyscrapers, and discovered quantum physics through our collective ability to plan and reason. But for all our intellect, like every other earthly creature we inherited mental circuitry that evolution optimized for a world where timeliness was more valuable than accuracy.
Optimizations often present tradeoffs. In many modern decision-making contexts, humans are predictably irrational. Many studies have empirically demonstrated these systematic deviations, which are also known as cognitive biases or mental fallacies. Neuroscience researchers have shown that cognitive bias is deeply rooted in the structure of our brains and is unlikely to go away for a very, very long time. So whether you consider it a bug or a feature, cognitive bias is part of what makes us human.
The next two sections provide some background on cognitive biases. If you’re already familiar or want to get straight to it, feel free to skip ahead.
Some background on your brain
Intellectual movements that led to the Enlightenment fetishized the human mind as a perfectly rational entity, capable of reasoning its way to the very existence of God himself. The rise of classical economics further cemented the idea that human decision-making is simply the output of a neat set of mathematical equations. It wasn’t until the 1970s, when the twin fields of cognitive science and behavioral economics were developed, that the bounding boxes of human rationality began to be drawn and understood.
In his seminal book Thinking, Fast and Slow, Daniel Kahneman, a major figure in behavioral economics, distilled decades’ worth of research and outlined the two modalities of thought humans use to form judgments and make decisions.
System 1 governs the majority of our natural and automatic interactions with the world. We rely on System 1 to recognize a friend’s face among a crowd or detect hostility in someone’s voice. When a thought pops up in your head out of nowhere, or you know the answer to something from memory, System 1 is at work.
System 2 is the slower, more deliberate side; it’s active when we plan our day, do long division, or count the number of times the letter ‘a’ appears in this sentence. System 2 operations are often associated with the subjective experience of agency, choice, and concentration, and we use it to explicitly decide what to think about and what to do – it’s what we identify with when we think of our “self”.
The slower mode of thinking done by System 2 isn’t easy or free; it carries real cognitive costs. Each time it is called upon, the brain draws from a scarce pool of mental energy that is later refilled during rest.
The phrase “pay attention” is fitting – our mind allocates a finite amount of attention to activities throughout the day. If you attempt to overspend your budget, you will likely underperform in some aspect. Most people cannot calculate the product of 43 x 32 while simultaneously parallel parking a car.
How cognitive biases arise
Much like your heartbeat, System 1 is always on. It effortlessly originates impressions and feelings that serve as suggestions for System 2 to accept or reject. As we navigate our normal lives and work, we allow ourselves to be guided by these mental shortcuts so we don’t have to formally reason through every single action or decision throughout the day.
Most of the time this mechanism of thought is accurate and efficient – this is what is commonly referred to as instinct or pattern recognition.
Cognitive biases arise when a mental shortcut generates an incomplete or inaccurate judgement.
Some categories of cognitive bias are easily avoided if System 2 is given ample time to reason through the situation. But because attention is short and System 1 is the default operation, errors of intuitive thought are quite difficult to prevent entirely. Further, in some cases misjudgments simply cannot be avoided because System 2 may be missing necessary information or lacks the raw processing power required. In any case, constantly questioning your own thinking during everyday life would be terribly tedious and inefficient.
Why is this important for people who make software?
Cognitive biases can be found in almost all contexts of human life, but we are especially susceptible when interacting with products and services on the internet. There is a dearth of attention and a massive surplus of distraction when someone spends time using their smartphone or computer. Your website or app likely competes with hundreds if not thousands of other things for user mindshare – you’re going to need all the help you can get.
I’ve codified several of the most common human cognitive biases so people who create products can design and build with them in mind. You can use them to create a stickier, more engaging product experience. I’ve tried to provide useful examples as well as add color to pieces of oft-cited product wisdom like offering free trials, putting media logos on your landing page, or creating ‘aha’ moments in user onboarding.
Common cognitive biases to keep in mind when building products
Anthropomorphism: Humans tend to ascribe human-like traits and intentions to non-human entities (e.g. animals, objects, abstract concepts).
It’s no coincidence that the term user experience design materialized concurrently with the mass adoption of the smartphone. Our mobile devices are incredibly anthropomorphic — we feel a great sense of attachment to them. We hold them in our hands, keep them close to our bodies, sleep next to them, and reach for them first thing in the morning.
We like things to be like us. We naturally feel more connected when products and services demonstrate a human element, and there is a feeling of genuine coherence when the user experience is consistent with a company’s brand and mission. Like humans, opinionated products that have a clear point-of-view are more interesting than products that look and act like everybody else.
Come up with a set of principles that help guide product and design decisions. A good heuristic for whether a principle is truly opinionated is whether a reasonable person could disagree with it.
Negativity bias: Humans have greater recall of unpleasant memories compared to positive ones.
A psychology professor named John Gottman conducted a now-famous study which found that the strongest predictor of success in long-term relationships is the ratio of positive to negative moments that a couple experiences during time spent together.
He defined a moment as the few seconds it takes our brain to store an experience in memory; moments have different intensities and can be labeled positive, neutral, or negative. A 5:1 ratio or greater means the couple has a good shot at staying together for the long run. The Gottman Ratio has been shown to predict everything from brand affinity to workplace satisfaction to divorce with remarkable accuracy.
Think of your user experience as a collection of moments. Is your product delivering at least five positive moments for every one that a user might find negative or annoying? Negative moments might include slow load times, bugs, an unintuitive flow, unwanted ads, or crappy content. Pay special attention to the Gottman Ratio during your product’s onboarding experience. If it’s too low early in the user journey, you will quickly lose people to churn.
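The 5:1 heuristic reduces to a trivial calculation once you label moments. The sketch below is a hypothetical illustration (the `gottman_ratio` function and the moment labels are my own, not from Gottman’s work): it counts the positive and negative moments in a user session and checks them against the threshold.

```python
def gottman_ratio(moments):
    """Ratio of positive to negative moments in a session.

    moments: list of labels, each "positive", "neutral", or "negative".
    Neutral moments don't affect the ratio.
    """
    positives = sum(1 for m in moments if m == "positive")
    negatives = sum(1 for m in moments if m == "negative")
    if negatives == 0:
        return float("inf")  # no negative moments at all
    return positives / negatives

# A hypothetical onboarding session: 5 positive moments, 2 negative ones
session = ["positive", "positive", "neutral", "negative", "positive",
           "positive", "positive", "negative"]

ratio = gottman_ratio(session)   # 5 / 2 = 2.5
healthy = ratio >= 5.0           # False: below the 5:1 threshold
```

In practice the hard part is instrumentation, not arithmetic: deciding which events (a crash, a slow load, a completed task) count as moments and how to label them.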
It’s a good idea to consider the 5:1 rule-of-thumb when discussing product tradeoffs. It’s often easy for teams to rationalize prioritizing something someone cares about internally over end-user satisfaction. Many times a creative solution that doesn’t compromise either one can be found. The most sustainable system is one where the company’s interests are aligned with those of its users.
Loss aversion: Losses loom larger than gains in our minds, all things being equal.
In other words, we may like to win, but we hate to lose. Studies in behavioral economics and decision theory suggest that losses are 1.5x to 2.5x as psychologically powerful as gains. If you propose a $100 coin toss, most people won’t agree to play until the payout for winning is somewhere between $150 and $200. A perfectly rational computer would take the bet every time if the payout were just $101, since the expected value is positive. Humans are willing to leave money on the table to avoid the possibility of losing, which demonstrates cognitive bias.
Like many other cognitive biases, the asymmetry between positive and negative expectations has an evolutionary history. In a competitive environment, organisms that treat potential threats as more urgent than potential opportunities give themselves a better chance of survival.
Here’s the tricky thing: the way something is perceived isn’t always so clear cut. You can often structure the same incentive to feel like either a gain or a loss. Many people will go to great lengths to save on a $50 fee (a loss) but are not equally motivated to obtain a $50 credit (a gain), even though in either case you are giving them $50 back on their purchase. The end outcome is the same, but framing matters.
The bias of loss aversion suggests that it is more effective to frame user incentives (monetary or otherwise) as a way to avoid missing out instead of getting something extra. You might induce loss aversion by setting user expectations that they will get some benefit “for free”, but that it might be lost if they don’t sign up before the offer expires. Loss aversion is what makes tactics such as 30-day free trials and limited-time sales so successful.
Reciprocity: Humans feel obligated to return favors, gifts, and concessions they receive from others.
Humans are fundamentally social beings – the principle of reciprocity is deeply rooted in our animal nature. Evolutionary biologists believe our early ancestors survived in the wild in large part by learning to share goods and services through an adaptive mechanism called reciprocal altruism. Researchers who study this behavior in a more modern context have found that gifts don’t need to be expensive or even material to be effective — information, small favors, and compliments can have a measurable effect.
Identify ways to offer interesting and relevant pieces of information, useful functionality, and unexpected delight in your product. You might use these small moments in your user journey as triggers to ask something of them, such as creating an account, opening permission settings, or referring their friends.
Peak-end effect: People tend to place higher weight on how they felt during the peak and the end of an experience when recalling it from memory.
A peak is described as the most intense positive or negative emotional point during an experience. An example would be hiking to the top of Half Dome — as you’re taking in the breathtaking views from the top it’s easy to forget the physical and mental strain it took to get there. Conversely, when thinking back on a really bad memory, say a road trip that ended in a serious car crash, it’s much harder to recall the positive moments of the trip that came before the accident.
The peak-end effect suggests that when prioritizing product strategy, it’s better to build a higher emotional peak rather than reduce the number of minor low moments. It’s especially effective to get people to an emotional peak in the first-time user experience – they may not come back if the lows are what they remember.
Josh Elman from Greylock has talked about getting your user to the ‘aha’ moment as quickly as possible.
Cognitive dissonance: The mental stress associated with performing inconsistent actions or holding multiple contradictory beliefs simultaneously.
People are motivated to reduce mental inconsistency and actively avoid thoughts, situations, and information likely to increase it. Humans highly prefer to follow what they perceive as their own pre-existing attitudes and values.
Try asking or collecting information from your users that you can remind them about later. For example, Uber might ask its drivers “What rate of surge pricing would make you drop what you’re doing and get out on the road?” and send someone a push notification when their threshold has been reached.
Airbnb might let a host know that average nightly prices for listings in their neighborhood are now more than $______ due to seasonal travel demand. Amazon might add a price widget on the product page that says “Notify me when the price of this item drops below $______”.
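The mechanics behind all three examples are the same: store a threshold the user has told you they care about, then fire a notification when the world crosses it. A minimal sketch, assuming entirely hypothetical values (the surge numbers and prices below are illustrative, not real Uber or Amazon behavior):

```python
def should_notify(current_value, user_threshold):
    """Fire a notification once a value the user said they care about
    rises to meet the threshold they set for themselves."""
    return current_value >= user_threshold

def price_drop_alert(current_price, target_price):
    """For a 'notify me when the price drops below X' widget,
    the comparison flips direction."""
    return current_price < target_price

# A driver said a 2.0x surge would get them on the road;
# surge just hit 2.1x, so the push notification goes out.
should_notify(2.1, 2.0)          # True

# A shopper asked to be told when the price falls under $25.
price_drop_alert(23.99, 25.00)   # True
```

The notification is effective precisely because it echoes the user’s own stated preference back at them; ignoring it would mean contradicting themselves.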
Goal gradient effect: Most people are willing to put more effort into achieving a goal the closer they perceive they are to it.
Research shows that providing a sense of progress towards a goal accelerates the rate of goal achievement and increases program retention. A study conducted by a group of researchers from Carnegie Mellon and Washington University in St. Louis looked at the contribution rate over time for individual loans on Kiva, a microlending non-profit. They discovered that the closer a loan was to its 100% funding goal, the more likely lenders were to make a contribution and the less likely they were to back out.
When trying to teach behavior in your product, first establish a ladder of goals for your user. As they make progress, support and motivate them with visualizations that show how close the next milestone is. When they reach milestones, create positive reinforcement by celebrating the achievement and offering a quick way to make headway towards the next one.
Social proof: When people are unable or unwilling to determine the appropriate way to behave, they copy the actions of others, assuming those others possess more knowledge of the situation.
Social proof is a form of conformity. At its best, social proof drives the optimal solution or behavior to be adopted quickly and efficiently. At its worst, it creates information cascades and results in herd mentality.
Creating social proof is useful whether you’re a startup looking for its first customers or an established company trying to optimize a conversion funnel. If your company lacks brand power, affiliate yourself with those who have sway, like respected media outlets, VC firms, and customers. If you’re trying to get users to adopt a new feature, show off user-generated content, impressive engagement numbers, or individual success stories.