Against Cognitive Bias: Tips and Tools for Design Managers
Improve user research with your leadership
Research enriches our work. Without it, our work does not deserve the label “user experience.” In research, we must be aware of our cognitive biases; anyone with a background in research knows this. Our brains can play tricks on us. This is not really a fault but a consequence of the marvellous way our brain is constructed. Every perception is a blend of what is really there and what the brain expects. Hence, humans can fail to make rational decisions because our brains take mental shortcuts. Psychologists started researching this topic in the 1960s. They have labelled dozens of biases since then; there are too many to counteract each of them individually, too many even to know them all by heart. What we can do is provide general guidelines that work against as many of them as possible.
What Strengthens Cognitive Bias?
If we want to avoid cognitive bias, the easiest way is probably to ask ourselves what strengthens it. Different biases occur for different reasons (Myers, 2013, ch. 3). The most important are …
- Time pressure. If you do not have the time to think things through, heuristic processing becomes more likely. The anchoring bias, for example, is strengthened by being rushed.
- Mental capacity. Several variables affect mental capacity; stress, noise, and fatigue are the three most likely in a business environment. Reduced mental capacity strengthens biases such as the confirmation bias, anchoring, or the bias due to the availability heuristic.
- Cognitive load. When you are preoccupied or do several things at the same time, some mental capacity is used up that you cannot invest in the task at hand. Listening to music can have that effect, too.
- Mood. Mood congruence effects also fall into this category: they make us spot something more easily if it fits our current mood.
- Incentives can be extrinsic (money, praise, critique, …) or intrinsic (striving for competence, need for popularity, …). Both may bias information processing in one direction or another. Humans sometimes interpret ambiguous information in ways that favour the best emotional outcome.
- Being expected to yield a specific result can distort your research. The experimenter bias is a famous example of that.
Now, does knowing about these causes already help reduce bias?
Unfortunately, awareness alone will not prevent bias. Forewarned is not forearmed (Kahneman et al., 2011). However, you cannot implement countermeasures when you are the only one who is aware of the problem. Thus, awareness is the first step.
In the best case, everyone on the research team is a skilled and experienced researcher and, because of that, already aware of bias. If not, teach about it. Teach about the causes, and about how you cannot escape bias but can alleviate it. Discuss countermeasures and include them in your approach. Train the team to describe events in a bias-free way by sticking to the facts and avoiding interpretation. This is extremely delicate and requires practice. Make sure team members can give each other feedback to support continuous learning.
If you invite visitors to your research sessions, you may already give them a briefing about expectations and roles. Include bias as an issue in such briefings. Everybody involved in gathering and interpreting research data should know about bias and the procedures to counteract them.
Use every possible occasion to teach about bias to everyone. Speak at department meetings, at in-house exhibitions, at the coffee maker, and at lunch. Widen the issue a little to make it interesting to a larger audience: research is not the only thing affected by bias; every single decision made in the company is. Managers will be interested in hearing that. We need anti-bias tactics to handle bias in the design process; we need user research as a tactic to handle bias in business development. Research suffers from bias; research resolves bias. It depends on how you look at it.
Not doing research is the last thing anyone should do. Growing experience does not bring you closer to your users, even if it feels that way. This perception is fed by another bias: the hindsight bias. It makes us believe that our past predictions were better than they actually were. The more experience we get, the more prone we become to thinking we “knew things all along” (Goodwin, 2010). Goodwin (2010) summarizes studies showing that our foresight does not necessarily improve with experience. No level of professional experience can replace user research.
Several established and well-known companies I worked for had the wrong attitude towards research. By setting the wrong expectations, organisations can go wrong in several directions.
- Research is not a weakness
The organization sees your research effort, at least in part, as a weakness. You can tell that is the case if your designers worry that you may find “too many” flaws in the prototype because that may reflect badly on their work. If so, the problem extends to a wider audience in your organization. In such a climate, research is not allowed to fail. It invites bias.
- Research lets a product fail (if it must)
Some organizations have somehow adopted the notion that user research on a product idea is the first marketing campaign. You are showing future ideas to potential users, and as such, everything must look good. In such a setup, a product is not allowed to fail. It invites bias.
- Research lets a product pass
Research can also come under pressure to justify the resources it receives. It justifies itself by spotting problems, by invalidating assumptions, or by unearthing chances for innovation. If it cannot provide these things, why do we need research? If researchers need to justify their positions through their results, how can they gather and analyse data without bias?
Teach the organization that research in general, and testing in particular, is an essential part of our work. Everybody else seems to be entitled to test their work. Software development reserves between 30% and 70% of its resources for testing, and it is not due to incompetent software developers that these tests are necessary. Every quality management framework in business demands that the output of our activities be evaluated. Hence, as designers we need to test what we design. Although it may sound simple, numerous teams struggle even to get permission for user research. And once they get it, the organization is not always ready for it.
Do not accept statements like “how could you not find that flaw yourself?” or “if you cannot design a product without so many flaws, we may have to find better designers”. When confronted with such criticism, you have to intervene. Defend the team and the research. It is not clairvoyance that makes a great UX designer; it is openness to user research and the courage to test one’s ideas. Talk about this in the team: what are the common killer phrases in your organization, and how can you respond?
However, it is not only the colleagues outside the UX team who must be convinced. Designers (and not only designers) have a tendency to fall in love with their own solutions. You need a user-centred attitude in the UX team. User feedback is not a threat to ideas; it is what makes the work truly valuable. If research contradicts an assumption, applaud the finding and encourage the team to use this feedback. Nobody on your team should ever feel bad about a wrong assumption. Do everything you can to make them feel good about being smarter than before. Spool (2018) provides principles for successful critique sessions that may help you achieve that.
Safety in Your Team
Biased thinking usually operates on a subconscious level, which makes it hard to spot cognitive bias in ourselves (Kahneman et al., 2011). It is therefore not advisable to do research on your own. In that sense, the concept of a user experience team of one is a myth. There is no such thing (though you may need that label for corporate communication). User-centred design thrives on the friction that exists between people. Bias is one of the reasons why it works this way.
Giving each other feedback is easier when people have no reservations about talking to each other. Build a culture of open and honest communication. You must dispel any fear that one will be punished for being human. You need trust in your research team (Delizonna, 2017).
As researchers we are trained to accept the views of our study participants. We are taught that every point of view is valid. We may not share it; we may even have trouble relating to it. But in research we see it as a challenge to overcome this, get past our limitations, and reach a new understanding. Do we treat our colleagues the same way? In my experience, we often do not, mostly because we need results by a deadline. It seems we do not have the time to empathise. Use your training within your team: ask “why” instead of responding with an explanation.
Be supportive if something goes wrong. Make it a reality that everyone can get any form of support at any time. “Replace blame with curiosity” (Delizonna, 2017). Focus on solving the problem and teaching better ways. Teach by being specific, just as this article tries to be. Advice should never be vague: if your colleague does not nod and say “I know what to do”, you have not achieved your goal. Do not just demand that your team be more like this or that. Be more innovative! Be creative! Be spontaneous! Every reader of this article knows such advice that is impossible to adhere to. If you act like that, you leave your colleagues in a dilemma: you demand something from them without giving them any idea of how they can meet that demand. That does not build trust.
Diversity Within the Team
Now we have awareness and a team atmosphere of safety. Building on that, you can look for diversity. Imagine your team like this: all members of the same gender and the same age, all at a stage of life where they have just started their families, with similar salaries, similar education, the same company, the same team. How likely is it that the colleagues in this team are all biased in the same direction? A group is more vulnerable to (so-called) groupthink when its members have a similar background (Myers, 2013, p. 294). Hence, make seeking diversity part of your hiring tactics. Use it as a tool to uncover hidden assumptions and, in doing so, the underlying bias.
Also encourage critical evaluation. Make the “devil’s advocate” a tool in your meetings and analysis sessions, and assign one. The devil’s advocate gets praise for critical statements. This tool is great for showing that you are serious about embracing diversity. The goal is to welcome the input of genuine dissenters, “which does even more to stimulate original thinking and to open a group to opposing views” (Myers, 2013, p. 294).
Outsiders Increase Diversity
Invite stakeholders to analyse the data together with you. Too often, research results are merely presented to the stakeholders. After you hand over the presentation, requests for more details follow. Researchers hand over the data and lose control over its interpretation. Your colleagues start drawing their own conclusions. Stakeholders begin to analyse details in the data without context, without knowledge of the research methods, and certainly without knowledge about bias. Your research gets a stain you can never wash off. I have seen it happen many times.
If you encourage stakeholders to participate in the analysis of your data, brief them, too. That way you will be able to guide your stakeholders when they catch a scent in your data and follow it. You also get a clearer idea of what might be interesting to them. It is a win-win with several positive side effects:
- The team effort helps alleviate bias.
- Your stakeholders get a richer picture.
- You get to know your stakeholders.
In any case, be careful whom you grant access to the raw data. If colleagues ask for more, invite them to investigate it together with you. Do not simply hand over your data.
Every research project should start with the assumptions. Everyone has some, and if you do not counteract them, the confirmation bias can seriously affect your study. Bring the team together and put the assumptions on the table. If you know them, you can handle them better.
- Discuss the context and collect assumptions. You can also review earlier findings.
- Operationalize support: what observations could you make that support these assumptions?
- Operationalize contradictions: what observations could contradict your assumptions?
Now you have the agenda for a research prep meeting.
There is a second way to challenge preconceptions. Many of them become visible in your company through stories: short anecdotes about users. Listen to the anecdotes people tell about your users; you should find a lot of them. Compelling anecdotes may seem more informative than base-rate information (Myers, 2013), and they can be useful if they have a basis in fact. But they can also be detrimental. Listen to them carefully and especially question generalizations. Some people tell stories about somebody in particular doing something particular. These are fair stories. But stories often develop a life of their own rather fast: “somebody in particular” multiplies into “that kind of people in general”. These are the stories your research has to counteract. And, believe it or not, a totally unfounded story can be very hard to disprove when it is diverting, funny to tell, or even absurd yet seemingly real. Be prepared for many painstaking discussions.
Go Even Deeper
Every one of us has something that is important to them. One person loves to be seen as competent. The next cherishes independence most and dislikes tasks with too many strings attached. A third loves clarity and hates it when things get confusing. Every person values some things more than others; we have different needs. Those needs influence how we process information, because they determine how we feel about certain situations.
Here is an exercise for research teams; every team should do it at least once a year. For this annual introspection you need a long list of nouns that describe things people value: independence, respect, honesty, integrity, sincerity, love, friendship, health, wisdom, clarity, wealth, power, looks, reputation, freedom, privacy, and so on. The list will be a lot longer. Now every team member picks seven items, the seven they would put at the top of their personal list: What is most important to me? What affects me the most when it is missing? These things may influence you when you conduct a study.
It helps to discuss this in the team. Put the items up one by one and ask everyone to imagine situations in their projects that may violate this need. This way the team does not only grow by learning about others’ professional experience. They learn what they are susceptible to and when it may make sense to look for a tandem partner in their project. As a final step discuss what countermeasures could be taken when someone faces any of these situations.
No Shortcuts During Analysis
Since time pressure strengthens biases, it makes sense to avoid it altogether. That is not easy in today’s business world. Time has always been valuable in business, and agile and lean initiatives seem to put even more time pressure on all activities (even though I believe that is actually a misunderstanding, it often happens in my experience). Colleagues and I have experienced the pressure to become faster, to be more “pragmatic” (as if we were not pragmatic already). So this section has to argue against common trends. It has to, because we only need research that is worth doing.
Biases like the gambler’s fallacy, the confirmation bias or the anchoring effect have less of a chance if the data is presented all at once (Barron & Leider, 2008; Hammond et al., 2006). Make a clear distinction between the different phases of the research process. Collect the data first — analyse it later.
Of course, that is difficult to do in qualitative studies. On the one hand, you want the facilitator and the note-taker to help analyse the data: unless we have a word-for-word transcription, video recordings, and enough resources to do the analysis with a fresh team, we want their first-hand experience in the analysis. On the other hand, that opens the door to all kinds of biases. To limit such effects, you can at least bring more people in to analyse the data: two acquire the data, three (or more) analyse it. This way you bring in someone who did not see the data arrive bit by bit. The different perspectives and different biases force a discussion about how to interpret the data correctly.
There is another shortcut I have seen become quite popular these days. A group of people watches the research sessions together. Everyone writes post-its during the session and puts them on the wall immediately. There is no time to think things through, no time for elaborate discussion, no time to try different groupings. This is the perfect breeding ground for all kinds of biases. If you do research to support important decisions, you should not use this approach at all.
A moderator can be used in many situations. The procedure by Kahneman et al. (2011) for reviewing the recommendations that result from the research is highly recommended. The moderator is not involved in the study up to the point you bring them in; Kahneman et al. (2011) even suggest choosing someone completely independent from the research team.
Every moderator should be aware of bias and trained in bias-suppression tactics, like everyone else on your team. Most of all, they should know the twelve questions Kahneman et al. (2011) devised to reduce bias in business recommendations. Together, the moderator and the team should go through a recommendation critique.
People tend to give general answers such as “always be empathetic!” Although it seems like a sound idea, how does one put it into practice? What do we have to say or do in order to achieve that? Such advice is too vague to be of much help. Another well-known piece of advice to avoid bias is to “focus on the data”. However, if data were unambiguous, its interpretation would not be affected by bias in the first place. Since it is ambiguous, how can “focusing on the data” help? The best interpretation of focusing on one’s data is to immerse yourself in what you have, without rushing or cramming. That can be put into specific guidelines: your schedules have to ensure that there is enough time and that resources are available.
Data is always a subset of everything we could have investigated. Of all the questions you may want to ask, you have to pick some. You have to phrase them. You can influence the answers through body language and other subtle signals (the experimenter bias; Myers, 2013). These examples show how useless such general advice can be. Being specific is an important countermeasure against bias in itself. Furthermore, design management needs specific, practical tools more than vague hints. Design management needs to be tangible to be understood in your company.
How Far Shall You Go?
If you follow all the suggestions above, you have quite a job to do. Your research will be more thorough, but bias is not the only challenge on your list: you will require more resources. How much effort are you willing to invest? And how much do you favour quality over speed? All the suggestions above enhance effectiveness at the expense of efficiency. Unfortunately, there is no clear line to guide your decision. There is not even a clear line separating the dirty in “quick and dirty” from trash. I believe this to be one of the reasons why we have such a hard time defending methodological diligence. How efficient your team must become depends on your business. What role does user experience design play in it? Is it more of a hygiene factor, is the user experience one of your unique selling points, or is it the driving spirit for innovation? Your company will be located somewhere between these poles. As a design manager, this is the first research question you should answer without bias: what role does design play in your company?
References and Further Reading
Barron, G. M. and Leider, S. (2008) “Making the Gambler’s Fallacy Disappear: The Role of Experience”, Harvard Business School Working Paper [Accessed October 14th, 2018]
Benson, Buster (2016) “Cognitive bias cheat sheet” [Accessed October 14th, 2018]
Delizonna, Laura (2017) “High-Performing Teams Need Psychological Safety. Here’s How to Create It”, Harvard Business Review, August 2017 [Accessed January 20th, 2019]
Edmondson, Amy C. (2018) “Safe to Fail”, OrganisationsEntwicklung, 3, p. 19–23
Goodwin, Paul (2010) “Why hindsight can damage foresight”, Foresight, 17, p. 5–7
Hammond, John S., Keeney, Ralph L., and Raiffa, Howard (2006) “The Hidden Traps in Decision Making”, Harvard Business Review, January 2006 [Accessed October 14th, 2018]
Kahneman, D., Lovallo, D., and Sibony, O. (2011) “The Big Idea: Before You Make That Big Decision…”, Harvard Business Review, June 2011 [Accessed October 14th, 2018]
Myers, David G. & Twenge, Jean M. (2013) “Social Psychology”, New York: McGraw-Hill, 11th edition
Spool, Jared M. (2018) “Four Elements of a Well-Done Critique” [Accessed November 1st, 2018]
It is possible to create a social environment that promotes bias so strongly that whole organisations become paralysed in a state of self-deception. Such examples are spectacular. More importantly, they broaden the horizon and sharpen the view of what can go wrong. I recommend “Willful Blindness” by Margaret Heffernan. The stories and reasoning are well researched, and Heffernan’s fluent storytelling makes the book an easy read.