How Does User Research Go Wrong?

James Friscia
8 min read · Mar 23, 2023

Confirmation Bias in Practice and How to Solve for It.

User research is an integral part of designing products that people want to use and buy. I have worked both in-house and as a consultant, for Fortune 500 companies and startups, for tech businesses and companies in more traditional industries. Without exception, the projects that succeed involve many rounds of user testing. Successful products are pre-vetted by aligning their designs with users’ expectations before going to market.

However, simply conducting the research is not enough. In fact, user research done poorly may lead to more harm than good.

Where Does User Research Go Wrong?

User research can go wrong in myriad ways. Below are some of the most common pitfalls I have observed:

1. Poorly defined research objectives: Unclear objectives may cause researchers to gather irrelevant data.

2. Inadequate or biased sample: Selecting a sample that is unrepresentative, or too small to draw conclusions from, can produce findings that do not apply to the target audience.

3. Generally poor design: Questionnaires, interview scripts, or other instruments that miss the mark can lead to inaccurate or incomplete data.

4. Poorly executed methods: The researchers themselves can ask questions in a setting, or in a manner, that skews the results.

5. Confirmation bias: Researchers can also unconsciously select or interpret data in ways that confirm their preconceived notions or hypotheses.

This last one — confirmation bias — can be particularly insidious.

I have seen Fortune 500 businesses lose tens or even hundreds of millions of dollars because they placed all of their faith in user research that was interpreted in a way that only confirmed their pre-existing beliefs.

And even the most experienced researchers can succumb to confirmation bias.

How Does Confirmation Bias Work?

In general, confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that supports our prior beliefs or values.

Confirmation bias in user research occurs when a researcher seeks out only the data that confirms their preconceived notions or hypotheses, while ignoring or discounting data that contradicts their beliefs. For instance, suppose a researcher believes that users ought to prefer a particular feature in a product (call it Feature A). They may then ask more questions, or collect more data, that favor Feature A over the alternatives, and they may avoid evidence that contradicts their belief in its superiority.

Confirmation Bias in Practice

We may look at the academic definition of confirmation bias and think, “I would never do that.”

However, confirmation bias can creep in for us all. Cognitive bias is so powerful, and its effects so sweeping, precisely because it is subtle and unconscious.

How does it happen?

Imagine for a minute that you are a product leader — someone who believes deeply in what you are building. You treat it as your baby. Naturally, you will come to believe that certain things are better for your baby than others. You may have had many meetings with team members or stakeholders where you are presenting your views (e.g. “Feature A is the correct next path”). You may even have sent Slack messages or had “offline” conversations trying to lobby others to share your perspective. This is a natural part of product development.

Because of this passion, which is normally a good thing, you are a prime candidate for confirmation bias to creep in. I would call this intrinsically motivated confirmation bias.

Perhaps even more common in the world of business is what I would call extrinsically motivated confirmation bias. It usually arises when incentives are misaligned, and I have seen it more times than I care to recall.

Here is one way I have seen this play out:

A big company has a high-profile initiative to launch a new product or service. Many of the people staffed to the initiative are expected to keep performing their current duties while also doing the new work. Why would anyone agree to this? Usually because they are explicitly told, or given to understand, that if the new project is greenlit, they will be fully staffed to it. In fact, they will likely all get promotions.

Fast forward. The team has worked extremely hard. But after several months, what they have produced simply does not resonate as clearly as they had hoped. Without nefarious intent, and possibly even unconsciously, they may start designing or interpreting user feedback to better fit the positive story for their initiative.

To quote Scottish novelist Andrew Lang, people “use statistics as a drunken man uses lamp-posts — for support rather than for illumination.”

Here is another example of extrinsically motivated confirmation bias. How common does this sound?

A leader in a company is not known as the nicest or most reasonable person. Whenever he receives unwelcome news, there are often bad outcomes for others. And this leader makes his views obvious and LOUD regarding how the company’s product ought to evolve (e.g. “Feature A is the one true solution”).

Then research is conducted with the goal of presenting an unbiased perspective on which new features ought to be included in the next release. And yet:

● The interview guide is written by an associate who reports up the hierarchy to that vocal leader.

● The manager conducting the interview is up for a promotion and cannot have bad news associated with them.

● The director presenting the results reports directly to the leader, is two years from stock vesting, and hates “rocking the boat.”

Ask yourself: what is bound to happen in the above scenario?

I have seen companies mitigate the above effects by having an external firm conduct all the research. Still, who pays their bills? Who will buy additional studies, or not, based on how well the first study is received by company leadership?

My Lived Example

I worked at a company that was making big decisions about the launch of its next product. Three or four internal researchers worked on the project. More than $150k was spent on external studies. Many stakeholders were motivated to triangulate user research results with other business intelligence to ensure a smooth launch.

The research came back with relatively clear perspectives on what was likely to work or not with our target audience. Yet, when creating the PowerPoint decks to present to the vocal leaders, the prime imperative did not feel like “truth.” Instead, the goal felt like presenting the results in a way that would not infuriate leaders or cause them to disregard the whole study (e.g. “Well, I don’t agree; this whole study must be flawed”).

Too often, the true “voice of the customer” was tamped down by the very loud, very powerful voice of leadership pursuing confirmation of their own ideas.

It took over a year for the product to come out, and it underperformed both internal and external expectations. A LOT of money and human hours were wasted.

So How Do We Combat Confirmation Bias?

The literature would tell you that to avoid confirmation bias, researchers should seek out diverse sources of data and actively look for evidence that contradicts their assumptions, while remaining open-minded and unbiased in their analysis of the data.

Okay, easier said than done.

Specifically, we can take the following steps.

1. Define research questions/objectives clearly: Rigorously define the research questions or objectives at the outset of the study to avoid being unduly guided by preconceived notions.

2. Use multiple sources of data: Triangulate data from multiple sources such as surveys, interviews, and observations to ensure a comprehensive understanding of the users’ needs and preferences.

3. Seek out contradictory evidence: Actively look for evidence that contradicts your assumptions and hypotheses.

4. Use open-ended questions: Ask open-ended questions that allow participants to express their views and opinions freely rather than leading or closed questions that may elicit specific answers.

5. Choose a diverse sample: Ensure that the sample is diverse and representative of the user population to avoid selecting participants who are more likely to confirm your hypotheses.

6. Conduct a pilot study: Conduct a pilot study to identify potential biases or issues with the research design before collecting data.

7. Use blind testing: Conduct blind testing, where participants are not aware of the researcher’s hypotheses or assumptions, to avoid any bias in the responses.

8. Be self-reflective: Be self-reflective and critical of your own assumptions and biases throughout the research process. (One way to make this measurable is sketched below.)
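
One way to put teeth into steps 7 and 8 is to extend the “blindness” to the analysis itself: have a second coder who does not know the team’s hypothesis label the same interview excerpts, then measure how well the two coders agree. Below is a minimal sketch in Python, using scikit-learn’s Cohen’s kappa; the coders, labels, and cutoff are illustrative assumptions, not a prescription.

```python
# Flag interpretation drift: a second coder, blind to the team's hypothesis,
# labels the same 10 interview excerpts as the primary coder. Low agreement
# on hypothesis-relevant codes is a signal to revisit the analysis.
# All labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

primary_coder = ["feature_a", "pricing", "feature_a", "onboarding", "feature_a",
                 "feature_a", "pricing", "feature_a", "feature_a", "onboarding"]
blind_coder   = ["pricing",   "pricing", "feature_a", "onboarding", "pricing",
                 "feature_a", "pricing", "onboarding", "feature_a", "onboarding"]

kappa = cohen_kappa_score(primary_coder, blind_coder)
print(f"Inter-coder agreement (Cohen's kappa): {kappa:.2f}")
# Rough rule of thumb (Landis & Koch): values below ~0.6 suggest the coding
# is not reliable enough to present as "what the users said."
```

Low agreement does not prove bias, but it is a cheap, quantitative prompt to revisit the analysis before the deck ever reaches leadership.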

However, I still believe these may not be enough.

How do we get around the misaligned incentives problem?

Artificial Intelligence Can Help Reduce Confirmation Bias

Is AI a panacea for unbiased research? No. In fact, if misused, it can introduce more bias. That said, Natural Language Processing (NLP) and other forms of AI can help flag or remove bias from research. [Full disclosure: my company is currently building an AI tool for research that does exactly this.]

What if the interpretation of what users said, and the generation of insight reports, were handled by a user research AI engine, built from NLP and generative AI models, so that researchers and product builders could make informed decisions quickly while minimizing their own bias?

Here’s a high-level summary of how the tool works (a code sketch of steps 3 through 5 follows the list):

1. You state your research goals.

2. You upload videos or audio files of your research.

3. Our AI engine transcribes them.

4. The AI engine identifies the key passages across all the interviews, then detects and classifies themes and keywords (and ranks them against the project goals).

5. The AI engine also detects user sentiment.

6. Finally, our AI engine helps summarize key insights, quotes, and video snippets for easy sharing.
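
To make the steps concrete, here is a minimal sketch of what steps 3 through 5 can look like, using open-source Hugging Face pipelines as stand-ins. The model choices, file name, and candidate themes are assumptions for illustration; this is not our production stack.

```python
# Steps 3-5 of the workflow above, approximated with off-the-shelf models.
from transformers import pipeline

# Step 3: transcribe the uploaded recording (audio decoding requires ffmpeg).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
transcript = asr("interview_01.wav")["text"]

# Step 4: score the transcript against themes derived from the project goals.
# Zero-shot classification involves no hypothesis-specific training.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(transcript[:2000],  # chunk longer transcripts in practice
                    candidate_labels=["Feature A", "pricing", "onboarding"])
ranked = list(zip(result["labels"], result["scores"]))  # sorted by score

# Step 5: detect user sentiment.
sentiment = pipeline("sentiment-analysis")
tone = sentiment(transcript[:512])[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.97}

print(ranked, tone)
```

The point is not that these models are infallible; it is that their ranking does not quietly drift toward the answer the room wants confirmed.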

The AI engine automates several tasks where bias may have crept in:

● Transcribing faithfully from video/audio.

● Utilizing large language models and other generative AI to determine which themes and keywords in the user statements are most relevant.

● Automatically pulling quotes and editing highlight videos that reflect what the AI engine identified as top themes.

● Creating a stakeholder presentation based on the top identified themes.
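
As a sketch of that last bullet, here is one way to draft the stakeholder presentation while keeping it anchored to the raw data. The OpenAI model name is an assumption, and the bias control lives in the prompt rules rather than in the model.

```python
# Draft a stakeholder summary grounded in the ranked themes and verbatim
# quotes produced upstream. The prompt forces verbatim quoting and at least
# one disconfirming finding. Data below is invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ranked_themes = [("pricing concerns", 0.41), ("Feature A", 0.33), ("onboarding", 0.26)]
quotes = [
    "I'd pay for this if setup took minutes, not days.",
    "Honestly, I never noticed Feature A.",
]

prompt = (
    "Draft a one-page research summary for stakeholders.\n"
    f"Ranked themes (with weights): {ranked_themes}\n"
    f"Verbatim quotes: {quotes}\n"
    "Rules: quote participants verbatim, keep the theme ranking exactly as "
    "given, and include at least one finding that contradicts the team's "
    "hypothesis."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```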

Of course, some of the magic in research interpretation can come from a researcher being creative, or from pulling an outlier insight out of the haystack. But does this mean that all insights and takeaways should be filtered only through the human researcher(s)?

Shouldn’t it be clear to all stakeholders what the raw data said versus what the researcher thinks it said?

I, for one, will always want to know what a passionate, experienced researcher believes the results mean. But I also want an AI helper like this, so that everyone has visibility into what a machine, without its own incentives, thought was most important.

If you are curious to learn more, contact me or sign up for our beta waitlist at CoNote.ai.

James Friscia

Director, advisor, and founder of digital platforms. CEO of CoNote.ai