Improve your user research by tracking a simple bias metric.

wayneraymond
5 min read · May 31, 2016

When it comes to user research, it’s important to make sure it’s carried out without influence or bias.

I sometimes work with different company departments who want to be involved in research days, help to interview participants and independently analyse the results. I always see this as a positive. It would be much worse if stakeholders or colleagues didn’t see the value in testing products and services with actual users.

However, it’s important to make sure that anybody involved in the research doesn’t arrive with their own agenda or influence the results. Involvement with a set agenda leads to inaccurate reporting and effectively invalidates the findings.

Let’s see how we can measure this…

Prerequisites

First, a few prerequisites. You should have already recruited participants and undertaken some user research, typically with 10–15 participants in a single day. Secondly, you should have recorded all the interviews. This approach applies to qualitative research, not quantitative research. Finally, be prepared to spend at least a week analysing the results. If your product or service depends on the results of this research, it’s important the analysis is done thoroughly.

1. Transcribe the interviews

This can seem like a long laborious process, but is it really?
Sure, it takes some time, but use this as a chance to absorb and reflect on the responses the participants have given you.

Edit: In the age of AI, there are plenty of services that will automatically transcribe your recordings. UserTesting.com has this built into its platform.

Realistically, you will interview or test your product with 10–15 people in one day, for around 10–15 minutes per participant.

Transcribing the interviews prevents anybody who analyses the research from hearing what they want to hear, and instead presents them with what was actually said. It reduces the scope for misinterpreting responses and cherry-picking sound bites without considering the full story.

Crack open Google Docs and begin transcribing your interviews.

2. Analyse the transcripts

Once the transcripts are complete, analyse them. Break them down.

A. Remove conversational dialogue.

Begin by removing any conversational dialogue. Surveying participants is best done when it feels like part of a natural conversation, but when it comes to analysis it’s time to remove the noise.

Consider conversational dialogue to be anything that isn’t a question of value leading directly to an answer or insight.

B. Highlight leading questions and responses gained through persuasion or influence in red.

Let’s look at two examples:

i. Do you have any goals, such as being able to lift more weights?

ii. What are your fitness goals?

In the first example (i.), the answer has been given as part of the question. It is very likely that the participant will respond by talking first about whether or not they want to lift more weights.

A better question is (ii.): find out what the participant’s fitness goals are and allow them to answer an open-ended question that you can then dig deeper into if you need to know more about their goals.

C. Highlight all open-ended questions in green.

Highlight all the questions and responses that were asked without influence or bias in green.

At a glance, you’ll see across the transcripts how much bias was in the research, but we’ll measure this properly once you’ve completed a peer review.
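The colour-coding can be captured in data as well as in highlighter. Here is a rough sketch of what an analysed transcript might look like once conversational dialogue has been stripped out; the structure and tag names are my own illustration, not part of any particular tool:

```python
# Each remaining question carries a tag mirroring the highlight colours:
# "red" = leading/biased, "green" = open-ended.
transcript = [
    {"question": "Do you have any goals, such as being able to lift more weights?",
     "tag": "red"},
    {"question": "What are your fitness goals?",
     "tag": "green"},
]

# Tally each category across the transcript.
red = sum(1 for q in transcript if q["tag"] == "red")
green = sum(1 for q in transcript if q["tag"] == "green")
print(red, green)  # 1 1
```

These tallies are exactly what the bias calculation in step 4 needs.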

3. Peer review

It’s time to share the transcripts and audio files. Do this as soon as they’re complete and invite stakeholders and colleagues to peer review the findings. This is a fair and open way to make sure you haven’t added any of your own influence into the analysis.

If you transcribed the interviews in Google Docs, you can enable commenting and allow peers to comment directly on the transcripts.

4. Weigh the amount of bias

Finally you can weigh the amount of bias in the research. There is a very easy formula for this.

biased questions / total questions = bias (%)

[25 biased questions / 100 questions asked = 25% bias in the research]

Take the number of biased questions, divide it by the total number of questions asked (excluding conversational dialogue), move the decimal point two places to the right, and you have your percentage of bias in the research.
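The formula is easy to automate once the questions are tagged. A minimal sketch, assuming each question has been tagged “red” (biased) or “green” (open-ended) as in step 2:

```python
def bias_percentage(tags):
    """Return the percentage of biased ("red") questions in a transcript."""
    # Only count tagged questions; anything else is conversational dialogue.
    questions = [t for t in tags if t in ("red", "green")]
    if not questions:
        return 0.0
    biased = sum(1 for t in questions if t == "red")
    return 100.0 * biased / len(questions)

# Example from the article: 25 biased questions out of 100 asked.
tags = ["red"] * 25 + ["green"] * 75
print(bias_percentage(tags))  # 25.0
```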

5. What to do with the results?

Don’t just report the results, and don’t use them to cause conflict across departments. Use them as a starting point to continually improve your research techniques and analysis.

Up front, if everybody knows that the results will be analysed openly and honestly, it will reduce the amount of influence and bias in the next round of research.

Over time, the metric can serve as a mark of progress and help you create a set of high standards by which to run your research. If you are continually testing your products and services, then surely it makes sense to check how well you are undertaking the research and make improvements to your own work too!

One last thing…

I have one final note when it comes to user research. In order to run a successful experiment, you should state a falsifiable hypothesis up-front before you start interviewing.

Whether you are testing a product or interviewing participants with surveys, the testing should be done consistently and repeatedly across every single participant, just like a scientific experiment.

If you do not ask the same set of questions or repeat the test in the same way each time, then you can expect a different set of results for every variant you make.

Whilst surveys and interviews should be as close to a natural conversation as possible, your research may be used to change the direction of a product or service; therefore, between the conversational dialogue, the questioning should be consistent. The same set of questions should be asked of every single participant so the results can be analysed fairly and without influence or bias.

In a future article, I’ll look at analysing the results of user research and reporting the findings back to stakeholders.

Good luck!


wayneraymond

UX Design & Consultation | All articles, opinions & reviews are my own & don’t reflect those of my employer or associates.