A Guide for User Research Observers

In my research practice we have been moving away from the formal usability test model: 10–12 participants, copious notes, researcher-performed analysis, and a report presented to stakeholders. Instead we are trying fewer participants, stakeholders taking the notes, group analysis at the end of the study, and no report. So far it is working well. Stakeholders are engaged and we can run studies faster.

I have been coming to terms with the loss of control in these leaner studies. They trade some rigor for speed and stakeholder buy-in. It’s a challenge, but I think it is ultimately worth it.

One aspect of this new method I am still “working through” is facilitating the observation room. Many of the observers have never watched a usability test before, so they perform “rookie” analysis. Some are sensitive to the issues we may find and will attempt to discredit the study or the participants. Others micromanage the session to ensure we ask the questions they want answered. The tension can be infectious, and the discrediting and micromanaging are frustrating and can make us feel defensive.

It is up to us as researchers to educate our observers and empathize with them. We should provide a safe environment where discovering problems is considered a victory, not a design failure, and encourage observers (and ourselves) to withhold judgement on negative findings. Newbies will perform flawed analysis if we don’t tell them what to look for and what to ignore. We need to find a balance between satisfying stakeholder requests and earning their trust. Show them we are the research experts and are working to get the most accurate and impartial data for the team.

To achieve this goal I have been considering different approaches: a presentation, or a quick protocol to go through with observers before the study begins.

I give you my first draft of an observers’ guide. Please let me know your thoughts and suggestions…

Observers’ Guide

  1. Participants will struggle sometimes, and that is OK. We are looking for improvements we can make before release. Discovering problems during testing is a win, not a failure.
  2. Don’t get distracted by problems experienced by a single participant. They could be an outlier, and fixing something for them may make things harder for the majority of users.
  3. Focus primarily on the participant’s behavior. Sometimes what they say does not match what they do. Their behavior is more reliable.
  4. Be skeptical of hypothetical statements. When participants say “My mom would not like this” or “I would use this if I…”, they are speculating; these statements are products of their imagination, not observed behavior.
  5. Participants are not designers. When they give specific design suggestions, it is more important to capture the problem they are trying to solve than the specifics of their proposed design.
  6. We are here to observe, not design. We will hold design sessions after we have seen all the participants. If we design now we may miss important feedback or end up designing for an outlier (see #2).
  7. Check your prior knowledge. Our participants don’t share our understanding of our industry, our internal processes, or our technology. Try to suspend that understanding and capture problems from the participant’s point of view.
  8. Listen and be respectful. It is not easy to perform tasks while being recorded and watched. If you stop and listen, participants will often surprise you.
  9. If you have a specific question you want asked, write it down. Most goal-related questions will be answered organically or by the moderator’s probing. If not, there will be time for questions at the end.

Thanks for reading. What do you think?