5 Tips and Tricks for Unmoderated Testing
A guide for anyone new to unmoderated testing, and a quick reference for experts
As part of IBM’s Watson Customer Engagement group, our team works on enterprise products for marketing, commerce, and supply chain professionals. With a B2B model, it can be almost impossible to get testing done effectively with our direct users. Just like us, they’re working professionals who barely have enough time in the day to do their own jobs right — much less participate in rounds of testing with our team.
So, in order to test properly at scale, our team invested in Usertesting.com to gain access to a broader pool of participants, many of whom closely resemble our direct users. This has dramatically sped up our usability testing process.
It’s also made us rethink our testing habits.
So… after a lot of trial and error, we’ve compiled 5 simple tips and tricks that should help streamline your unmoderated tests.
Remember that in unmoderated testing, you will not be there to follow up on things or dig deeper, so you need to build as much of that as you can into your questions and tasks. There are a variety of question types you might use; these tips should touch on most of them.
Always follow up with a simple “why?”
When asking closed-ended questions (rating scales, binary yes/no, multiple choice, and others of this kind), always follow up with a “why?” question.
This will give you far more information than just the rating itself.
For example — some people may give something a “3” on a 5-point scale, but then tell you that it was really good! Or, they may rate it well, but then follow it up with some constructive criticism.
The same rule applies to simple “yes” or “no” questions. A “no” gives you the binary answer, but it won’t explain the reasoning behind the participant’s decision.
Be consistent
Another good rule for rating scales: don’t switch the endpoints of the scale from one question to another.* Why? Your participants are likely to skim rather than read carefully and mix the endpoints up, giving you a “1”, for example, when they really meant a “5”.
*Yes, in controlled studies it’s better to counterbalance, i.e., randomize the questions so that they appear in varying orders.
However, in unmoderated tests you cannot follow up with the participant to find out whether they misinterpreted the scale. So it’s best to place a “why?” question immediately after each scale.
Keep in mind that participants may still mix up the endpoints even when you keep them consistent, because, let’s face it, even the best of us skim as we read and miss those details. So the earlier tip, “always follow up with why,” still applies here too!
KISS — Keep It Simple, Stupid!
When asking closed-ended questions (rating scales, binary yes/no, multiple choice, and others of this kind), never ask about more than one factor in a single question.
For example: “Was this experience usable and visually appealing?”
Why? Questions that ask about more than one concept, especially rating scales, produce unclear answers. They will leave you asking yourself: did they rate the usability, the visual appeal, or both?
As the example above shows, questions that mix concepts confuse participants and muddy the responses.
But what about verbal answers? Surely those are clear enough to understand?
Here, the same rule applies.
If you ask about two factors in a single verbal question, participants may answer both, or only one. So split them into two task steps with two separate verbal responses.
Pay attention to sequencing
Splitting questions up can also help reduce the bias you introduce into a task. Think about what you want to learn and order your questions appropriately, so you can learn it without planting ideas beforehand.
For example, if you want a user’s input on something that you plan on explaining further — ask them for their thoughts first. Then explain it, and ask if that makes sense to them or if something else would work better.
Why? It is better to get their first thoughts, unbiased by what your intention is. If you tell them first, they are more likely to simply agree with your interpretation.
This is important to consider for your whole test.
Look at the sequence of all of your questions. For example —
If I plan to ask you at the end of the test “What color should a banking brand be?”
but tell you during the first step …
“Blue is the best color for banking brands because it represents stability and calm.”
— then I have already biased your response before you ever reach the question.
That was a very literal example, but the message still applies: consider the context of each question you ask to ensure that you’re getting the purest reaction from the user.
We’d love to hear more about your experiences with unmoderated testing. Have you done any? What did you learn about the process?
Please share in the comments!
Authors: Hannah Moyers, UX Researcher at IBM; and Thyra Rauch, UX Researcher at IBM.
To learn more about UX research, join our community of over 5,000 UX researchers in the Mixed Methods Slack group. 💬
Prefer to listen in? Check out our podcast! 🎧