Usability Testing

Mentor · 10 min read · Jan 11, 2017

A Mentor Tutorial

Example Usability Testing Kit

This tutorial breaks down the typical usability testing kit used by Mentor Creative Group. In this article we will explain some of the hows and whys around our process for creating this important design tool. You can access a template for this kit from the Mentor website.

At Mentor, we use the term “kit” to describe what amounts to an outline for any kind of research activity. Before we get into the details, it’s important to call out that there are two main types of usability testing: moderated and unmoderated. A moderated test simply means that a human leads the sessions instead of an automated tool. There are reasons to choose one over the other that we won’t cover in this article; the main difference is that moderated testing is better for collecting qualitative data, while unmoderated testing is better for large amounts of statistically valid, quantitative data.

Now that we have the two types covered, the rest of this article outlines an approach for conducting moderated, small-scale tests on a recurring basis. For example, you might test with 4–5 participants every other week over the course of a 3-month period.

Regardless of your testing cadence, the goal is for your team to learn something, then make the smallest changes to your designs that will have the biggest impact. Our study kit is optimized with this procedural framework in mind. If your organization does things differently, your mileage may vary.

Let’s take a closer look at the anatomy of a usability study kit.

Introductions

Mentor uses this boilerplate intro with contextual variations for both interviews and testing sessions. The important thing to remember is that you want the interview to feel conversational, not scripted. If this script doesn’t sound like you, then write something that does while covering the same bullet points, and memorize it.

Replace the italicized content with the specifics of your project. For testing sessions, we like to give participants a little bit of backstory around the scope of our project. We will speak in general terms if Mentor is under NDA with a client but the participant is not. Still, setting the stage is always a good idea. That way participants know why you’re talking to them and ultimately why the interview is important.

If you only keep one part of this script, the bulleted list is the most important. Testing with users is not a natural thing, and it’s very common for participants to feel like they are being tested. This common misunderstanding makes participants feel like they need to provide “the right answers” when in fact there are no right or wrong answers.

To compound this issue, humans are social creatures seemingly wired to seek the approval of others. If a participant knows you designed what’s being tested, their natural tendency is to hold back and not say if something stinks. Let them know it’s OK to be critical and that the success of your project depends on their candid feedback.

Last but not least, you need your participants to think out loud. This is a make-or-break element of qualitative testing. If you find the participant is scrolling up and down a page without saying anything, give them a gentle nudge by saying, “Tell me what you’re thinking right now.” A good moderator acts like a therapist, constantly trying to get participants to share what they are thinking and feeling without introducing their own bias. This is a difficult skill that takes practice. When testing, you need to avoid saying things like “great” after a task is completed. Instead say “OK” or something similarly neutral in tone.

Observers

In a traditional usability study, one person moderates the discussion. Ideally your participant does most of the talking, but it’s important to keep a manageable tester-to-participant ratio so the participant doesn’t get overwhelmed. Our general rule of thumb is to take the number of participants and add one, which in a standard format means a two-person research team at most. That being said, remote sessions can accommodate more observers, and we sometimes meet with a participant in person while dialing in observers to keep a small head count in the actual meeting space. If you take this approach, remind your observers to stay on mute for the duration of the call and that there will be some time at the end for group Q&A.

Regardless of the method, make sure to introduce the other members of your team instead of ignoring them. If you’re in a situation where a client wants to observe a test but they’re the participant’s boss, kindly explain how that might color the conversation. You might not always get your way on this one, but always provide a firm rationale for why it’s not a good idea for a boss to observe a subordinate’s testing session. In most cases they will back down. If you’re taking notes and recording the session, you can tell the client that you’ll provide anonymized transcriptions after all the tests have concluded.

Recording

Our researchers start a recording right before asking permission. That way we have verbal consent captured without needing to ask the question again or request that the participant fill out an intimidating consent form. If your interviewee is not comfortable with a recording, then simply stop and delete it.

As I write this, I’ve never had to stop and delete a recording. Most people are comfortable with being recorded as long as they know how the recording will be used and who’s going to have access. That being said, you forgo the consent form at your own risk. We advise everyone to use their best judgment and adhere to their existing company guidelines. In the past we’ve done this differently, but over time our researchers have noticed that removing the form goes a long way toward keeping the interview conversational. Go figure.

For an in-person interview concerning an existing product, consider asking the participant to do a screen recording. This can happen on their machine using software that’s already installed, or you can boot up a product instance on your laptop and let the participant use your machine instead. If the participant is going to be using your device, make sure to print the script beforehand and assign someone from your research team as a notetaker. For remote sessions, you can simply record the screen share, regardless of who has the controls, after getting consent.

“Do a screen recording and ask the participant to point out things in the product as they come up in conversation.”

Background Questions

These background questions serve two purposes:

  1. They help identify what persona type the participant aligns to.
  2. They act as an icebreaker before moving on to usability tasks.
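
For illustration, a question set might look something like the following. These examples are hypothetical; your kit’s actual questions should map to the persona attributes that matter for your project:

  1. What is your role, and how long have you been in it?
  2. Walk me through a typical day at work.
  3. How often do you use tools like the one we’re looking at today?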

Feel free to remix your background questions, but be mindful that a usability test is not a user interview. You might be thinking: why not kill two birds with one stone and conduct a user interview right before a usability test? It’s understandable that you want to collect more data during your sessions, but these kinds of activities are a major mental drain on your participants, and the more you cram into a single session, the more fatigued your participant will become. A tired participant is not going to perform optimally during testing. Borrowing from the Nielsen Norman Group, you have an ethical responsibility to safeguard the mental state of your participants.

One of the key ethical requirements is to protect participants from mental anguish. We don’t want people to leave our study feeling depressed or worthless because they repeatedly failed at using an “obvious” computer system.

Screensharing

There are a lot of different tools out there for capturing data during a usability test, some sophisticated enough to track granular details like a participant’s eye movements. At Mentor, we tend to stick with basic video and audio. As long as you can hear what participants are saying and see what they are doing, just about any setup is valid.

Testing Outline

This is the basic outline we provide participants before jumping into usability tasks. As with the intro, you should make this your own. The purpose of this second activity intro is to remind participants to think out loud and that they can ask for help when they get stuck.

Depending on what you’re testing, the last paragraph might need some adjustment. Fidelity is not a term most participants understand when it comes to design. For example, if you’re using an InVision prototype, you might want to add a line about the prototype being a bunch of static pages stitched together, and that their imagination is going to have to fill in the gaps. If it’s a coded proof of concept, you’ll have the opposite problem: participants will expect everything to work like a fully functional application, and you’ll have to let them know that your prototype is mostly hardwired and things like search fields or filters might not work.

It’s your job to throw them a lifeline when they get stuck on fidelity issues during the test, and some participants will get more hung up on them than others. On the plus side, this is also an opportunity to extract some interesting data around your design placeholders. We sometimes ask “What would you expect to happen?” after a participant runs into an invisible wall. You might be surprised by the responses you get.

At Mentor, we try to use personas in every aspect of our work, including testing. It’s often helpful to give participants a scenario so they understand the motivation behind each task. If you’re conducting testing on a project without personas, you can generalize what this paragraph says or cut it from the script.

Usability Tasks

Each usability task has a two part structure:

  1. The scenario that is read aloud to the participant.
  2. The steps to complete a task.

Your scenarios are the most important part. Using the persona description above, each scenario tells part of a story and helps your participants understand the context around the task they’ve been given. There’s a big difference between asking someone to act out a scenario versus performing a specific action. Here are a couple of examples:

A good scenario: You need to send a list of expenses on your company credit card for this month to Travis in HR.

A bad scenario: Filter the transactions on this page so you only see those that occurred in the month of January. Then download a PDF and email it to Travis in HR.

Notice that the good scenario describes what the participant needs to do without telling them how to do it. Writing scenarios in a way that literally tells people what to do means you’re not testing whether the participant understands the concept as a whole.

The steps to complete a task help you more easily check where people are dropping off in the workflow. If there is more than one way to complete a task, you could even list each sequence and highlight the path the participant takes as an added bit of data for your post-testing debrief.
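
For illustration, here’s how the expense task from the good scenario above might be laid out in the kit. The exact steps are invented and will depend on your product’s workflow:

Scenario: You need to send a list of expenses on your company credit card for this month to Travis in HR.

Steps:

  1. Open the transactions page.
  2. Filter the transactions to show only the current month.
  3. Download the filtered list as a PDF.
  4. Email the PDF to Travis in HR.

Checking off each step as the participant completes it makes drop-off points easy to spot during your post-testing debrief.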

The headers for each scenario are optional, but we find they are helpful when generating an outline for testing or scanning during a session. Participants can take you to unexpected places during testing, and having headers as a reference point will help you keep your place in the script.

At the end of each scenario, there should be a line that helps participants understand that you are done talking and it’s time to take action. The blue, italicized text in the example above gives examples you can use as needed.

We sometimes add emphasis to certain parts of a scenario if there are pieces of data given to the participant. In scenario 2, we ask them to search with the bolded term “Chinese Wall Procedures.” If the participant forgets, the emphasis makes it easier for you to point them back to what they need.

Wrapping Up

This list of three questions is pretty standard following a round of testing. Their primary function is to give your participant a chance to offer up more conversational feedback that isn’t task-oriented.

If you have observers on the call, this is a good time to open up the conversation. However, make sure to cut things off when there are a couple of minutes left so the participant has time to ask questions of their own.

You may find the “end recording” callout unnecessary, but I can’t tell you how many times I’ve left a recording running after an interview. In addition, it’s a good idea to immediately update any payment sheet you keep so that you or someone in billing knows a participant needs to get paid. If you’re not compensating your participants, you can take this part out.

“Leave time at the end for group Q&A but make sure the participant has time to ask at least one question of their own.”

Continued Reading

If you found this article useful, you can find a list of our other tutorials and templates on the Mentor website.
