7 Steps to Great Usability Testing

Caylie Panuccio
Published in SEEK blog
20 min read · Oct 15, 2020

Hello there. I see you’re into usability testing.

What I’m writing about below is not by any means the one-stop-shop guide to usability testing. It’s simply my process — and what I’ve found to work well through…doing this a lot.

Many of the steps below are also applicable to concept tests.

All right. On with the show.

[Image: thanks to David for these amazing notes he took during the session!]

What’s all this then?

Earlier this year, I was asked by my friend Kristen Hardy to present to the team at Telstra Health on conducting usability testing. I then spoke with Design Lead David Bacon to determine what would most benefit the team.

The steps below are drawn from content I'd created for previous in-house workshops at SEEK and at my previous workplace, plus material my co-organiser Emily Murray and I facilitated externally at the meetup we run, Design Research Melbourne.

The 7 Steps of Usability Testing

  1. Setting research and testing objectives (a very important but usually neglected step)
  2. Working out who you need to test with (how to determine who you need, and recruit for that)
  3. Getting consent (SUPER important)
  4. Setting the scene (couching the testing in as much reality as possible, letting participants know if there are prototypes, and so on)
  5. Creating the interview guide (includes testing live sites vs. prototypes vs. concepts, plus facilitating and note-taking tips)
  6. Running the sessions (plus what to do when things go wrong)
  7. Actioning those findings (duh)

Getting stakeholders on board and involved is a theme dotted throughout each topic.

Step 1: Setting research and testing objectives

This is the most important step — getting clear on what you want to find out.

Objectives are also known as questions, goals, outputs, or y’know, the reasons why you’re doing the research.

It’s helpful to think of objectives in two “buckets” — the things that you, the designer/researcher, need to find out, and the things that your stakeholders want to know. They will sometimes be different!

Prioritise what’s in your head, and what your stakeholders are telling you. Start with what’s most important for you to know, so you can change what’s not working in your product. Normally, I think to myself — “at the end of this project, what’s gonna make me sad if I don’t know it?”

There are a few different ways you can start to define objectives. What I like to do is make a draft list of what’s in my head, and then:

  • Have a kick-off meeting with my stakeholders (this ensures they are more involved, and they have more ‘skin-in-the-game’ — therefore taking the findings more seriously)
  • Map any assumptions and hypotheses — what they think they know already, and why

Remember…

There are many pain points you could discover in any usability testing session. A clear set of research objectives will help you stay on track, so you don’t end up with a disparate set of findings, or little snippets that don’t really connect with one another. If you’re finding that there are a lot of research objectives or questions needing to be answered, perhaps you need two or three discrete usability testing ‘rounds’, each with a different set of research objectives.

Think about…

  • How realistic are the scenarios for your usability test? How ‘legit’ can you make them so the user doesn’t have to suspend their disbelief so much? This will ensure your research findings are more accurate — the more real the scenario, the more real the reaction!
  • How many scenarios are you testing? Can you fit them all in with enough time to probe deeper if necessary?
  • Do the scenarios reflect your research objectives?
  • How the objectives will help you to write the scenarios in your test — if you have a clear set of questions to answer, the scenarios should write themselves.
[Image: an example scenario I prepared earlier.]

Scatter in some ‘life stuff’ to make the scenario as realistic as possible. You can also use information the participant told you earlier (e.g. if they apply for jobs late at night once their kids are asleep, you can use that).

Lastly…

Document, document, document! Documenting your research objectives is vital for the following reasons:

  • You and your stakeholders have an agreed upon scope to fall back on (useful in case of scope creep)
  • You and your stakeholders are on the same page about what this research is going to do, and why
  • When other folks in your organisation check out your research (and they will), they’ll know exactly what you were trying to find out. Recording previous research allows for further understanding of your users and prevents duplication of research effort — saving us all time and money.

Step 2: Working out who you need to test with

Work out the variables

Map out the different variables (or elements that differ) which apply to your participants. Back in ‘ye old days’, I did this by scribbling on a whiteboard or piece of paper. Now, I’ll do it in Miro. Here’s an example:

[Image: my participant variables mapped out in Miro. Even in Miro, I revert to Post-It notes.]

In the study above, I was conducting research with our Corporate users. That meant I had EIGHT variables to consider to ensure I was getting a decent representation of users in my sample. I mixed and matched the different variable types I had under these Post-Its in their columns, and then wrote an idealised recruitment spec based on who I was looking for.

Here are some more common variables you might use to recruit for usability testing:

  • The ages/age groups of your participants
  • How frequently or not they use your product or service
  • Where they live (important for accessing services)
  • What other products or services they may use
  • In some cases, their socio-economic status.

How do I even start to think about these?!

Map out what your participants must have and must not have:

  • Must have/be (e.g. must use the SEEK mobile app, must be actively looking for a job…)
  • Must not have/not be (e.g. must not have participated in any previous studies with SEEK, must not solely use SEEK to look for jobs…)

Then, think about general demographics, like age groups, languages spoken (e.g. English as an Additional Language), accessibility needs, and where they live.

Keep in mind, the more variables you have, the bigger your participant sample needs to be. A good rule of thumb is a minimum of 5 participants per variable (product usage, for example). Here’s an article for that.

Like the example above, start with your ideal recruitment specification and see what you can get. Then, you can loosen your variables a little. Pragmatism is important here, especially for larger projects in large organisations with some semblance of a deadline. Starting with the ideal shows you’ve done your homework and enables you to justify how, when, and why you made trade-offs in your recruit.
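
A quick way to see how variables multiply is to enumerate the combinations an “ideal” recruit would cover. Here’s a minimal sketch in Python (the variables and values are made up for illustration):

    from itertools import product

    # Hypothetical recruitment variables; swap in your own.
    variables = {
        "usage": ["frequent", "infrequent"],
        "location": ["metro", "regional"],
        "age_group": ["18-35", "36-55", "56+"],
    }

    # Every distinct participant profile a "perfect" recruit would cover.
    combinations = list(product(*variables.values()))
    print(f"{len(combinations)} distinct profiles to cover")  # 12
    for combo in combinations:
        print(dict(zip(variables.keys(), combo)))

Even three modest variables yield 12 distinct profiles, which is why starting with the ideal and then loosening is the pragmatic move.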

Creating a recruitment screener

This is where you write down exactly who you need to test with. Provide a summary at the start of the “must be’s and must not be’s” and list out the mix of demographics you are after.

[Image: an example recruitment screener summary. This was for a very simple testing project.]

Then, create a set of questions that will “screen out” who you DON’T want. For example, let’s say I want to test with people who are located in regional Australia (because it’s normally harder to find jobs in those areas), aged 18–65, who don’t use SEEK frequently:

  1. Which of these best describes where you live?
  • Melbourne metro/inner suburbs
  • Melbourne outer suburbs
  • Regional Victoria…etc

  2. Which age group do you belong to?
  • 18–25
  • 26–35
  • 36–45…etc

Depending on the complexity of the study, you can then map out in a table an example of what your recruits might look like.

You’ll need to get quite specific in some cases to remove any chances for error in recruitment. It’s best to spell it out as much as you can.

Of course, this assumes you’re recruiting through a recruitment provider. You can also conduct recruitment through pop-ups on your own website, or via your Sales/Customer Service teams, for example. Point is, always use a screener. That way, you know for sure your participants fit the bill and you won’t get any surprises in your session!
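
If your screener runs through a survey tool, the screening rules reduce to a handful of must/must-not checks. Here’s a minimal sketch in Python of the example above (the field names and values are assumptions, not any real tool’s schema):

    # Hypothetical screener responses and screen-out logic.
    def passes_screener(response: dict) -> bool:
        # Must be: located in regional Australia, aged 18-65.
        must_be = (
            response["location"].startswith("Regional")
            and 18 <= response["age"] <= 65
        )
        # Must not be: a frequent SEEK user.
        must_not_be = response["seek_usage"] == "frequent"
        return must_be and not must_not_be

    candidate = {"location": "Regional Victoria", "age": 42, "seek_usage": "rarely"}
    print(passes_screener(candidate))  # True: this person would be recruited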

Step 3: Getting consent

[Image: literally the reaction I get when I tell people they need to get consent. Image credit: Know Your Meme]

Wait, but why?

Ponder this for a moment. WHY do we need to get consent from our research participants? What if I’m just having a chat with them? Surely getting consent makes the session all formal and scary and therefore they won’t be as honest with me if I put a fancy piece of paper in front of them?

Nope. Nope. NOPE.

By asking for consent, we create a space in which the participant feels safe to share.

Think about this scenario — you tag along with a Sales representative and have a bit of a chit chat with the client about how they find your product. That’s all very well and good, but…

Will that client be honest with you when you’re there, with a representative of your company, and there’s finances involved?

Let’s replay that scenario again. You turn up with a notetaker and say you’re a researcher. You explain…

  • What will happen in the research session
  • How the session will be recorded (if at all) — video, photo, note-taking, the whole shebang
  • What information will be collected, how it will be shared within your organisation, and stored
  • What needs to be protected internally (e.g. market-sensitive information).

Luckily, you’ve got all this written down so the participant can take their time to read through it, ask questions, and feel comfortable. This document is also in plain English (no Legal-ese!) and you’ve got some translated versions in your back pocket, to conduct research in different markets.

Sound familiar?

Yep, I’m describing a consent form.

Think about the consent form(s) you may have in your own place of work/study. Where did it come from? Who made it? What’s in it? Why?

Does it meet the needs of your research participants?

Are you able to answer questions about it?

Using a consent form means that not only does the participant understand what’s going to happen, and what’s going to happen with their data — it also creates a safe space for them to say what they really think. You’ll get much more honest and rich data this way, and this will in turn help you improve your products by solving the real problems for all of your users (not just the users who already love your products, for example!).

Step 4: Setting the scene

Now you need to do some more expectation setting. This typically takes about 5 minutes of a one-hour session with the participant.

Things to think about

  • Building some rapport with the participant. Be polite, but not too effusive — no one likes a robot, but you’re not there to make friends either!
  • Explaining in more detail what the session will be about, and what kinds of things you’ll be doing (e.g. interviewing, doing some things on the screen, on paper, etc.). This helps the participant mentally prepare to context switch if necessary.
  • Describing what a prototype might be like to interact with. We like to refer to these as “a website that doesn’t work yet”. If you’re testing concepts, be very clear they are in fact concepts (and don’t say you designed them!) — this means the participant will feel more comfortable saying what’s wrong with it. If you’re testing a live website, you can skip this.
  • Ensuring the participant feels comfortable to be themselves — and to be honest (building on the consent form idea)
  • Letting them know there’s no ‘right’ way to do things — we’re testing the usability, not them!

It’s important here to introduce yourself as a ‘researcher’ even if that’s not your actual job title! In this environment, everyone is a researcher. This signals that you’re not attached to the products or services in any way, so the participant again feels comfortable giving negative or constructive responses. They also don’t feel like they ‘have to get it right’ to make you happy (we’re all people pleasers, after all).

Lastly…

Avoid using technical or jargon-y language. We might call things “platforms”, “prototypes”, and “features”, but participants who don’t work in our industry likely won’t. We want to ensure our participants understand what we mean, so we get honest, clear responses, and the participant doesn’t feel like they have to ‘prove themselves’ by matching our language. Here are some swaps:

  • Platform = website
  • Product = service
  • Prototype = fake website/website that doesn’t fully work
  • Feature = thing/thing you can use/thing you can do/part of the website

Avoid using language of ownership too:

  • Our website = the/this website
  • My design = the/this concept
  • Product owner/manager/designer = researcher

Again, this helps the participant be more honest as they don’t feel like they have to protect your feelings!

If you’re a product manager/someone who works on the product or thing you’re testing…don’t say so. Using the term ‘researcher’ neutralises your role within the product — you are literally there to just find stuff out, meaning the responses you get will be more honest, and the participant feels more comfortable sharing with you.

Step 5: Creating the interview guide

Right. Now that’s all out of the way, you can write your interview guide. I use ‘guide’ as you’ll inevitably go off-book at some point, but you will need a standardised set of tasks for the participant to attempt in usability testing. This means you can easily compare results across participants to understand what things need improvement.

Start simple

Always start your session by asking some simple questions about your participant. This builds rapport, gets them in the right headspace for the session, and gives you some info you can re-use about them later:

  • Tell me a bit about yourself?
  • Tell me what a typical day looks like for you?
  • What sorts of things do you like to do outside of work?
  • Talk me through the last time you used [product/service]? (and is that a typical experience?)

These questions will also help you to validate the answers they gave in the screener. Think about how you can build rapport with these questions, and relate them to the product or service you are testing. Allow 5–10 minutes for this part.

Getting into the detail — what you want to test

Now, you need to think about those testing scenarios in more detail. What are the particular features or user flows you want to test? How common are they?

I normally chat with my Analytics friends to find out what the data is telling us about the most commonly performed tasks on site, then build some tasks around those.

You will need to create scenarios around these that are as realistic as possible. You can use the answers the participant gave you in the first part of the session to add a bit more “flavour”. For example, let’s say we’re testing the MYKI website and the participant told you they normally top up at the station:

“OK, imagine you are waiting at a tram stop, but there is no MYKI machine. You know you need to add some money so you can ride the tram, so you pull up the MYKI website on your mobile. Your MYKI is already registered.

Could you please have a go at logging into the MYKI website?”

Be sure not to alter the actual task itself! The more stringent/complex the tasks, the less you should add flavour so as not to bias the research in any way.

Tips on writing test scenarios…

  • Make sure the tasks/scenarios start simple and get more complex as they go on. Ensure that the order of the tasks makes sense! (e.g. you wouldn’t make a payment and then log in to a website).
  • Always make the scenarios as realistic as possible. If it’s a live website, ask your participant to come prepared with any logins/information they might need to input. Otherwise, give them a set of “dummy” details to use so they’re not having to think up fake information to input on the spot.
  • For a 1-hour session, allow approximately 40–50 minutes for the tasks themselves. This allows you to test 4–5 tasks/scenarios (roughly 10 minutes each).
  • Be sure to run a pilot session beforehand (ideally with someone who is not familiar with your product) to make sure everything goes as planned. You can make tweaks to the wording and timing after this.
  • Avoid asking about future behaviour (e.g. would you do…). People are generally bad at predicting what they will do! Relying on past behaviours is a much better indicator of how people will respond to things.
  • Ask open-ended questions as much as possible. An open-ended question allows for the user to give a more unstructured answer, so you can find out the surprising things you didn’t anticipate. Here’s an article on this.
[Image: a little cheat sheet I made to train Product Managers on good question types. Note that in the “Good” examples, you’ll need to first establish whether they actually eat breakfast!]
  • For the tasks, give the participant their own task sheet with the scenarios written on it. This means they have something to refer to if they forget what is being asked of them, and helps them feel more in control. If you’re using the Single Ease Question (SEQ) or other metrics, they can fill in their answers themselves on the sheet (there’s a sketch after this list for summarising those scores).
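
If you do collect SEQ scores, summarising them per task is quick. A minimal sketch in Python, with made-up scores (SEQ is a 1–7 scale, where higher means the task felt easier):

    from statistics import mean

    # Hypothetical SEQ scores per task, one score per participant.
    seq_scores = {
        "Log in": [6, 7, 5, 6, 7],
        "Top up card": [3, 2, 4, 3, 2],
    }

    for task, scores in seq_scores.items():
        print(f"{task}: mean SEQ = {mean(scores):.1f} (n={len(scores)})")

A low mean (like “Top up card” here) flags a task worth digging into during synthesis.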

Step 6: Running the sessions

Remember those stakeholders?

  • Before you run the sessions, send around the interview guide for approval. This keeps them engaged.
  • Keep them informed — let them know when you start recruiting, and the available time slots for note-taking.

Get your notetaker ducks in a row

  • Create a spreadsheet or physical sheet for your stakeholders to sign up to be notetakers. I used to stick a laminated sheet on my desk that people could come and write their names on.
  • Then, send a calendar invite with a buffer of 15 minutes before the session — this means they arrive on time, and you can use that time before the session to brief them on what they should be doing.
  • Think about how you want them to take notes. I implemented Dovetail at SEEK because it’s digital, easy to share/pull verbatims out later (for those ‘Do we have any research on…’ requests), and you can tag the notes up afterwards with your team. Electronic is always easier! Figure out a system that works well for you and your team.

Note-taking tips

  • I get my stakeholders to type straight into a blank Dovetail note or Word doc. This means all they have to do is listen and type — they are not worrying about typing answers under the right question (useful if you like to go ‘off-book’ and dig deeper at certain points).
  • Get them to look out for mistakes, whether the task is completed successfully, body language, verbal cues (“hrm, um”), misunderstandings of words/functions, and typical behaviours (e.g. she said “I always pay my bills via BPAY”)
  • If they do have questions for the participant — get them to note those down and ask them at the end. Having your session interrupted by someone asking questions is risky because you may lose control of the session. It also means the participant is having to focus on an extra person, which is distracting — they look like they’re watching a game of ping pong as their head keeps swivelling between you and the notetaker!
  • Capture as MUCH verbatim as possible. This is crucial for quick synthesis. The notes should look like a conversation.

Facilitation tips

Be quiet and don’t interrupt

You’re there to learn from the user, you’re not there to sell an idea. Be quiet. Let them talk. When a user is about to answer a question, or starts talking, you stop. Listen.

Neutral responses

Say things like “mm”, “cool” and “thank you” in response to what the user is saying. This shows you’re listening, but you’re not being positive (which could lead them to say things that please you), or negative (which could lead them to lie to avoid the negativity). In Japanese, this is known as “aizuchi” — all the little noises you make to indicate you are listening.

Play dumb.

Participant: “If I click Submit, will the employer get my application?”

You: “Um, you’re thinking…?”

You might be tempted to reassure them that yes, the employer will get their application, or whatever. But you’re not there to say what’s going to happen, you’re there to find out what they think. Playing dumb allows us to draw this out. And if the user thinks something that’s not our intention, it’s our job to fix it.

Be a parrot.

Participant: “I normally apply for jobs, um…at night”

You: “At night?”

Play back exactly the words that the participant has said. By using the exact same words and an upward inflection, you’re asking for more info without putting ideas in their heads. You can use this technique also to go back and dig deeper on certain topics — this is also an indicator to your note-taker that you want them to capture it, as it’s significant:

“You mentioned earlier that you normally apply for jobs at night. Tell me more about that?”

Pause.

After you’ve asked the participant something, pause. Wait a good 5–10 seconds for their response. Don’t ask another question, don’t try to expand on it or use different words. Often, people will need a few seconds to digest what is being said, and formulate an answer.
This is also true of looking at designs or completing tasks — give the participant some breathing room!

Bounce it back.

People generally like to please other people. This means a participant will often ask for reassurance, e.g. “Is that the Apply button there?”, “Is that right?” Just put it back on them by asking “What do you think?” It can be tempting to explain or answer — but understanding what the participant thinks is a learning opportunity for us.

Testing a live site vs. a prototype vs. a paper/static concept

  • If you are testing paper concepts, do more pointing and talking through individual elements on static or paper concepts (or get them to draw on it!)
  • Live sites are easiest to test with as they are the most “real”. This is where it can be helpful to ask the participant to bring their own information to input.
  • For prototypes, it is VERY important to make the data in the prototype as realistic as possible. Use real names, numbers, and information. Make sure there are no duplicates and everything is spelled correctly. If not, participants WILL comment on it, and it takes them out of what you are asking them to do.

What to do when things go wrong

[Image: what runs through my head when things don’t quite go as planned. Image credit: GetYarn.io]

Participants go off track

I once had a participant complain about an experience with Telstra for 20 minutes. Note I have never worked at Telstra.

There are a few ways to tackle this:

  • Acknowledge what they’re saying, but gently remind them why they’re with you — e.g. “That sounds incredibly frustrating/interesting/annoying, and if we have time at the end of our session today, we can talk more about that. But for now, I want us to focus on X”
  • Interrupt. I don’t normally condone this, but if you’ve got a talker, you will need to butt in and steer them back on course: “Thanks for sharing that. Could you go back to the screen for me, and talk me through what’s happening for you?”
  • If all else fails…just let ’em go.

The participant doesn’t fit the recruitment criteria

Sometimes, you discover that what the participant said in the screener doesn’t quite match what they’re telling you in the session. That’s OK — depending on how far you are in, you can politely let them know that they’re not quite the right fit for the study after all, and offer them a portion of the incentive (or the whole lot if you have cash to splash).

The participant gets frustrated or upset

Stop the task, let them have a break/get some water, and just move onto the next task. If they are really distressed, bring it back to a general chat about their typical use of your product, and let them let it out if they need to. Or, just stop the session and pay their incentive. It’s not worth pushing on at the cost of their mental health (or yours).

Your prototype breaks/the WiFi carks it

As a backup, have printed copies of the prototype that you can talk through. You could also have saved digital versions of the prototype (e.g. PNGs) and get the participant to interact with it as they would a working website — and ask them what they’d expect to happen.

The participant isn’t super responsive or chatty

You need to get creative. Ask them every possible question you can think of, and always ask ‘why’ in response to their responses: “It’s good.” Why? “It’s fine.” Why? They’re being paid, so you can work to get a little more out of them!

Your participant wants to prove how smart they are

Sometimes participants will answer questions as if to say “Duh, don’t you know this?”. That’s fine. You’re not there to prove how smart you are or how much you know about the product. Play dumb and let them tell you how they think everything works. Use it as an opportunity to dig real deep.

Step 7: Actioning those findings!

Phew, you made it! You’ve created your objectives, recruited your participants, got consent, set the scene, conducted the sessions…so now what?

Remember those stakeholders?

These stakeholders are usually made up of…

  • Other designers
  • Product Manager types
  • Developers
  • A BA or two
  • Occasionally, a very highly paid person.

Hopefully, you got them all to take notes for you at least once. Now, you’re going to get these people together and run a synthesis session.

Why would I do that?

Getting your stakeholders to help you synthesise means that they will feel more attached to the findings. You can do this in a few ways:

  • If you have notes in Word or Excel, get your stakeholders to transpose these to Post-It notes, and affinity map these as a group. This works well if you have a couple of hours on your hands.
  • If your notes are in Dovetail or some other online tool, assign a note to a stakeholder and get them to tag it up using some pre-determined tags.
  • If your notes are in Excel, simply print them off and have a “cutting session” where you cut the notes out and then affinity map them.

Be sure to start and end the session with a discussion — get people to share what they found interesting/surprising from the testing. This means they come away with a broader view of what happened, rather than just their one session.
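
If your notes are tagged in a digital tool, a quick pass over the tag counts shows which themes recur. A minimal sketch in Python, assuming you’ve exported tagged notes to a simple list (the tags and structure here are made up, not any tool’s real export format):

    from collections import Counter

    # Hypothetical export: each note carries the tags stakeholders applied.
    notes = [
        {"participant": "P1", "tags": ["login-confusion", "trust"]},
        {"participant": "P2", "tags": ["login-confusion"]},
        {"participant": "P3", "tags": ["pricing", "trust"]},
    ]

    tag_counts = Counter(tag for note in notes for tag in note["tags"])
    for tag, count in tag_counts.most_common():
        print(f"{tag}: {count} note(s)")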

Now what?

To action the big stuff quickly, I print off the concepts/designs that I’m testing, and stick Post-It notes next to the bits I’d tested, summarising the findings. I’d put these up on a wall near my workspace and then the designers I was working with could get quickly stuck into making changes. This works well for rapid test and learning cycles, and you can also walk stakeholders through it. In our remote, digital world, you can do the same thing in Miro or another digital whiteboard tool — simply upload the concepts and put any commentary next to these in Post-Its.

Document, document, document

You’ll need to write up your findings — not just for presenting them, but also for longevity. Documenting our research ensures that we can justify certain design or product decisions, and also re-use findings instead of running research all over again, building our knowledge of our users over time.

This is the structure I follow:

  • Executive summary (why you did the research, and the high-level findings)
  • Key results — this is what you found out. Order these according to severity — if 5 out of 5 participants had trouble with something, that goes first (see the sketch after this list)! Throw in some verbatim and video to really drive home your points.
  • The insights, and your recommendations. This is the ‘so what’ part of your report. Not only should you include what you saw/heard (the finding or observation) — but also what it means for your design, product, or business. What changes need to be made? What should your team be mindful of going forward when making prioritisation trade-offs? What has broader implications? What are recurring issues/broad problems that haven’t been solved yet?
  • Then, in your appendices, list out your methodology (what tasks you tested, why, and how), the participant demographics, and the in-depth findings.
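
To illustrate ordering key results by severity, here’s a minimal sketch in Python; the findings and counts are invented for the example:

    # Hypothetical findings: (description, participants affected out of 5).
    findings = [
        ("Unsure whether application was sent", 3),
        ("Couldn't find the Apply button", 5),
        ("Missed the salary filter", 1),
    ]

    # Most widespread problems lead the report.
    for description, affected in sorted(findings, key=lambda f: f[1], reverse=True):
        print(f"{affected}/5 participants: {description}")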

Bring your stakeholders back together to present these. Because you’ve kept them engaged, they will hopefully recognise:

  • Any assumptions or objectives they helped to shape
  • Verbatim from the notes they took
  • Findings from the data they helped to synthesise.

This makes it way easier to get buy-in for the big changes. You’ll need to remain close to your stakeholders to ensure any changes are implemented, and the broader insights considered as part of decision-making as time goes on.

The big finish

Practice. Get someone who knows what they’re doing to watch you. Accept feedback and iterate…yourself!

Caylie Panuccio

UX Researcher in Melbourne, Australia. Ponders UX research technique, practice building, and language stuff.