Get into usability testing with a plan!

Set goals, plan and execute.

CanvasFlip
Jul 25, 2017 · 4 min read

It’s important to know what to expect, what to look for and what to take away from the tests you conduct. So just like every other venture and undertaking, you need to go into usability testing with a plan!

Here’s how we define our usability testing objectives and benchmark our usability metrics. A similar approach should help your team validate its own digital projects.

1. Categorising your goals

Michael Margolis, a UX Researcher at Google Ventures Design Studio, believes the first step to defining objectives is knowing the right questions to ask [Questions to ask before starting user research].

It is helpful to start with a preliminary meeting with stakeholders to understand how much they know about the product (features, users, competitors, etc.) and the constraints (schedule, resourcing, etc.). Once this understanding is in place, you can ask the questions below to focus the team on research questions (“Why do people enter the website and not watch the demo video?”) rather than on dictating methods (“We need to do focus groups now!”).

  • Relevant Product Information — Do you know the history of your product? Do you know what’s in store for the future? Now would be a good time to find out.
  • Users — Who uses your product? Who do you want to use your product? Be as specific as possible: demographics, location, usage patterns — whatever you can find out.
  • Success — What is your idea of success for this product? Make sure the entire team is on the same page.
  • Competitors — Who will be your biggest competition? How do you compare? What will your users be expecting based on your competition?
  • Research — This might seem like a no-brainer when planning your research, but what do you want to know? What data would help your team most? Is that research already available to you, so that you’re not wasting your time?
  • Timing and Scope — What time frame are you working with for collecting your data? When is it due?

Gather your team together and pass out sticky notes. Then, have everyone write down questions they have about their users and the UX. Collect all the questions and stick them to a board. Finally, organise the questions by similarity. You’ll see that certain categories have more questions than others; these will likely become your testing objectives.
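As an illustration, the tallying step of this exercise can be sketched in code. The questions and theme labels below are hypothetical; in practice, the themes emerge from how your team clusters the sticky notes on the board.

```python
from collections import Counter, defaultdict

# Hypothetical questions from the sticky-note exercise, each tagged
# with the theme the team grouped it under on the board.
questions = [
    ("Why do users skip the demo video?", "onboarding"),
    ("Do users understand the pricing page?", "pricing"),
    ("Where do users get stuck during sign-up?", "onboarding"),
    ("Can users find the export button?", "navigation"),
    ("Why do trial users abandon sign-up?", "onboarding"),
]

# Group the questions by theme.
by_theme = defaultdict(list)
for question, theme in questions:
    by_theme[theme].append(question)

# The themes with the most questions are candidate testing objectives;
# here "onboarding" tops the list.
counts = Counter(theme for _, theme in questions)
for theme, n in counts.most_common():
    print(f"{theme}: {n} question(s)")
```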

2. Knowing how to measure

You must first understand what type of feedback would be most helpful for your results. Does your team need a graph or a rating scale? Numbers or heat maps? Written responses or self-explanatory data? It also depends on who is reading the data: stakeholders are more likely to be convinced by the cold, hard numbers of a graphed quantitative rating scale, while a CEO might grasp a problem more readily after watching a video clip of users failing at a certain task.

With these parameters in mind, decide what form you want the UX insights to take. That decision will then guide which usability testing tool you choose.

3. Usability metrics

Metrics are the quantitative side of usability data, as opposed to more qualitative research such as verbal and written responses. When you combine qualitative with quantitative data gathering, you learn why and how a particular problem occurs, as well as how many usability issues need to be resolved. The quantitative measures we find most helpful are:

  • Completion/success rate — For a given task scenario, how many users were able to complete the assigned task? We use conversion funnels to understand and visualise these numbers. For most of the usability tests we have performed at CanvasFlip, on our own product as well as on well-known app prototypes, this has been the most important metric.
  • Drop-off rate — The complement of the completion rate: it points out the design screens where most users leave the prototype. Again, we use the conversion funnel for this.
  • Error rate — Which errors tripped up users most? These can be divided into two types: critical and noncritical. Critical errors will prevent a user from completing a task, while noncritical errors will simply lower the efficiency with which they complete it.
  • Time to completion — How much time did it take the user to complete the task? This can be particularly useful when determining how your product compares with your competitors (if you’re testing one against the other). This is again a parameter that is displayed for each task on the CanvasFlip dashboard.
  • Subjective measures — Numerically rank a user’s self-reported satisfaction, ease of use, availability of information, etc. These are very difficult to conclude from the insights of a tool alone. For our remote user tests we include a survey at the end of the test; for in-person user tests you don’t need any such surveys, since first-hand observation is the best way to rank such parameters.
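To make the first four metrics concrete, here is a minimal sketch that computes them from a handful of hypothetical session records. The data, field names, and screen name are invented for illustration; tools such as CanvasFlip derive the same numbers from recorded sessions.

```python
from statistics import mean

# Hypothetical session records from a usability test of one task.
# Each record: whether the user completed the task, the screen they
# dropped off on (None if they finished), how many critical errors
# they hit, and their time on task in seconds.
sessions = [
    {"completed": True,  "drop_screen": None,       "critical_errors": 0, "seconds": 42},
    {"completed": True,  "drop_screen": None,       "critical_errors": 1, "seconds": 75},
    {"completed": False, "drop_screen": "checkout", "critical_errors": 1, "seconds": 60},
    {"completed": False, "drop_screen": "checkout", "critical_errors": 2, "seconds": 30},
    {"completed": True,  "drop_screen": None,       "critical_errors": 0, "seconds": 55},
]

total = len(sessions)

# Completion/success rate: share of users who finished the task.
completion_rate = sum(s["completed"] for s in sessions) / total

# Drop-off rate: the complement of completion, broken down by screen
# to show where in the funnel users leave.
drop_off_rate = 1 - completion_rate
drops_by_screen = {}
for s in sessions:
    if s["drop_screen"]:
        drops_by_screen[s["drop_screen"]] = drops_by_screen.get(s["drop_screen"], 0) + 1

# Error rate: average number of critical errors per session.
error_rate = mean(s["critical_errors"] for s in sessions)

# Time to completion: average time for the users who finished.
avg_time = mean(s["seconds"] for s in sessions if s["completed"])

print(f"completion rate: {completion_rate:.0%}")  # 60%
print(f"drop-off rate: {drop_off_rate:.0%}")      # 40%
print(f"errors/session: {error_rate:.1f}")        # 0.8
print(f"avg time to complete: {avg_time:.0f}s")   # 57s
```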

Final words

In some ways, the planning phase is the most important in usability research. When it’s done correctly, with patience and thought, your data will be accurate and most beneficial. However, if the initial planning is glossed over — or even ignored — your data will suffer and call into question the value of the whole endeavour.

All the best with your usability tests!

