UX design: how to conduct usability testing?

Cover image by Daniela Peñaranda

First things first: even though we often read about “user testing”, I think we’d better talk about “usability testing” 🤓 — obviously we’re not testing the user, but our solution!

There are already many resources about usability testing; that’s why today I’m writing about it in a pragmatic way, to share plenty of feedback and advice with anyone willing to dive into this fascinating activity. 🔧

Sometimes our solutions need some adjustments 😉

At which stage of the project should I plan usability testing?

Usability testing, probably the most common user research method, applies to projects where a “solution” has already been given tangible form: zonings, wireframes and, once the solution is no longer a draft, mockups and static or interactive prototypes, whether throwaway or iterative. The main goal of usability testing is then to measure how… easy to use this material is. 💯

So, to me, usability testing applies at a key stage of the project: when one can start seeing or using concrete material. 🚀 If your project is not at this stage yet, many other user research methods apply before a solution has been implemented, for example: one-to-one interviews, focus groups, surveys, shadowing…

How many testers do I need for my usability testing to be reliable?

One might believe that meeting as many people as possible and putting them in touch with our product would provide the best feedback about what works well and what does not…

In fact, you must know that usability testing takes a lot of time to plan, run and analyze. So, conducting dozens of usability tests when you’re not able to spend time analyzing their results is a complete waste of time! 🙅‍ And if your goal is to get quantitative results about your product, other user research methods exist, like the surveys mentioned above.

In addition, after several usability tests, you’ll notice that you begin to discover the same feedback again and again. 😱 Basically, the more usability tests you conduct, the less new feedback you discover. It’s kind of logical, isn’t it? It is said that a 5-tester sample is of optimal size: with it, 90% of the most important insights are gathered and you don’t waste time dissecting the results. 🔥

Tom Landauer and Jakob Nielsen showed that the number of usability problems found in a usability test with n users is N(1 − (1 − L)^n), where N is the total number of usability problems in the design and L is the proportion of usability problems discovered while testing a single user. The typical value of L is 31%, averaged across the large number of projects they studied.
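As a rough illustration, this formula can be sketched numerically (a minimal snippet; the function name is mine, and L = 31% is the average value quoted above):

```python
# Nielsen/Landauer model: fraction of usability problems found after
# testing n users, assuming each single user reveals L = 31% of them.
def problems_found_fraction(n, L=0.31):
    # N(1 - (1 - L)^n) problems found, divided by the N total problems
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} testers -> {problems_found_fraction(n):.0%} of problems found")
# -> 31%, 67%, 84%, 98% and ~100%: feedback saturates quickly
```

The curve flattens fast, which is the usual argument for small, iterative test rounds rather than one huge session.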

And when your 5 testers’ feedback has been analyzed and your solution has hopefully been optimized, you can (should?) iterate!

You can’t recruit 5 testers? Rest assured that 1 tester is always more valuable than 0 testers! 👍 By the way, I love this quote from Jakob Nielsen about it:

Zero users give zero insights — Nielsen
The most striking truth of the curve is that zero users give zero insights. As soon as you collect data from a single test user, (…) you have already learned almost a third of all there is to know about the usability of the design. 💪 The difference between zero and even a little bit of data is astounding.

Finally, rather than asking yourself how many testers you should recruit, I think you’d better put your energy into the reliability of your sample: are your testers representative of your target users? 🤔

How to recruit participants for my usability test?

You’ve probably heard of companies that recruit users for you. It certainly is very comfortable, but it is also very expensive: 💵 you pay a certain amount for each user recruited by the specialized organization. I’ve personally never made use of this kind of service. Don’t hesitate to comment on this article to share your experience with me/us.

When you’re in “startup mode”, you have to go fast and, more importantly, go cheap. I advise you to draw on your own network of acquaintances 👋 and make some calls to:

  • A colleague who has not taken part in the design of the product
  • Friends and family
  • Any acquaintance whose profile corresponds to your target

As a company, you probably have access to a goldmine: your social media presence. 🌎 Don’t hesitate to call on your Facebook, Twitter, LinkedIn or Instagram “communities”! They will probably come forward to help you: because they’re convinced, kind or even curious. ☝️

At Poool, I’ve branded this testing experience and created “Poool’s Testers Group”. I invite volunteers to fill in a form to join it. Each new volunteer receives a “welcome & thank you” email letting him/her know that we’ll get in touch as soon as a new test fitting his/her profile is available. ✉️ And it works!

One of our “appeals to testers” on Poool’s Facebook page
Welcome page of our subscription Typeform

Don’t forget the European General Data Protection Regulation (GDPR), which applies to any personal data you collect and process about your testers… ☑️ Collecting their consent in advance is mandatory. To do so, I use a “Legal” type question on Typeform.

”Legal” type question on Typeform

If you have to recruit users from scratch, without using your network, you can consider “guerrilla testing”, also called “hallway testing”. The aim: have people you’ve just bumped into in the street test your solution. 🏢 Depending on the context of your product, the location and the incentives you’ve planned, it can work and allow you to reach new audiences.

Finally, online tools for planning and observing remote tests exist. For digital products, this kind of tool can serve two purposes: reaching people who could not come to your lab or whom you could not visit 👀 + reaching new people, if the platform also offers user recruitment. ⚠️ Be careful in this last case: your test scenario has to be strong and you have to pay attention to the results provided by such a sample, who could be described as “usability testing pros”. In other words, their feedback could be biased.

How long should a usability test last?

Always remember that your testers’ time is as precious as yours: 🎁 they offer availability and involvement to help you and provide you with valuable insights… Don’t waste this resource, and try not to make them wait or travel for nothing.

When it comes to test duration, it is said that 60 to 90 minutes is optimal. ⏰ From my point of view, 60 minutes is already long for a participant who has to concentrate on task completion… I always try to fit the “practical work” part into 45 minutes, which leaves 20 to 30 minutes for the introduction and conclusion.

Staying concentrated during a 60–90 minute test requires energy

👉 It can be interesting to ask your testers what they thought about the test duration in your post-test questionnaire. It can help you adjust it for your next test sessions.

Extract from one of my post-test questionnaires. A tester once commented on this question: “It was not long, but it should not last longer” 😉

How to prepare a usability test?

The testing material

First, ask yourself what goal you want to reach with this test. Then put in place the testing material that will help you reach it. 💭 You want to choose between 2 product scenarios? Zonings are enough. You want to check whether a screen is user-friendly? A wireframe is perfect for this. You want to validate the attractiveness of a graphic design? A mockup is what you need. You want to test the user flow? An interactive prototype would be ideal.

Tools like InVision help you quickly implement interactive prototypes

The test plan

Then move on to the test plan. ✍️ You must write a whole scenario that your participants will dive into to achieve the requested tasks. In my opinion, this is the most sensitive stage, because each instruction you give must respect a precise balance: ⚖

  • giving participants enough context to guide them, without giving so much that you risk overwhelming them with information they won’t absorb or use

💡 For example: “You are looking for new shoes to run in the forest twice a week” is enough to give some context to your participant. There is no need to invent a precise life for your participant as if you were writing a persona. On the contrary, the specific expectations of your tester are interesting to discover.

  • giving an instruction that is detailed enough that your participants don’t get lost, yet without strictly guiding their actions, which would create a big bias in the test

💡 For example, prefer “You decide to buy these shoes” over “Click on the big blue button on the right to add the shoes to cart”.

Task wordings have to be readable and printed on cards 🔍

Number each task and print it in large type on a dedicated card. Make sure tasks are not too numerous; more on that below.

Observation sheets

Then, you have to prepare your task observation sheets. 🔍 For each of your instructions, create a document that will allow you to follow the achievement of the task: 📄 write down the number and wording of the related task, the expected actions and the questions to ask. Leave free spaces for recording results and observations: success rate (0, 50 or 100%), time on task (this indicator is not necessarily relevant, depending on the task), occurrence and seriousness of the observations…

Print as many copies as the number of existing tasks.

Pre-test and post-test questionnaires

Also prepare your pre-test and post-test questionnaires. These quick surveys are very useful.

☝️ Pre-test questionnaires have 3 big advantages:

  1. keep participants busy if you need to make a few last adjustments or finalize some details before the experiment starts
  2. help participants relax with a first quick and easy activity that immerses them gently in the session
  3. better qualify your participants’ profiles in relation to the context of the tested product (for example, at Poool, this is when we ask testers whether they are used to reading the press, reading it online, using digital devices…)

☝️ Post-test questionnaires let you complete the observation of the practical work by writing down the participants’ feelings. This is the moment to ask them what they thought of this flow, that color or that precise feature… 💬 It’s really funny to see how participants sometimes struggle with a task but then describe it as completely seamless in the post-test questionnaire, or the other way round. These interesting details allow you to nuance the seriousness of a friction point and the overall synthesis of results.

Testers quite appreciate these questionnaires, and they are truly useful for better understanding the results.

This can also be an occasion to get the participants’ opinion on “off-test” items, like a graphical variation, other research ideas or, as I wrote above, how the test went.

The test of the test 🙃

No, it’s not a UX joke! Always test and time your test, because — I promise — you’ll inevitably have forgotten something. Someone from your team will surely be kind enough to run through your scenario with you. 🤗

Your colleagues can help you test your test 😇

👍 Testing the test allows you to adjust its duration and possibly the number of tasks to achieve, to correct poor task wordings or unclear questionnaire items, to set up the material differently, and to note the key moments when you must step in…

Location and testing equipment

You can do what you want, but if possible, make the most of the testers’ environment: the place they live or work in, for example, and their own digital devices. That way, they are more confident and can achieve their tasks just as they would on their own. Moreover, using their own equipment lets you discover use cases you’d never have encountered in your office. “Damn, it looks like that on old smartphones? Well, this is different on Edge…” 😉

Anyway, we all know that it’s not always possible or easy, for us or for the testers, to run the test in their environment. In that case, you can welcome them at your office, or even in a place you have rented for the occasion and refitted into a useful “laboratory” where you can observe the experiments. 💻 In any case, make sure you have backup equipment so you can run the test no matter what.

The “equipment” topic does not only cover computer hardware. Don’t forget to also bring paper, pencils and any complementary material required for the test to run smoothly. For some of my tests, I’ve printed fake confirmation text messages on mobile mockups or built fake credit cards out of cardboard. These small details help make the test more realistic and smoother.

Fake text messages, fake credit cards… some UX designer tricks to run realistic tests 🤓

How does a usability test take place?

Invite your participant

Once your participant has volunteered for a test session, you have to agree on a time slot. This will be your first interaction with him/her, so build a simple and warm relationship. Thank him/her for helping, and try not to unveil the exact topic of the test, even if you can communicate its overall purpose. For example, to avoid creating any bias, don’t say “You’re going to test our brand new mobile app for innovative networking” but rather “Your participation will help us a lot to make our next digital product better”. Reassure your participant about the test duration and encourage him/her not to look for any further information about your company or your products before D-day.

Don’t forget to thank your volunteer — well, you can do it gently 😁

Welcome your participant

The testing experience has already begun when your participant arrives (or when you arrive at your participant’s place). Don’t make him/her wait and settle him/her in comfortably to fill in the pre-test questionnaire.

Don’t hesitate to offer a cold drink, a hot drink and/or a snack. I once read a good piece of advice somewhere: insist on it, because most of the time, testers don’t dare accept the first invitation to have a drink or a snack.

Present the test

Once the pre-test questionnaire is filled in, I advise you to present the goal of the session and how the experiment is going to take place. As a general rule, here are some sentences you can say or repeat, in any order you wish:

  • “Thanks again” 😜
  • “There will be 3 stages: the pre-test questionnaire you’ve already filled in, the experiment, then the post-test questionnaire”
  • “It will take about 60 minutes”
  • “You’ll have to achieve several tasks, one after another: the idea is to see whether they are easy to achieve or not, and how we can make them easier”
  • “We are not going to help you achieve these tasks and we won’t take action, react or answer your questions during the experiment. But no worries, we will talk about it as soon as the test is over!”
  • “We’re going to write down a lot of things, and that’s perfectly normal.”
  • “This is only a test, nothing’s saved, whether it be a subscription, an order, a payment, or anything else.”
  • “We’re not testing you, but our solution”
  • “Please be honest. We won’t take anything badly. Your feedback will help us anyway.”
  • “There are no wrong or right answers”

👉 These words aim to reassure the testers who, even if they don’t tell you, are probably a little bit anxious.

Observe the task achievement

I highly recommend observing the completion of the tasks with a partner. 👥 It’s easier and much more thorough. For example, one of you can concentrate on leading the test and making the scenario progress while the other dedicates him/herself to observing how the participant reacts, verbally and nonverbally. Working in pairs also helps you avoid interpretation mistakes when analyzing the results.

Two pieces of advice for this observation phase:

  • 🤓 preserve an observer’s neutral and external attitude: you’ll become a pro at vague but encouraging “uh-huh”s and “okay”s
  • ✍️ write down everything on your observation sheets: blocking points, hesitations, comments, movements… and quantitative measures like success rate or time on task, even if this last metric is not always relevant to your project.

Personally, I find it quite difficult to write everything down without degrading the fluidity of the exchange. With time, you’ll find the right balance. You can consider recording audio and/or video of the screen and the whole test 📹 (with the tester’s consent, of course), but you’ll need time to analyze the hours of multimedia material generated.

Thank your participant

Once the “practical work” is over, the post-test questionnaire is filled in and the final exchange about the participant’s feelings is done, it is time to bring the session to a close. ⌛

At this point, the “incentive” question often pops up: should I pay him/her, offer a gift or something else? Well, it’s your decision. If you have the means to pay your testers, you are very lucky. But make sure your testers haven’t come only out of interest, and that their remuneration has no influence on their answers. If you can’t pay them, perhaps you have a small “gift” budget that would please them? 🎁 Here, the danger is falling into the trap of useless goodies that create waste. So it’s time to be creative and come up with bespoke, thoughtful attentions!

Gifts definitely make people happy, but pay attention to branded goodies: most of the time they are useless, yet they always create waste 🙂

Apart from that, no worries, your thanks and your eternal gratefulness will probably be enough for your volunteers, who answered the call because they are kind before anything else. 💝

How to analyze the results of a usability test?

I don’t think there is any universal way to process the results of your usability testing. It depends on the complexity of the tasks to achieve, on their identified success criteria and on the number of testers involved. 🤔 I guess it’s difficult and risky to adopt a statistical approach on a small 5-person sample…

With an approach that is more qualitative than quantitative, which I personally feel more comfortable with, here is what I can advise you to implement.

First, list all the feedback, without exception. Associate each piece of feedback with its seriousness and number of occurrences. You will then be able to sort them by processing priority. In project management, there is an interesting concept called risk evaluation. My courses are far behind me, but I remember the formula R = P × S, where R represents the risk criticality, P its probability and S its seriousness. This formula applies quite well to prioritizing optimizations. After all, it sounds like common sense. 🙂
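As a small sketch of this prioritization, assuming P is the share of testers affected and S a 1-to-5 seriousness scale (the sample feedback below is invented for illustration):

```python
feedback = [
    # (description, P: share of testers affected, S: seriousness from 1 to 5)
    ("Did not find the add-to-cart button", 4 / 5, 5),
    ("Hesitated over the shipping options", 2 / 5, 3),
    ("Disliked the footer color",           1 / 5, 1),
]

# Criticality R = P x S; highest first = what to fix first
prioritized = sorted(feedback, key=lambda f: f[1] * f[2], reverse=True)
for description, p, s in prioritized:
    print(f"R = {p * s:.1f}  {description}")
```

The exact scales don’t matter much; what matters is that frequent, serious friction rises to the top of the fix list.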

Don’t forget to link the gathered feedback to testers’ qualified profiles, in order to nuance some results if needed. Talk it over with your partner to make sure you share the same vision. 👥

Working with a partner is always a good idea to avoid interpretation mistakes

How to share the conclusions of usability testing?

In the end, communicating results is a key point of the usability testing approach. 📣 Why invest all this energy if the final data is not shared with the whole team to turn it into advantageous actions?

Results can take the shape of either qualitative or quantitative feedback. Presenting them to the team allows you to involve all your colleagues in the optimization process, to step back and think about the product together, and to raise the team’s awareness of user empathy. 🧗‍

In concrete terms, at this point you have in hand a list of feedback sorted by processing priority.

  1. Start by sharing the most positive feedback. I’m sure it is plentiful and, most importantly, it matters and has to be shared.
  2. Then communicate the sticking points, the ones you must solve for the project to be viable.
  3. Then share the less critical feedback: the areas for improvement that you must remember and monitor, but that don’t directly threaten the project.
  4. To finish, you can mention neutral feedback if you wish: the observations that don’t call for action because they are too specific or not necessarily negative…

To share all of this, color coding can be useful. 📗 Green, 📙 orange, 📕 red: it’s easy to understand and, more importantly, it’s more pleasant to see that red feedback lives alongside green feedback.

Sharing results should not mean sermonizing: the team is here to discover the results, take them into account and, in the end, find solutions together. This is where the scientific side of testing matters: don’t forget to explain the strict testing process and to illustrate feedback with screenshots, quotes or statistics… The team needs to understand that this is the users’ feedback, not yours.

Test results are factual, almost scientific. They allow the team to agree on non-subjective actions.

Usability testing … to be continued?

After your usability test, I can’t recommend strongly enough that you optimize your solution… and test it again! 🎉

Get ready for a second usability test!

You get it: you must learn and respect good practices in order to conduct a relevant, efficient and enjoyable usability test. In my opinion, however, usability testing is above all a matter of empirical skills and reflexes. So don’t hesitate any longer and go for it! 🏃

If you’re interested in usability testing, I warmly recommend you pursue the Interaction Design Foundation’s “Conducting Usability Testing” certificate. It’s a high-quality way to educate yourself on the topic!

This organization offers professional and complete courses about numerous topics of UX design.

Don’t hesitate to clap 👏, share and comment on this article to enrich it with your advice and feedback! I can’t wait to hear back from you and to know whether this article has helped you out.

Poool is a tech startup that aims at reshaping the way people access and finance content. Our products help publishers implement the best strategies for their audience. Want to talk about engagement, monetization and subscription with our amazing team? Let’s do it 😊 Learn more