UX design: how to conduct usability testing

Yuna Orsini · Published in Poool Stories · 16 min read · Aug 27, 2018
Cover image by Daniela Peñaranda

First things first: even if we often read about “user testing”, I think we’d better talk about “usability testing” 🤓 — obviously we’re not testing the user, but our solution!

There are already a lot of resources about usability testing; this is why, today, I’ll be talking about it in a pragmatic way, giving feedback and advice to anyone willing to dive into this fascinating activity. 🔧

Sometimes our solutions need some adjustments 😉

At which stage of the project should I plan usability testing?

Usability testing, probably the most common user research method, applies to projects in which a “solution” has already been given tangible form, e.g. through zonings, wireframes and, if the solution is no longer a draft, even mockups and static or interactive prototypes, whether throwaway or iterative. After all, the main goal of usability testing is to measure how easy this material is to use. 💯

So, to me, usability testing applies at a key-stage of the project: when one can start seeing or using a concrete material. 🚀 If your project is not at this stage, there are many other user research methods that should be used before implementing a solution, for example: one-to-one interviews, focus groups, surveys, shadowing …

How many testers do I need in order to make my usability testing reliable?

It might seem that meeting as many people as possible then putting them in touch with our product would provide the best feedback about what works well and what does not …

In fact, it’s important to note that usability testing takes a lot of time to plan, run and analyze. So, conducting dozens of usability tests if you’re not able to spend time analyzing their results is a complete waste of time! 🙅‍ If this is the case and if your goal is to get quantitative results about your product, other user research methods exist like the surveys mentioned above.

In addition, after several usability tests, you’ll notice that you’ll start getting the same feedback again and again. 😱 Basically, the more you conduct usability tests, the less new feedback you’ll get. It’s kind of logical, isn’t it? A 5-tester sample is said to be the optimal size: with this, you’ll gather 90% of the most important insights and you won’t waste time dissecting the results. 🔥

Tom Landauer and Jakob Nielsen showed that the number of usability problems found in a usability test with n users is N(1 − (1 − L)^n), where N is the total number of usability problems in the design and L is the proportion of usability problems discovered while testing a single user. The typical value of L is 31%, averaged across a large number of projects they studied.
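To make the diminishing returns concrete, here is a minimal Python sketch of that formula, using the average L = 31% quoted above (the function name is my own):

```python
def share_of_problems_found(n, l=0.31):
    """Share of usability problems found by n testers (Nielsen & Landauer).

    The full formula is N * (1 - (1 - L)**n), where N is the total number
    of problems in the design; dividing by N gives the proportion found.
    """
    return 1 - (1 - l) ** n

for n in range(1, 7):
    print(f"{n} tester(s): {share_of_problems_found(n):.0%} of problems found")
```

With L = 31%, a single tester already surfaces 31% of the problems, and five testers surface roughly 84%, which is why each extra session past that point yields mostly repeated feedback.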

And when your 5 testers’ feedback has been analyzed and your solution has hopefully been optimized, you can (should?) iterate!

You can’t recruit 5 testers? You can be sure that 1 tester is always more valuable than 0 testers! 👍 This quote from Jakob Nielsen sums it up nicely:

Zero users give zero insights — Nielsen

The most striking truth of the curve is that zero users give zero insights. As soon as you collect data from a single test user, (…) you have already learned almost a third of all there is to know about the usability of the design. 💪 The difference between zero and even a little bit of data is astounding.

Finally, rather than asking yourself how many testers you should recruit, I think that it’s more beneficial to put your energy into the reliability of your sample: are your testers representative of your user target? 🤔

How do I recruit participants for my usability test?

You’ve probably heard of companies that recruit users for you. It’s certainly very convenient, but it’s also very expensive: 💵 you pay a fee for each user recruited by a specialized agency. Personally, I’ve never made use of this kind of service. Don’t hesitate to comment on this article to share your experience with me/us.

When you’re in a “startup mode”, you have to go fast and, more importantly, cheap. What I’d advise is that you draw on your own network of acquaintances 👋 and make some calls to:

  • A colleague that has not taken part in the design of this product
  • Friends and family
  • Any acquaintance whose profile corresponds to your target audience

As a company, you’ve most likely got access to the goldmine that is your social media presence. 🌎 Don’t hesitate to call on your Facebook, Twitter, LinkedIn or Instagram “communities”! They will probably come forward to help you because they’re convinced, kind or even curious. ☝️

At Poool, I’ve branded this testing experience and created “Poool’s Testers Group”. I invite volunteers to fill in a form and to join the group. Each new volunteer receives a “welcome & thank you” email that lets them know that we’ll get in touch as soon as a new test that fits with their profile is available. ✉️ And it works!

One of our “appeals to testers” on Poool’s Facebook page
Welcome page of our subscription Typeform

Don’t forget the European General Data Protection Regulation (GDPR), which applies to any personal data that you collect and process from your testers… ☑️ Collecting their consent in advance is mandatory. To do so, I use a “Legal” type question on Typeform.

”Legal” type question on Typeform

If you have to recruit users from scratch, without using your network, you can think about “guerrilla testing”, also called “hallway testing”. The aim is to ask people you’ve just bumped into in the street to test your solution. 🏢 Depending on the context of your product, the location and the incentives you’ve planned, it can work and it can allow you to reach new audiences.

Finally, there are online tools for planning and observing remote tests. For digital products, these tools can serve two purposes: reaching people who are unable to come to your lab or whom you can’t visit 👀, and reaching new people if the platform also offers user recruitment. ⚠️ Be careful in this last case: your test scenario has to be strong, and you have to pay attention to the results provided by such a sample, who could be described as “usability-testing pros”. In other words, their feedback could be biased.

How long does a usability test have to last?

Always remember that your testers’ time is as precious as yours: 🎁 they offer availability and involvement to help you and provide you with valuable insights… Don’t waste this resource and try not to make them wait or move for nothing.

When it comes to test duration, 60 to 90 minutes is said to be optimal. ⏰ From my point of view, 60 minutes is already a long time for a participant to stay focused until the task is completed… I always try to make the “practical work” part fit into 45 minutes, which leaves 20–30 minutes for an introduction and a conclusion.

Staying focused during a 60–90 minute test requires energy

👉 It can be interesting to ask your testers what they thought about the test duration in your post-test questionnaire. It can help you adjust it for your next test sessions.

Extract of one of my post-test questionnaires. A tester once commented this question by saying: “It was not long, but it should not last longer” 😉

How do I prepare for a usability test?

The testing material

First ask yourself about the goal you want to reach with this test. Based on this, put in place the testing material that will help you reach it. 💭 Want to choose between 2 product scenarios? Zonings are enough. Want to check whether a screen is user-friendly or not? A wireframe should be perfect for this. Want to validate the attractiveness of a graphic design? A mockup is what you need. Want to test the user flow? An interactive prototype would be ideal.

Tools like InVision help you quickly implement interactive prototypes

The test plan

Then move on to the test plan. ✍️ You‘ll need to write a whole scenario which your participants will dive into to achieve the requested tasks. In my opinion, this is the most sensitive stage, because each instruction you give must respect a precise balance: ⚖

  • giving participants enough context to guide them, without giving so much that you risk overwhelming them with information they won’t take in

💡 For example: “You are looking for new shoes to run in the forest twice a week” is enough to give some context to your participant. There is no need to invent a precise life for your participant like you would if you were writing a persona. On the contrary, the specific expectations of your tester are interesting to discover.

  • giving an instruction that is detailed enough that your participants don’t get lost, yet ensuring that you don’t guide their actions too strictly, which would introduce a big bias into the test

💡 For example, use “You decide to buy these shoes” rather than “Click on the big blue button on the right to add the shoes to the cart”.

Task wordings have to be readable and printed on cards 🔍

Number each task and print it in large type on a sturdy card. Make sure the tasks are not too numerous; I’ll come back to this later.

Observation sheets

Then, prepare your task observation sheets. 🔍 For each of your instructions, create a document that will allow you to follow the completion of the task: 📄 write down the number and the wording of the related task, the expected actions and the questions to ask. Leave free space for recording results and observations: success rate (0, 50 or 100%), time on task (this indicator is not always relevant, depending on the task), occurrence and seriousness of the observations…

Print as many copies as the number of existing tasks.
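Even though my sheets are printed on paper, their structure can be sketched as a small data structure. Here is a minimal Python illustration, assuming hypothetical field names; it is not a standard template, just the fields listed above:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ObservationSheet:
    """One sheet per task, per tester."""
    task_number: int
    task_wording: str
    expected_actions: List[str]
    questions_to_ask: List[str]
    success_rate: Optional[int] = None      # 0, 50 or 100 (%)
    time_on_task_s: Optional[float] = None  # not always a relevant metric
    observations: List[str] = field(default_factory=list)

# Prepared before the session, filled in during it
sheet = ObservationSheet(
    task_number=1,
    task_wording="You decide to buy these shoes",
    expected_actions=["open the product page", "add to cart", "check out"],
    questions_to_ask=["What did you expect after adding to the cart?"],
)
sheet.success_rate = 50
sheet.observations.append("Hesitated before finding the cart icon")
```

The point of fixing the structure in advance is that every tester’s results land in the same slots, which makes the later analysis and comparison much easier.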

Pre-test and post-test questionnaires

Also prepare your pre-test and post-test questionnaires. These quick surveys are very useful.

☝️ Pre-test questionnaires have 3 big advantages:

  1. have participants wait if you need to make a few last adjustments or finalize some details before the experiment starts
  2. make the participants relax with this first quick and easy activity that immerses them softly into the session
  3. better qualify your participants’ profile in relation to the context of the tested product (for example, at Poool, this is when we ask the testers whether they are frequent readers of the press, where they read it, if they use digital devices …)

☝️ Post-test questionnaires let you complement the observation of the practical work by recording the participants’ feelings. This is the moment to ask them what they thought of a particular process, color or feature… 💬 It’s funny to see how participants sometimes struggle with a task but then describe it as completely seamless in the post-test questionnaire, or the other way round. These interesting details allow you to nuance the seriousness of a friction point and the overall synthesis of results.

Testers really appreciate these questionnaires, and on your side they are very useful for getting a better understanding of the results.

This can also be the occasion to get the participants’ opinion about “off-test” items, like a graphical variation, other research ideas or, like I wrote above, how the test ran.

The test of the test 🙃

No, it’s not a UX joke! Always test and time your test, because — I promise — you’ll inevitably forget something. Someone from your team will kindly offer some help to test your scenario with you. 🤗

Your colleagues can help you test your test 😇

👍 Testing the test allows you to adjust its duration and possibly the number of tasks, to correct poor task wordings or unclear questionnaire items, to set up the material differently, and to remind yourself of the key moments when you must step in…

Location and testing equipment

You can do what you want but, if possible, make the most of the testers’ own environment: the place they live or work, their digital devices, etc. There, they are more confident and can carry out their tasks as they would on their own. Moreover, using their own equipment lets you uncover use cases that you may never have seen in your office. “Damn, it looks like that on old smartphones? Well, this looks different on Edge…” 😉

However, we all know that it’s not always possible, nor easy, for us, as for the testers, to run the test in their environment. If this is the case, you can invite them to come to your office or even meet in a place that you could rent for the occasion. You could even refit this space into a useful “laboratory” where you will be able to observe the experiments. 💻 In any case, you must make sure you have backup equipment to run the test no matter what.

The “equipment” topic does not only relate to computer hardware. Don’t forget to also take paper, pencils and any complementary material required for the test to run smoothly. As part of some of my tests, I printed out fake confirmation text messages onto mock phones and created fake credit cards made out of cardboards. These small details help to make the test more realistic and run smoother.

False text messages, false credit cards … these are some UX designer tricks to run realistic tests 🤓

How does a usability test take place?

Invite your participant

Once your participant has volunteered for a test session, you have to find a time slot. This will be your first interaction with them, so build a simple and warm relationship. Thank them for helping, and try not to reveal the exact topic of the test, even if you can communicate its overall purpose. For example, to avoid creating any bias, don’t say “You’re going to test our brand new mobile app for innovative networking” but rather “Your participation will help us a lot to make our next digital product better”. Reassure your participant about the test duration and encourage them not to look for any further information about your company or your products before the day of the session.

Don’t forget to thank your volunteer — but make sure to do it gently 😁

Welcome your participant

The testing experience has already begun when your participant arrives (or when you arrive at your participant’s place). Don’t make them wait and seat them somewhere comfortable where they can begin the pre-test questionnaire.

Don’t hesitate to offer a hot or cold drink and/or a snack. I once read a good piece of advice: offer a few times, or leave testers to help themselves, because most of the time they won’t accept the first invitation.

Present the test

Once the pre-test questionnaire has been filled out, I would recommend presenting the goal of the session and how the experiment is going to take place. As a helping hand, here are some sentences you can use, in any order you wish:

  • “Thanks again” 😜
  • “There will be 3 stages: the pre-test questionnaire you’ve already filled in, the experiment, then the post-test questionnaire”
  • “It will take about 60 minutes”
  • “You will complete several tasks, one after another: the idea is to see whether they are easy to achieve or not and how we can make them easier”
  • “We are not going to help you achieve these tasks, and we won’t take action, react or answer your questions during the experiment. But don’t worry about this as we’ll discuss it straight after the test!”
  • “We’re going to write down a lot of things, and that’s perfectly normal.”
  • “This is only a test, nothing’s saved, whether it be a subscription, an order, a payment, or anything else.”
  • “We’re not testing you, but our solution”
  • “Please be honest. We won’t take anything badly. Your feedback will help us anyway.”
  • “There are no right or wrong answers”

👉 These words aim to reassure the testers who, even if they don’t tell you, are probably a little bit anxious.

Observe the task achievement

I highly recommend observing the completion of the tasks with a partner. 👥 It’s a lot easier and leads to more complete results. For example, one of you can concentrate on leading the test and moving the scenario forward, while the other observes how the participant reacts, verbally and non-verbally. Working in a pair also helps you avoid interpretation mistakes when analyzing the results.

Two pieces of advice for this observation phase:

  • 🤓 preserve an observer’s neutral and external attitude: you’ll become a pro of vague but encouraging “uh-huh”s and “okay”s
  • ✍️ write everything down on your observation sheets: sticking points, hesitations, comments, movements… plus quantitative measures like success rate or time on task, even if this metric is not always relevant to your project.

From my point of view, it’s quite difficult to write everything down without degrading the fluidity of the exchange; with time, you’ll find the right balance. You can consider recording audio and/or video of the screen and the whole test 📹 (with the tester’s consent of course), but you’ll need time to analyze the hours of multimedia material generated.

Thank your participant

Once the “practical work” is over, the post-test questionnaire is filled in and the final exchange about the participant’s feelings is done, it is time to bring the session to a close. ⌛

At this point, the “incentive” question often comes to mind: should I pay them, offer them a gift, or something else? Well, it’s your decision. If you have the means to pay your testers, you are very lucky, but make sure that your testers haven’t come only for the money and that their remuneration hasn’t influenced their answers. If you can’t pay them, perhaps you have a small “gift” budget that would please them? 🎁 Here, the danger is falling into the trap of useless goodies that create waste, so it’s time to be creative and come up with bespoke, thoughtful gifts!

Gifts definitely make people happy, but pay attention to branded goodies: most of the time they are useless, and they always create waste 🙂

Apart from that, don’t worry, your thanks and your eternal gratefulness will probably be enough for your volunteers, who answered the call because they are kind before anything else. 💝

How do I analyze the results of a usability test?

I don’t think there is any universal way to process the results of your usability testing. It depends on the complexity of the tasks to complete, on their identified success criteria and also on the number of testers involved. 🤔 I’d say it’s difficult and risky to adopt a statistical approach on a small 5-person sample.

Since I personally feel more comfortable with a qualitative rather than quantitative approach, here is some advice along those lines.

First, write out all the feedback, with no exceptions, and associate each piece of feedback with its seriousness and number of occurrences. Then you will be able to sort the feedback by priority. In project management, there is an interesting concept called risk evaluation. My courses on it were a while ago but, from what I can remember, there is the formula R = P × S, where R represents the risk criticality, P its probability and S its seriousness. This formula applies quite well to prioritizing optimizations. After all, it sounds like common sense. 🙂
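That prioritization can be sketched in a few lines of Python. The feedback items below are hypothetical examples, with probability on a 0–1 scale and seriousness on a 1–5 scale (the scales are my own assumption, not a standard):

```python
# Hypothetical feedback items gathered from a test session:
# (description, probability P in 0-1, seriousness S on a 1-5 scale)
feedback = [
    ("Font size too small on mobile", 0.4, 2),
    ("Checkout button hard to find", 0.8, 5),
    ("Confusing error message on login", 0.6, 4),
]

# Criticality R = P * S; the higher R is, the sooner it should be fixed
prioritized = sorted(feedback, key=lambda item: item[1] * item[2], reverse=True)

for desc, p, s in prioritized:
    print(f"R = {p * s:.1f}  {desc}")
```

Here the hard-to-find checkout button (R = 4.0) rises to the top of the list, while the cosmetic font issue (R = 0.8) drops to the bottom.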

Don’t forget to link gathered feedback to testers’ qualified profiles, in order to nuance some results if needed. Have a talk with your partner to make sure you share the same vision. 👥

Working with a partner is always a good idea to avoid interpreting mistakes

How do I share the conclusions of usability testing?

In the end, communicating results is a key point of the usability testing approach. 📣 Why invest all this energy if the final data is not shared with the whole team to turn it into advantageous actions?

Results can take the shape of either qualitative or quantitative feedback. Presenting them to the team allows you to involve your colleagues in the optimization process, to step back and think together about the product, and to raise the team’s awareness of user empathy. 🧗‍

In concrete terms, at this point you have a list of feedback in hand, sorted by priority.

  1. Start by presenting the most positive feedback. I’m sure there is plenty of it and, most importantly, it matters. It has to be shared.
  2. Next, communicate sticking points, the ones you must solve for the project to be viable.
  3. Then share less critical feedback: the areas for improvement that you must remember and monitor, but that don’t directly threaten the project.
  4. To finish, you can announce neutral feedback if you wish, the observations that don’t call for action because they are too specific or not necessarily negative …

To share all of this, color coding can be useful. 📗 Green, 📙 orange, 📕 red: it’s easy to understand and, more importantly, it’s more pleasant to see red feedback living alongside green feedback.

Sharing results should not mean sermonizing. The team is there to discover the results, take them into account and, in the end, find solutions, all together. This is where the scientific side of the testing method matters: don’t forget to explain the strict testing process and to illustrate feedback with screenshots, quotes or statistics… The team needs to understand that this is the users’ feedback, not yours.

Test results are factual, almost scientific. They allow the team to agree on non-subjective actions.

Usability testing … to be continued?

After your usability test, I can’t recommend strongly enough that you optimize your solution… and test it again! 🎉

Watch out for a second usability test!

You get it: you must learn and respect good practices in order to conduct a relevant, efficient and enjoyable usability test. In my opinion, though, usability testing is above all about empirical skills and reflexes, so don’t hesitate any longer and go for it! 🏃

If you’re interested in usability testing, I highly recommend the Interaction Design Foundation’s certificate course “Conducting Usability Testing”. It takes a high-quality approach to usability testing and will help you educate yourself on the subject!

This organization offers professional and complete courses about numerous topics of UX design.

Don’t hesitate to clap 👏, share and comment on this article to enrich it with your advice and feedback! I can’t wait to hear from you and to know whether this article has helped you out.

Poool is a tech startup that aims at reshaping the way people access and finance content. Our products help publishers implement the best strategies for their audience. Want to talk about engagement, monetization and subscription with our amazing team? Let’s do it 😊 Learn more



Freelance #UX designer · User research, strategy and service design · A half of http://team-ux.com · #DesignThinking advocate