Just as vaccines need to be tested before mass distribution, we'd better test our design before deploying it for mass usage. What better way to do this than to conduct some usability testing?
What is Usability Testing (UT)?
UT is the practice of testing whether your application is easy to use for a group of representative users. This is done by observing users complete a series of tasks known as scenarios. It can also be run on multiple designs to compare alternatives.
Why bother conducting UT? Because it can expose design flaws you didn't even know were there. Observing how users complete, or even fail, your scenarios can give you valuable insight into your design. Real users, who have no prior knowledge of how your app works, can reveal overlooked flaws that make the flow of the UI/UX unclear.
Now that I've hopefully convinced you to have a go at Usability Testing, here are the simple steps:
1. Define What to Test
Define which parts or aspects of your design you want to test. Is it the navigation flow? The correctness of which button to press? With a clear plan, the UT will flow smoothly.
2. Define Scenarios
Designing basic scenarios is quite simple once you know which part of your design you want to test (step 1). Here are some points to look out for when designing your scenarios:
- Determine and prioritise the core functionality (this mostly depends on your application’s features). After doing that, turn those functionalities into tasks, e.g., searching for a location in a map application.
- Clearly define the scenario of each task: what should the user already know? (It could even be absolutely nothing.) What should the user achieve? Describe the user's current situation. These are usually known as the assumptions of a scenario.
- Define a realistic goal and success indicator. What is the stopping point of a scenario? What makes it a success or a failure? (e.g., the user finds or does not find a certain page).
- Prepare questions to ask the user during and after the scenarios. These questions should aim to capture their opinions, thoughts, and feelings, allowing better analysis of what needs to be improved (e.g., How does the layout of this button feel?).
This step is best shown with an example implementation later on.
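To make the checklist above concrete, here is a minimal sketch of a scenario as a data structure. The class and field names are my own for illustration; they are not from any testing library or from our project's code:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One usability-test scenario, following the points above."""
    task: str                # a core functionality turned into a task
    assumptions: list        # what the user is told or already knows
    goal: str                # the stopping point of the scenario
    success_indicator: str   # what counts as success vs. failure
    questions: list = field(default_factory=list)  # asked during/after

# A hypothetical example based on the map-search task mentioned earlier.
search_scenario = Scenario(
    task="Search for a location in the map",
    assumptions=["You are on the home screen", "You know the location's name"],
    goal="The location's page is open on screen",
    success_indicator="User reaches the location page without giving up",
    questions=["How did finding the search bar feel?"],
)
print(search_scenario.task)
```

Writing each scenario down in one place like this makes it easy to check that no assumption, goal, or success indicator has been forgotten before the test begins.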
3. Choose Testers (Users)
You should know who your target group is. What kind of user are you looking for? Age? Occupational background? Gender? Education level? Living area? Or a combination of multiple qualifications?
From this source, this graph shows that 5 testers are more than sufficient to conduct an insightful usability test:
4. Conduct (and Moderate) Tests!
Finally, after preparing your scenarios, questions and testers, the test is ready to begin! This is where you should be most observant, watching for any issues such as user mistakes and misdirections. Here are some points to look out for when moderating the test:
- Observe and document
Note when and where users make mistakes. Pay attention to their body language and facial expressions to further analyse confusion or confidence. Document all of this information using any of several methods: notes, voice recordings, video, or screen recordings.
- Minimal to no guidance
It is also important not to guide the users as they go through the scenarios, because we want to find out whether our app is well-designed enough for real users to use without any guidance. If possible, it wouldn't hurt to have the users say out loud what they are doing and why as they go through the scenarios.
After the scenarios are completed, it is good to ask the users for their overall opinion and see if they have any criticisms or suggestions. This way, you gain better insight into what is already well-designed and which parts need a potential redesign.
5. Analyse the Results
Nothing matters if no analysis is done to improve your design. Here are some things you could assess from your results:
- success / failure rates
- number of clicks or tries the user took
- moments of confidence / confusion
- user responses, especially criticisms & suggestions
- patterns (of mistakes and confusion) among the users
Measure these, take them into consideration, and discuss what needs to be improved. Was the fill-form page's location obvious to the users? Were they confused, or did they even fail to find the page? How many users took too much time and/or too many tries to find it? Focus on the parts where the most confusion and mistakes happened.
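The metrics listed above can be aggregated with a few lines of code. This is a minimal sketch; the field names and sample numbers are made up for illustration, not real test data:

```python
# Each record is one tester's result for a single scenario
# (illustrative values only, not from an actual test).
results = [
    {"user": "A", "success": True,  "tries": 1},
    {"user": "B", "success": True,  "tries": 3},
    {"user": "C", "success": False, "tries": 5},
]

# Success rate: fraction of testers who completed the scenario.
success_rate = sum(r["success"] for r in results) / len(results)

# Average number of tries: high values hint at confusion.
avg_tries = sum(r["tries"] for r in results) / len(results)

print(f"success rate: {success_rate:.0%}, average tries: {avg_tries:.1f}")
# → success rate: 67%, average tries: 3.0
```

Even a small table like this makes patterns visible: a low success rate or a high try count on one scenario points at where the redesign effort should go.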
After the analysis, you can write a report stating the results of the test: final analysis notes, success statuses, conclusions and suggestions all go here.
Finally, as with all kinds of research, there is no real one-size-fits-all set of tips and tricks. There are bound to be differences in circumstances, but hopefully this article gives you a better idea of what is done in Usability Testing.
Implementation in my project
Planning and Designing Scenario
Since the core functionalities of our bisaGo application so far are logging in, searching for a location, opening a facility's post, adding a post and adding a location, our scenarios have the users do exactly that. Those core functionalities are turned into tasks, and from there we define a scenario for each task. Here are a couple of examples:
The scenarios are simple and come with the proper assumptions needed to complete each task. They are also based on the core functionalities. The goals and success indicators are realistic, with defined stopping points that make each scenario's purpose clear.
Next, some feedback questions are prepared for after the scenarios are completed. These questions help gain further insight into the testers' thoughts and experience while using the app:
When choosing the users, it would be best to find testers who are people with disabilities, since they are the application's main target group. However, due to the current situation, our PO has granted us alternatives. We stuck to the recommended number of 5 testers. For my part, I conducted the UT for 3 of the 5 users. (Hence, the following UT report is based on those 3 first, since not everyone has posted their results yet.)
Conducting and Moderating the test
For this, I thought it would be best to do a live test instead of one over Zoom. So I made sure it was someone I trusted who had tested negative for our 'beloved' virus. This way, I got to observe the user's reactions, confusions and decisions better.
As the user was going through the scenarios, I documented the test progress by taking notes:
Here is an example of a scenario completion. It clearly shows that every mistake, even the user making a typo, is documented. The success indicator makes it easy to determine the scenario's outcome, and since the user completed the scenario with ease, the notes are minimal. I also asked the user to narrate as he completed the scenario, and he only expressed concern when no results were shown for his typo'd input. But he quickly realised that mistake and eventually found the location with ease.
Then I asked the prepared feedback questions:
It turns out most of the answers were similar, stating that the flow and design of the application were already clear and concise, but that it lacked attractiveness. Another tester said that the login and register page is a little too hidden.
Next, I formed a report based on the 3 users I interviewed and tested:
Evaluation reference: Rubin, J. & Chisnell, D. (2009). Handbook of Usability Testing. Retrieved from http://ccftp.scu.edu.cn:8090/Download/efa2417b-08ba-438a-b814-92db3dde0eb6.pdf (accessed January 10, 2021)
What to keep up
- Moderating the test: My attitude while moderating the tests remained neutral. In the book, this means I stayed impartial toward the designs in the scenarios, whether I approve or disapprove of them. I also did not guide the testers at all during the test, which the book also notes is important. Even when mistakes were made, the testers had to find their own way and received no hints or guidance from the moderator.
- Test questions: The test questions and the post-test “questionnaire” (as the textbook reference calls it) were clear, relevant and easy to fill out, since the testers had almost no trouble answering them. Their answers also gave valuable insight, which was useful in determining how well-designed our app was. Below is one of the responses to the post-test questionnaire:
- The test itself: The tests were prepared with realistic scenarios along with “bogus data” (as the textbook reference calls it) to make the test closer to a real-world situation. As seen below, one of our scenarios uses bogus data, namely the Margo City location, and some bogus facility data is also included for the tester to tap and view:
What can be improved and how
- Finding the testers or participants: Our application aims to help people with disabilities. With more time, we could have searched for such testers. Additionally, those testers could have had various disabilities, covering the 4 types addressed in our application: physical, intellectual, mental and sensory disabilities.
- Moderating the test: The tests could have been recorded to provide a better way to review the scenarios afterwards, whether through video, audio, or screen recordings of how the testers went through the scenarios. Additionally, in the first 2 of the 3 tests I moderated, I did not ask the testers to “think aloud” as the textbook recommends. This can be improved by asking future testers to think aloud as they go through the scenarios.
- Debriefing the testers: In the textbook, this is a further step after having users fill out the post-test questionnaires. I did not know about it beforehand and hence did not carry it out. As the book recommends, this step gains further insight, either deepening the insights already obtained or surfacing new, unseen ones. It can be done by simply reviewing the post-test questionnaire with the participants, going back over any notable mistakes, and giving them an additional chance to express any thoughts about the whole test procedure.
Thanks for reading!