Sprint 2: Usability Testing

Usability testing on a coffee maker

For this week’s project, my partner and I were tasked with conducting a usability test on a coffee maker of our choice. Our test needed to measure the extent to which our coffee maker could be used to complete specific tasks with effectiveness, efficiency, and satisfaction.

First Steps

We began to design our test by first ideating the specific features that coffee makers could have.

White board where potential coffee maker features were ideated

This was followed by brainstorming the possible usability issues coffee makers could have, and, as always, the potential users.

Ideas for potential usability issues. Some examples include “poor coffee taste,” “hard to change setting,” “dispenser gets jammed,” and so on.

At the end of the ideation process, my partner and I decided that we were going to perform our usability test on a newly installed Nescafé Alegria coffee maker at a UW sorority. We chose this coffee maker because our users, three girls living in the sorority, had little to no prior experience using this device.

Designing the Usability Test

Based on the ideated usability issues of coffee makers, my partner and I came up with three different tasks that our users would perform with our coffee maker.

Our coffee maker on which the usability test was conducted

We had to keep the limitations of the features of our coffee maker in mind when designing these tasks. Because our coffee maker was mostly automated, most of the tasks had to be interface related. These tasks were:

  1. Select a small sized drink
  2. Change drink size to medium and select a black coffee
  3. Reset order and brew a large hot chocolate

Additionally, we had to figure out what type of data we would collect. Since we were measuring our coffee maker for effectiveness, efficiency, and user satisfaction, we decided to record the number of errors the users made while completing each task (effectiveness), the time it took to complete each task (efficiency), and a self-reported difficulty rating on a scale of 1–5 (satisfaction).

Finally, we wrote a script containing task instructions that the moderator would read to our users and prepared a data table for collecting data during the test.
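A data table like ours can be sketched as a simple structure in code. The sketch below is purely illustrative: the participant labels, task numbers, and measurements are placeholders, not our actual results, and the `summarize` helper is a hypothetical name.

```python
from statistics import mean

# One record per participant/task pair: completion time in seconds,
# self-reported difficulty (1 = easy, 5 = hard), and error count.
# All values below are placeholders, not measured data.
records = [
    {"user": "U1", "task": 1, "seconds": 30, "difficulty": 2, "errors": 0},
    {"user": "U2", "task": 1, "seconds": 45, "difficulty": 3, "errors": 1},
    {"user": "U3", "task": 1, "seconds": 38, "difficulty": 2, "errors": 0},
]

def summarize(records, task):
    """Average time and difficulty, plus total errors, for one task."""
    rows = [r for r in records if r["task"] == task]
    return {
        "mean_seconds": mean(r["seconds"] for r in rows),
        "mean_difficulty": mean(r["difficulty"] for r in rows),
        "total_errors": sum(r["errors"] for r in rows),
    }

print(summarize(records, 1))
```

Keeping one row per participant/task pair makes it easy to compare the same task across users or the same user across tasks.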

The Results

The results of our usability test are shown in the following presentation:

Room for Improvement:

Though our usability test yielded fairly consistent results, there are aspects of our test procedure that I would change if I were conducting this usability test in the future. Because we gave the users only verbal instructions, the users thought the experiment was casual (perhaps even silly at times), engaged in conversation with us, and were sometimes distracted while performing the tasks. To resolve this issue, I would try to establish a more professional environment by giving the users written instructions for all three tasks and by making sure that no one other than the testers and users is in the room. Additionally, I would record the usability test on video, because it’s difficult to take extensive notes on a user’s actions when they are naturally moving through the tasks at a fast pace.

Final Thoughts:

Overall, I really enjoyed this project because it allowed me to experience what it would be like to conduct a usability test in real life. Having to think of potential tasks and types of data I could collect to test for the efficiency, user satisfaction, and effectiveness of a coffee maker showed me that we take simple interactions between users and everyday appliances for granted. When I conducted the usability test I realized that these interactions can actually be confusing and even frustrating if the design of the appliance is poor.
