Usability Testing

Planning and executing a usability test of a microwave

On January 18th, 2017, a team and I set out to test the usability of a microwave, with the goal of assessing the effectiveness, efficiency, and satisfaction with which specified users could perform specified tasks with it.

Ideation of relevant microwave features

To design our usability test, we began by brainstorming both specific features of a microwave and the usability issues that might arise with each one. After careful contemplation, we settled on three tasks a user might perform with the appliance: setting the clock (to 2:37 PM), removing and reinserting the tray, and microwaving an item (a potato) at a specific power level (70%) for a specific amount of time (1 minute, to be exact). We felt all three tasks were relevant to most users, even though many may be unsure how to perform them. Moreover, to meaningfully gauge the effectiveness, efficiency, and satisfaction of each task, we resolved to focus on the time taken to complete the task, the resulting user satisfaction rating, and the approach each participant used.

Designing a usability test with the help of peer feedback

After consulting with peers for feedback, we drafted a moderator script to ensure our users would know what we expected of them and feel completely comfortable throughout. We then recruited three anonymous participants; each was a college student (between the ages of 16 and 22) who prepared their own meals. We chose these particular users because they all use microwaves frequently and would ultimately benefit from the improvement of one.

Finally, we were ready to execute our test. Because we had constructed a meticulous plan for how the test would run, the process was fairly straightforward: Moderate. Observe. Interpret. Repeat. Once our users had each completed the three specified tasks, all that was left was to collect our data in the following presentation.

https://www.youtube.com/watch?v=OyU7xNmAg0I

Creative problem solving — navigating the setbacks of our test

Overall, planning and executing a usability test was a smooth process for our team. That being said, we encountered several concerns along the way, especially as three usability-testing newcomers. Initially, our team struggled to agree on which tasks we should ask each user to perform. Only after carefully considering questions such as “What tasks will best reveal the usability quirks of this microwave?” and “What resources will our participants have access to?” were we able to reach a consensus. The process of interacting with our users wasn’t without its obstacles, either. Beyond our efforts to keep users informed and at ease, we had to quickly learn to make our (particularly shy) participants comfortable enough to voice their thought processes. At the same time, to avoid skewing our time measurements with the extra commentary, we realized (perhaps a little too late) that it would be best to address reflections and emotions primarily after each task had been completed.

The applications and limitations of our usability testing process

Based on the successes of our usability test, and despite the setbacks, I imagine I will use this process quite consistently when assessing products in the future. Though brainstorming user tasks was a trickier and more time-consuming process than expected, the practice proved its effectiveness during our actual test, as we were able to collect relevant and revealing data quite seamlessly. Likewise, taking the time to thoroughly understand each participant’s thoughts and reflections can only improve a product’s prospects, especially since designers are often biased about their own brainchild.

Usability testing, overall, seems like an essential phase in creating any successful product, be it a dating app, calculator, sports car, or bathroom stall. Nevertheless, there may be some limitations to the application of usability testing where little to no human interaction is involved. For a carpet or ceiling fixture, for example, designers may not find much use for usability tests. As exemplified by the usability test of fruit, however, this process is almost universally relevant.
