HCDE 210 Usability Test

Trying out a Usability Test:

This week, I worked in a team of three to plan and execute a usability test for a common household item: a microwave.

usability (noun): the effectiveness, efficiency, and satisfaction with which users achieve specified tasks in particular environments

Prior to actually running the usability tests, we had to plan them! We brainstormed user populations, several microwave functions, data types we could collect, and tasks that users could complete. It's important to consider these factors before executing the usability tests so that we can be sure we're testing integral parts of the microwave. If we don't, we might end up overlooking usability issues that could cause problems for future users. Also, different user groups might respond to a usability test differently. If we asked a group of 5-year-olds and a group of 30-year-olds to use a microwave, their experience levels would significantly impact the results.

These are all things that can affect how someone uses a product. If users aren’t satisfied with certain aspects of the product, they won’t want to use it anymore and will likely find a replacement.

Then, we ran the usability test with 3 different participants. Although most people have used a microwave before, there is slight variation between brands and models, which can produce interesting results. The specific microwave we used was the General Electric Series 2.2.

The General Electric Series 2.2 Microwave! It’s important to be specific about what microwave we’re using because the data we collect can’t be extrapolated to other microwaves, which might have different functionality.

Our test assigned the user 3 tasks: set the clock on the microwave to 3:34 pm, turn on the microwave for 1 minute and 20 seconds, and clean the turning plate. We felt that these tasks allowed us to observe how users use the product across a variety of functions that a typical user might perform. The data we chose to record for each task was duration (time), satisfaction with the process (Y/N), and difficulty of the task (1–5). These different data types helped us understand where the user had trouble, and whether the user was still satisfied after running into a problem. The users we chose were college students because those were the people we had easiest access to. We took turns being the moderator, notetaker, and timer for the usability tests that we ran.
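The per-task measurements above (duration, a yes/no satisfaction check, and a 1–5 difficulty rating) can be sketched as a small record structure. This is just an illustrative sketch, not something we actually wrote for the test; the class and field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TaskResult:
    """One participant's result for one task (hypothetical structure)."""
    task: str                # e.g. "Set the clock to 3:34 pm"
    duration_seconds: float  # how long the participant took
    satisfied: bool          # satisfaction with the process (Y/N)
    difficulty: int          # self-reported difficulty, 1 (easy) to 5 (hard)

    def __post_init__(self):
        # Guard the rating scale so a typo during note-taking is caught early.
        if not 1 <= self.difficulty <= 5:
            raise ValueError("difficulty must be between 1 and 5")


# Example record for the clock-setting task.
result = TaskResult("Set the clock to 3:34 pm", 42.0, True, 2)
```

Keeping all three data types in one record makes it easy to see, per task, whether a long duration actually coincided with low satisfaction or high difficulty.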

The questions that each piece of data we collected was meant to answer. It's important to have more than one type of data so that it's easier to understand how the user feels about the product.

After that, we summarized our results in a presentation:


This was a good way to review the data, reflect on our users' experiences, and draw conclusions. There's little point in collecting data if it isn't going to be analyzed. Analysis allows us to understand which design aspects of the microwave need to be adjusted.
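The kind of summary we did for the presentation, averaging each task's duration and difficulty across participants, could be sketched like this. The numbers below are made up for illustration; they are not our actual results.

```python
from statistics import mean

# {task: [(duration_seconds, difficulty), ...]} — one tuple per participant.
# All values here are hypothetical placeholders.
observations = {
    "Set the clock": [(95, 4), (120, 5), (80, 3)],
    "Run for 1:20": [(20, 1), (25, 2), (18, 1)],
    "Clean the plate": [(40, 2), (55, 2), (35, 1)],
}

for task, results in observations.items():
    avg_time = mean(duration for duration, _ in results)
    avg_difficulty = mean(rating for _, rating in results)
    print(f"{task}: avg {avg_time:.0f}s, difficulty {avg_difficulty:.1f}/5")
```

Even a simple per-task average like this makes it obvious which task (here, the clock) gave participants the most trouble.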

Hard at work creating a presentation on our microwave usability test that's easy to understand.

Reflecting on my Usability Test Experience:

We ran into a couple of problems as we tried to run our usability test. For one, we had asked a couple of our friends to participate, but hadn't realized that their availability was variable, so we ended up having to ask other friends to participate on short notice. In the future, we would probably set a specific time for all participants to show up so that we could run the tests on a more timely schedule. Also, we ended up adjusting the test script between each usability test because we realized which wording worked and which didn't. This made me think of a couple of questions related to the role of moderator. If a moderator finds that instructions were unclear or confusing to the user, do they stray from their script and damage the controlled environment of the test, or do they just not use that data? How important is keeping everything the same between each usability test? Does saying something one way have a significant influence on how a user uses a product? If so, how do the people who run the tests know which data to trust?

Looking Toward the Future:

I could see using this strategy in almost every single aspect of design. Because most products have some sort of human-interaction element to them, they must all be tested by people to make sure that they’re usable. There’s really no point to making a product that can’t be used successfully. If I ever create an app in the future, I will definitely have people test it out to make sure they are satisfied with how it functions. But, there’s also a time and place for usability tests. I don’t think they should be used for very early-stage prototypes. When you’re still fleshing out your thoughts, running a usability test on an unfinished idea might cause people to focus less on the function and more on the design of the product. In the end, an important distinction to make is that we are never testing people or their abilities. We test the usability of things. At all points of design, we can (and should) assume that the user may use our product incorrectly (by our standards at least) and we must take that into account in order to further improve the product.

Olga Andreeva