HCDE 451: Wizard of Oz Prototype

Description of Prototype

Our group decided to design a motion control system for Netflix. We chose to focus the prototype on the Netflix media player, since simulating its playback controls could easily fool the participant. To stage the prototype, we reserved a booth at the Research Commons in UW's Allen Library. The booth had couches that helped simulate a home environment and a T.V. that could connect to a laptop and display its screen. We placed a Wii motion sensor on top of the T.V. to pose as a real motion sensor. A laptop was connected to the T.V. with its display not mirrored, so that the T.V. showed a movie on Netflix while the laptop itself displayed a Word document of notes. To orchestrate the evaluation session, we had the participant sit in front of the T.V. screen. Our operator sat beside the participant pretending to take notes, but in actuality used keyboard shortcuts to control the Netflix media player while the participant performed gestures. A moderator talked with the participant and gave them a set of tasks, while a scribe documented notes and footage.
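Since the wizard drove the player entirely with keyboard shortcuts, the control scheme amounts to a mapping from gestures to key presses. A minimal sketch of that mapping is below; the gesture names are hypothetical labels of our own, and the keys reflect the Netflix web player's standard shortcuts (e.g. Space toggles play/pause, the arrow keys seek).

```python
# Hypothetical gesture-to-shortcut table for the wizard. The gesture
# names are our own labels, not part of any real gesture recognizer;
# the keys are the Netflix web player's standard shortcuts.
GESTURE_TO_SHORTCUT = {
    "close_palm": "space",         # toggle play/pause
    "swipe_right": "right_arrow",  # fast forward 10 seconds
    "swipe_left": "left_arrow",    # rewind 10 seconds
    "raise_hand": "up_arrow",      # volume up
    "lower_hand": "down_arrow",    # volume down
    "spread_hands": "f",           # enter fullscreen
}

def shortcut_for(gesture):
    """Return the key the wizard should press for a gesture, or None
    if the gesture is unrecognized (the wizard simply does nothing)."""
    return GESTURE_TO_SHORTCUT.get(gesture)
```

Keeping the table explicit like this also doubles as a cheat sheet the wizard can rehearse from during pilot runs.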

Booth where the test was conducted
Left: Wii Motion Sensor, Middle: T.V. with sensor at the top, Right: Wizard pretending to take notes on a laptop


In terms of what worked well, I feel the design of the prototype succeeded. Projecting Netflix onscreen while controlling it with keyboard shortcuts made the system feel very responsive whenever the participant performed a gesture. Props like the Wii motion sensor made the setup far more believable. With all of these design factors in place, our participant truly believed it was a functioning prototype. We also prepared a script that thoroughly outlined our introduction, tasks, and post-interview questions. Preparing the script and piloting it numerous times helped us run the test smoothly and professionally, which further boosted its believability.


In terms of what needed improvement, we felt we should account for continuous gestures. There were times when the participant performed one gesture and quickly moved to another, so practicing how to handle those sequences would help. We should also create stricter tasks for the fast-forward and rewind operations. During the test, the participant swept their hand quickly across, but the media player could only fast forward and rewind at one speed; given how slow those operations are, we need to figure out how to make the corresponding gestures feel more believable. Finally, we thought we should consider what happens when two people gesture at once, or when the moderator gestures alongside the participant. Handling those cases would also strengthen the believability of the system.
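One way to handle rapid back-to-back gestures (a sketch of an idea, not something we implemented; our wizard responded manually) is a simple cooldown: a gesture arriving too soon after the last accepted one is ignored, so a flurry of motions maps to a single action instead of a backlog.

```python
import time

class GestureDebouncer:
    """Drop gestures that arrive within `cooldown` seconds of the last
    accepted one, so rapid back-to-back gestures trigger one action.
    Hypothetical helper; the class name and API are our own."""

    def __init__(self, cooldown=0.5, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock  # injectable clock, handy for testing
        self._last_accepted = None

    def accept(self, gesture):
        """Return True if this gesture should trigger an action."""
        now = self.clock()
        if self._last_accepted is not None and now - self._last_accepted < self.cooldown:
            return False  # too soon after the previous accepted gesture
        self._last_accepted = now
        return True
```

A real (or wizard-assisted) system could tune the cooldown per operation, e.g. a longer window for fast-forward swipes than for pause.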

In terms of what we concluded about the effectiveness of the design, we found that, for the most part, the participant really believed it was a real working system. They were amazed by its responsiveness and mentioned how embarrassed they were when they found out it was fake. There were smaller issues with the effectiveness of the design, however. For example, the participant made subtle variations, such as opening their palm outward to pause when the gesture actually required closing it; we responded with a pause anyway. The participant also sometimes tried gestures faster than we could respond, which we excused as system delays. So even though the participant remained convinced, the system was not fully effective in some edge cases.

