HCDE 451: Behavioral Prototype

Introduction

In the sixth week of prototyping class, we were charged with setting up and documenting a behavioral prototype. Behavioral prototypes are also called “Wizard of Oz” prototypes, after the famous scene in which the great and powerful wizard turns out to be an ordinary man working the controls from behind a curtain.

The object of a behavioral prototype is to gather feedback on a design idea that we have envisioned but not actually implemented. This is useful when the technology is either not yet available or would be too expensive to develop in the early stages of a project. To gain meaningful feedback, however, it is imperative that the product appear real to the user.

Description of Idea

There were three suggested design ideas for this assignment: a speech-to-text program, a gestural user interface, and an electronic guide that would allow users to navigate through a room without the use of their eyes.

My team chose the gestural user interface, pinpointing Netflix movie playback as an interface that could pair well with gestures.

Design Process

The first thing our team did was identify our design goals. Once we had established a few gestures, we wanted to find out whether those gestures felt intuitive to users, whether they were easy for our wizard to identify (because if a human could not differentiate between gestures, could a sensor do better?), and whether any of the gestures were frequently performed incorrectly.

When we first discussed our project, we wanted users to be able to control the entire Netflix experience with gestures, including searching for and selecting a movie. We also planned to provide a cheat sheet to guide the user through the interactions.

Unfortunately, as we began to plan out the equipment and logistics for our test, we realized that allowing the user to browse Netflix as a whole might ruin the illusion we were trying to create. The simplest way to control a movie on Netflix was to connect a TV to a computer and use keyboard shortcuts from there. However, Netflix has no browsing shortcuts, so the operator would instead have to browse with a mouse, in which case the cursor would be visible to the user. Not only would it be difficult to line the cursor up with the user’s exact hand movements, but a visible cursor might tip the user off that this was just a normal desktop interface.

We discussed playing the movie from an iPad instead of a laptop, since Netflix has a touch-friendly app that might be better suited to gestural movie-browsing, but we realized that it would be much harder to disguise a “wizard” on an iPad than a wizard “taking notes” on a laptop. Furthermore, tablets do not have HDMI ports and we did not have a smart TV on hand, which meant we would still need a laptop connected to the TV to mirror the tablet display: once again, an extra complication that could ruin the illusion.

Finally, we discussed having the user gesture while holding a Wii remote or similar device, which would give us a reason to show a cursor on screen, but we felt that if a person had to find some small handheld device to control their movie, the value added by such a system would be lower. After all, at that point, why not just use a controller?

Ultimately, to avoid these questions and test some of our preliminary gestures, we removed the browsing component of our test and zeroed in on controls for the movie itself.

Since one of our project goals was to test how intuitive our gestures were, we also scrapped the idea of adding a cheat sheet. We hypothesized that if the gestures were intuitive, the user would be able to learn them quickly from one demonstration. We also worried that static images might be more confusing than informative in a cheat sheet designed to illustrate movement.

User Test

On the day of filming, we recruited a random passerby as our user. We had no specific criteria for the user, but we wanted someone who did not know us so they would not suspect our shenanigans.

An illustrated image of our camera, computer, and TV setup is below.

As you will see in the video, Natalee, our operator, manipulated the movie using keyboard shortcuts on a laptop connected to the TV, while hiding behind the “curtain” of a Word document in which she pretended to take notes.

I was our scribe, actually taking notes, and I also helped moderate by demonstrating the gestures for our user. Our third camera fell down during filming, which is why I am not visible speaking in the video even though you can hear my voice. We had to redo my demonstration at a later date.

Our edited video is here:

Our final video demonstration

Due to time constraints, however, we did have to cut some of my favorite parts of the test, such as the full force of the user’s awe as she repeatedly asked us how we had done it.

Analysis

Although we only tested with one person, we felt that we learned a lot about our design from this experience.

One thing that became apparent in our test was that our user often wanted to do more than one task at a time, even when only one was requested! She occasionally performed more than one gesture without bringing her hands back to a resting position, and our operator had to decide whether to have the interface respond to both gestures. The operator decided, in this case, that “the system” would recognize her intent, but this is undoubtedly the kind of thing that would need deeper discussion on a design team.
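To make that trade-off concrete, here is a minimal sketch of the segmentation policy a real recognizer would have to commit to. Everything in it (the gesture labels, the per-frame label stream, and the require_rest flag) is a hypothetical illustration; our prototype had no implementation behind it.

```python
# Hypothetical sketch only: our prototype had no recognizer behind it.
# Labels, frame stream, and the require_rest flag are illustrative assumptions.

REST = "rest"

def segment_commands(frames, require_rest=True):
    """Turn a stream of per-frame gesture labels into discrete commands.

    frames: iterable of labels such as "rest", "volume_up", "fast_forward".
    require_rest: if True, a new command is accepted only after the hand
    returns to the resting position; if False, chained gestures all count,
    which is the call our wizard had to improvise on the spot.
    """
    commands = []
    ready = True          # may we accept a new command right now?
    previous = REST
    for label in frames:
        if label == REST:
            ready = True  # hand is back at rest, so the next gesture counts
        elif label != previous and (ready or not require_rest):
            commands.append(label)
            ready = False  # with require_rest, ignore gestures until rest
        previous = label
    return commands

# Our user chained a second gesture without returning to rest:
stream = ["rest", "volume_up", "fast_forward", "rest"]
print(segment_commands(stream, require_rest=True))   # ['volume_up']
print(segment_commands(stream, require_rest=False))  # ['volume_up', 'fast_forward']
```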

In a similar vein, we never established whether there should be a way for a user to indicate that they were about to start gesturing at the TV. In our test, the system only responded to our user, but in reality, would such a system have responded to my demonstration, or to a passerby’s hand gestures as they spoke to a friend? What if two people watching the TV gestured at the same time? This would also have to be clarified during the design process.
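One possible answer, sketched below purely as a hypothetical, is an attention gesture that works like a voice assistant’s wake word: the system only acts on gestures made shortly after someone explicitly addresses the TV. The wake gesture, the window length, and the event format are all assumptions, not anything we tested.

```python
# Hypothetical sketch of an "address the TV first" rule, analogous to a
# voice assistant's wake word. The wake gesture, timeout, and labels are
# assumptions made for illustration; we never designed this in our test.

WAKE = "wave_at_tv"       # assumed attention gesture
ATTENTION_WINDOW = 5.0    # seconds the system stays "listening" after a wave

def filter_addressed(events, window=ATTENTION_WINDOW):
    """Keep only gestures made while the system is being addressed.

    events: list of (timestamp_seconds, gesture_label) tuples.
    A gesture counts only if it follows a wake gesture within the window,
    which would ignore a passerby gesturing while chatting with a friend.
    """
    accepted = []
    listening_until = -1.0
    for t, label in events:
        if label == WAKE:
            listening_until = t + window
        elif t <= listening_until:
            accepted.append((t, label))
    return accepted

events = [(0.0, "volume_up"),    # ignored: TV not addressed yet
          (2.0, "wave_at_tv"),   # wake gesture opens the window
          (3.5, "play_pause"),   # accepted
          (9.0, "fast_forward")] # ignored: window expired
print(filter_addressed(events))  # [(3.5, 'play_pause')]
```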

As for ways our test could have been improved: because the movie was so long, we told the user to stop fast-forward and rewind actions “whenever you feel like it.” In the future, we believe it would be wise to choose a shorter film and ask the user to rewind or fast-forward to specific points so we could test the accuracy of our system. Similarly, we would have liked more precision with our volume controls (the keyboard shortcut only has three settings).

Lastly, our test was very controlled. While this was good for testing our own ideas, we think it might have been a good idea to let the user invent their own gestures (to tell us what felt intuitive before seeing our system) and to let them interact freely with the prototype at some point before the directed tasks. In our study, we gave her the option to interact freely after the directed tasks, but she was more interested in talking to us about the system than in continuing to play with it.

Despite all the room for improvement, the user gave positive feedback on how intuitive the gestures felt, and seeing her interact with the TV showed us how our chosen gestures could be improved. For example, our start/stop gesture was akin to grabbing something in mid-air, which meant closing the hand. Our user instead opened her hand, ending with her palm open, every single time. This is a clear sign that our start/stop gesture needs to be revised. Similarly, the user sometimes gestured upwards twice for a single volume change, which meant she was stopping and bringing her hand back down between the two upward gestures. The way we had envisioned the system, this likely would have turned the volume up, down, and then up again, rather than up and then up some more.

I had a lot of fun with this prototype, and I can now see how invaluable a behavioral prototype can be during any conceptual design process.
