A6: Wizard of Oz Prototyping

Controlling Spotify Playlist through Gesture Recognition

Team members: (Hailey) Xiaochen Yu [Operator/Wizard], Nathaniel Tabit [Moderator], Rashmi Srinivas [Notetaker], Juan Cai [Videographer]

For this assignment, we had to create and test a scenario in which the user believes they are trying out a functional program or interface, while in reality an operator, or “wizard,” performs the operations behind the scenes.


Idea: We decided to test out gesture recognition for a music player, such as Spotify or iTunes.

Technology: Using Spotify Premium, we were able to control Play/Pause/Next/Previous on a tablet from a laptop. We also required a stable WiFi connection.
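The post does not describe the actual control mechanism, only that a laptop drove playback on the tablet. One plausible sketch of the wizard's side: Spotify's Web API exposes player endpoints that a Premium account can target at any active device via Spotify Connect, so each recognized gesture could map to one of them. The gesture names below are illustrative, not the team's actual gesture set.

```python
# Hypothetical wizard-side mapping from a "recognized" gesture to the
# Spotify Web API player endpoint the Operator would trigger manually.
# (The gesture names are made up for illustration; the endpoints are
# real Web API player controls, which require a Premium account.)
GESTURE_TO_ENDPOINT = {
    "palm_forward": ("PUT",  "https://api.spotify.com/v1/me/player/pause"),
    "palm_up":      ("PUT",  "https://api.spotify.com/v1/me/player/play"),
    "swipe_right":  ("POST", "https://api.spotify.com/v1/me/player/next"),
    "swipe_left":   ("POST", "https://api.spotify.com/v1/me/player/previous"),
}

def command_for(gesture):
    """Return the (HTTP method, endpoint URL) for a gesture, or None
    if the gesture is not in the wizard's playbook."""
    return GESTURE_TO_ENDPOINT.get(gesture)
```

In a real session the Operator would fire the chosen request (with an OAuth bearer token) the moment she saw the user make the gesture, so the tablet appeared to respond on its own.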

Scenario: Our group posed as a mix of Computer Science and HCDE students who had created an application that turned the tablet camera into a sensor. This sensor took in specific gestures and, combined with the Spotify API, allowed the user to control Play/Pause/Next/Previous on the playlist. Before turning in our project, we were required to film some user tests.

Setup: We needed to find a place where the operator could easily see the user, moderator, and tablet without being seen herself. Odegaard library had limited seating, but we eventually found a secluded study room in the Allen Research Commons.

The Moderator stood at the front of the room asking people whether they were willing to take part in our usability study; once we had a user, the Moderator stood right beside them. The tablet was placed on the end of the table facing the wall opposite the room's opening, so that the user's back was toward the entrance. The Videographer stood where the user and the tablet screen were visible in the same frame, while staying close enough to the Moderator that his voice could be heard. Meanwhile, the Notetaker sat in the front corner of the room, and the Operator sat with her laptop in the back corner, positioned behind the user.


Although we ultimately “fooled Dorothy,” there were some key takeaways from this project that could improve similar projects in the future.

Train the Users: In the first round of our user testing, the user started waving his hands randomly after we told him the music player had a gesture sensor. Naturally, he expected the Spotify interface to react accordingly, but we were not prepared for this behavior. Next time, we should run a short training session before sitting the user down, teaching them the specific gestures we want them to perform. We should also design a task-oriented user test rather than letting the user perform whatever motion they want.

Change the Gestures: While we did have a guide for which gestures we wanted to use, our test showed that these gestures were too similar, making it difficult for the Operator to distinguish Pause/Play from Next/Previous. Moreover, due to a limitation of Spotify Premium, we did not figure out how to control the music's volume remotely, even though volume control should be a straightforward addition once playback and track switching can be controlled. Finding a way to adjust the volume remotely would make it more convincing that users can control every aspect of the device through gestures alone. Alternatively, we could tell users that volume control is unavailable that day due to a hardware error, which would make the prototype feel more like a real project.
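For what it's worth, the Spotify Web API does document a volume endpoint (`PUT /v1/me/player/volume`), though whether it works depends on the target device, which may be part of why remote volume control proved difficult. A minimal sketch of how the wizard might construct such a request, assuming that endpoint applies to the tablet:

```python
# Hedged sketch: builds the (method, URL) pair for Spotify's documented
# set-volume endpoint. Actually sending it requires an OAuth token and
# a device that supports remote volume via Spotify Connect, which not
# all devices do.
def volume_request(volume_percent):
    """Return the (HTTP method, endpoint URL) for a volume change."""
    if not 0 <= volume_percent <= 100:
        raise ValueError("volume_percent must be between 0 and 100")
    return (
        "PUT",
        "https://api.spotify.com/v1/me/player/volume"
        f"?volume_percent={volume_percent}",
    )
```

If the endpoint had worked on the tablet, raising or lowering volume could have been mapped to two more gestures in the same playbook as the playback controls.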
