A6. “You’re a Wizard Harry”
This week we were tasked with creating a prototype tested using a Wizard of Oz methodology. We were given four prompts to choose from:
- Applications for a voice-operated assistant
- Gesture recognition platform
- Vision-impaired navigation aid
- Chatbot or text messaging app
Our team chose to create an application for a voice-operated assistant and found an interesting route in cooking. We saw a problem in the kitchen: navigating a recipe while cooking is hard. So we wanted to create a voice-activated cooking app with visual cues that would:
- Let users select from a variety of meals
- Prompt the user with the ingredients needed for their chosen meal
- Provide the user with substitute ingredients, if needed
- Guide the user through each step of the meal
To recreate a Wizard of Oz style prototype, we used an iPad as the interface for the user to interact with. We then connected the iPad via Google Hangouts and used screen sharing to bring up a webpage that acted as our application interface. Google Hangouts let us display our webpages while simultaneously hearing the user as they issued voice commands.
To respond, we used an application called NaturalReader, with which we could not only create preset audio responses but also type impromptu replies for any out-of-the-blue questions that arose.
We had one wizard in another room conducting the responses and running the behind-the-scenes magic for the application. We also had a facilitator who helped with the overall logistics and supervised the user as they performed the tasks. Finally, we had a scribe who took notes, recording any difficulties and successes with the application and how the user was interacting with it.
For simplicity, we provided our participant with a scenario and the ingredients to actually make tacos, to provide as close to a real experience as possible.
Watch a quick video of our prototype in the works!
or visit our prototyped webpage:
This project was the first time we had ever done a Wizard of Oz style prototype and user test, and it was seriously tough. The trick, we felt, was to be clever and find ways to convince the user that this was a real application. During the test we had to change some of our methods for cueing and gathering information, so the scribe was not only taking notes but also cueing the wizard with information, letting them respond and know more about what was happening beyond the voice commands alone. The scribe became the wizard's eyes.
We found that the user would respond in different ways, so our wizard had to improvise a lot of voice responses and prompts for the user to follow. This could mean that we didn't set firm enough ground rules at the start of the test.
One piece of direct feedback from our course's feedback session was to create a method for users to clearly know how to communicate with the app; for example, the user must say "Google Abuela…" or the application won't respond. While this definitely would have structured the test more, I think it was interesting to let the user talk naturally to the application as if it were a human being. This assumes, of course, that the technology was advanced enough to learn human linguistic intricacies and respond in a humanistic way.
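For illustration, the wake-word rule suggested in that feedback could be sketched roughly like this. This is a minimal, hypothetical sketch, not part of our prototype: the function name and parsing details are assumptions, and "Google Abuela" is simply the trigger phrase mentioned in the feedback session.

```python
# Hypothetical wake-word gate: the app only reacts to utterances
# that begin with the trigger phrase "Google Abuela".

WAKE_WORD = "google abuela"

def parse_command(utterance: str):
    """Return the command following the wake word, or None if it is absent."""
    text = utterance.strip().lower()
    if text.startswith(WAKE_WORD):
        # Drop the wake word plus any separating punctuation/spaces.
        return text[len(WAKE_WORD):].strip(" ,.") or None
    return None

print(parse_command("Google Abuela, what's the next step?"))  # recognized
print(parse_command("what's the next step?"))                 # ignored (None)
```

A rule this simple is easy to explain to a participant up front, which is exactly the kind of structure the feedback was asking for.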
We used a "Siri"-type voice application to create impromptu responses instead of using our own voices, to make it more convincing that the user was not talking to another human being. The NaturalReader voice, however, was extremely robotic and at times clunky in its pronunciation.
In our post-interview and feedback gathering, our participant mentioned that a timer on the interface would have been helpful so they knew how long something had been cooking. Our instructions for the taco recipe said to cook the meat for 10–15 minutes, and the user would ask "how much time is left?", to which the wizard basically had to guess. The participant also mentioned that they wished they could have simply seen all the steps at once, though this seems to depend mostly on the user's preferences.
It was clear that our participant figured out our shenanigans early on. They were familiar with the Google Hangouts interface, and we couldn't hide the URL bar at the top of the screen-share window to better "fake" the experience of a real application. To increase the validity of the test and make the experience more convincing, we should find a better way to hide the interface so you can't tell it's a video-chat application. It would also help to let the user open a "Cooking app" and watch it start up, priming them to believe it is a real application.
We found some interesting things about human behavior while our participant was using our application. We learned that a cooking application can only go so far in enhancing cooking ability (i.e., it can't make someone measure a teaspoon), so it won't necessarily make you a better cook if directions aren't followed. Perhaps there needs to be a way to improve people's ability to understand and follow a particular direction, especially in how-to style applications. Lastly, some participants are really shy about speaking to inanimate objects. I think there is still a hesitation and a sense of embarrassment, especially when being watched. However, once the participant figured out that the app was being manipulated by other human beings, the embarrassment seemed to melt away, and they became more willing to joke around and talk freely to the app.
Overall, I really enjoyed this project, although it was really difficult to pull off!