Wizard of Oz Behavioral Prototype
Behavioral prototyping is a good way to explore a user interaction scenario. The technique is especially effective for testing design assumptions in UX projects when the actual technology is either unavailable or too expensive to build during the design phase.
In this behavioral prototype assignment, I had fun working as a facilitator on a chatbot project called “Datebot.”
Datebot is an automated chatbot that reads users’ text messages on a mobile device or computer. We built Datebot on Facebook Messenger to help people find a restaurant or coffee shop for their first date.
To make Datebot feel more like a real bot, we prepared a set of questions in a Google Doc. The Wizards (the people behind the chat) copied and pasted the questions into Messenger and used the users’ answers to make restaurant suggestions.
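For context, the scripted flow the Wizards followed by hand is essentially a rule-based bot. Here is a minimal sketch of how it could be automated; the questions and suggestion rules below are illustrative, not our actual script:

```python
# Hypothetical sketch of Datebot's scripted flow as a rule-based bot.
# Questions and suggestion rules are illustrative, not the real study script.

QUESTIONS = [
    ("budget", "What's your budget per person?"),
    ("diet", "Any dietary restrictions (e.g. vegetarian)?"),
    ("vibe", "Do you prefer a restaurant or a coffee shop?"),
]

def suggest(answers):
    """Return a suggestion based on the collected answers (toy rules)."""
    if "vegetarian" in answers.get("diet", "").lower():
        return "How about a vegetarian-friendly cafe near campus?"
    return "How about the pizza place downtown?"

def run_bot(get_reply):
    """Ask each scripted question in order, then make a suggestion.

    `get_reply` stands in for reading the user's Messenger reply;
    in the prototype, a human Wizard played this role by hand.
    """
    answers = {}
    for key, question in QUESTIONS:
        answers[key] = get_reply(question)
    return suggest(answers)
```

In a live console test you could run it as `run_bot(lambda q: input(q + " "))`; in the Wizard of Oz setup, the human simply performs `get_reply` manually.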
We orchestrated the evaluation session by having two group members play the role of the chatbot and craft its responses, while the other two team members documented the sessions and guided users through the tests. We gave the users a scenario: a college student with a limited budget whose date is vegetarian.
Success was determined by whether the chatbot could make a useful suggestion for the user. We collected data on how long each session took, how many suggestions the chatbot had to give before reaching a useful one, the number of times the user didn’t understand the chatbot’s question, and the number of times the chatbot didn’t understand the user’s response.
Our design had several constraints. The first was Datebot’s profile picture: since we had to use a real Messenger account, Datebot’s profile picture was actually that of the Wizard’s Facebook account. Though it didn’t derail the test, it did reveal to users that the design was being faked.
Second, since Datebot was only a prototype with no code or algorithm behind it, there were moments during the tests when the Wizard physically couldn’t paste the text quickly enough, which made Datebot feel less “bot-like” to users.
To test Datebot, we recruited two participants and gave them a scenario: you are a college freshman with a very low budget, and your date is a vegetarian.
Summary of the results of our testing
During the tests, we recorded the metrics described above: session length, the number of suggestions needed to reach a useful one, and the number of misunderstandings on each side.
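As a rough illustration, these session metrics could be captured with a small log object like the one below; the field names are my own assumptions for illustration, not part of our study materials:

```python
# Hedged sketch: a minimal session log for the metrics we tracked.
# Field names are illustrative assumptions, not from the actual study.

from dataclasses import dataclass, field
import time

@dataclass
class SessionLog:
    start: float = field(default_factory=time.monotonic)
    suggestions_given: int = 0   # suggestions made before a useful one
    user_confused: int = 0       # user didn't understand a question
    bot_confused: int = 0        # bot didn't understand a response

    def elapsed_seconds(self) -> float:
        """How long the session has been running so far."""
        return time.monotonic() - self.start
```

A facilitator (or a real bot) would increment the counters as the conversation unfolds and read `elapsed_seconds()` once a useful suggestion lands.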
Both participants received restaurant advice from Datebot without trouble, though they had different opinions about what worked well and what didn’t.
What worked well
Since both participants received restaurant advice without trouble, we believe Datebot’s design was effective. The users reported that Datebot sounded more like a human than a bot, and that this made the conversation comfortable.
Also, because the questions were already prepared in a Google Doc, Datebot’s response time was quite short, which made the testing smoother. One participant reported that she enjoyed how quickly the conversation moved forward.
What needed improvement
First, one participant reported that he wasn’t sure how many questions remained before he would get restaurant advice. Since everything ran through Messenger, we couldn’t design a progress bar, so we sent messages like “Just a minute, I am currently working on it…” to reassure users that Datebot was still working.
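One lightweight way a real bot could address this without a progress bar is to number each question, e.g. “Question 2 of 3: …”. A small sketch of that idea (a hypothetical helper, not part of our prototype):

```python
# Hypothetical fix sketch: prefix each scripted question with its position
# so the user always knows how many questions remain.

def with_progress(questions):
    """Return questions labeled 'Question i of N: ...'."""
    total = len(questions)
    return [f"Question {i} of {total}: {q}"
            for i, q in enumerate(questions, start=1)]
```

In a Wizard of Oz session, the labels could simply be baked into the Google Doc script the Wizards paste from.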
Second, the name Datebot caused some confusion. During the in-class critique, it was suggested that “DinnerBot” or “RestaurantBot” might be a better name.