Week 13: Developing and evaluating the final concept
Carnegie Mellon University
Graduate Interaction Design Studio 2
Spring 2018
Over the weekend, we took a stab at developing ideas for individual moments of interaction in the overall concept. We got together on Sunday to share our ideas with each other and decide on the specific interactions to be tested.
Together, we plotted out the concept again and discussed how to visualize those moments (fidelity/model/testing method). We concluded that some of them could be tested well with wireframes and some with a chatbot (Wizard of Oz technique). We split up into two groups and started developing the prototypes for testing.
Chatbot Wizard of Oz Testing
Scott and I worked together on the chatbot. We developed six different scripts and tested them with as many people. Some of the questions we were interested in exploring were:
- Do users prefer the system to have personality? A. Personality B. System
- Does the user prefer to A. establish milestones first and then get a mentor, or B. choose a mentor from a set of options and then form milestones based on that choice?
- Method for setting milestones: A. Establish milestones with the mentor B. System suggests milestones C. User decides their own milestones.
- Discussing the path toward milestones with the mentor: A. System suggests actions to achieve the milestone. B. System prompts the user to come up with actions to achieve the milestone.
- Peer suggestion: A. System suggests that the user practice with a fellow user. B. System suggests that the user simply connect with other users.
The testing yielded good findings. Some of them confirmed our earlier assumptions and some were new:
- Users want to establish milestones/activities with a human.
- People need some convincing about why the system’s intelligence is beneficial.
- Users had mixed feelings about connecting with peers; however, none of them wanted to practice with people they did not know.
- AI Limits: Provide scaffolding for questions
- Edge cases: How does the system deal with edge cases (very specific mentor needs or inconsistent user input)?
- Users like the system to have a personality. Interactions with an AI need to feel natural and organic.
A/B Testing with Wireframes
Melody, Denise, and Joe worked on creating the wireframes. They then tested them with three different users:
- Long mentor engagement: Do users prefer the AI to scaffold the conversation with links and resources during the conversation with the mentor, or do they prefer notes from the meeting afterward?
- Resource suggestions
- Chat plan
Insights
- Be transparent about the decisions that AI makes.
- The virtual agent should not be distracting during face-to-face conversations.
- Reflections should be quick and use a combination of visual + written interactions.
Notes from chat with Qian Yang
We showed our storyboard to Qian Yang on Friday. These were some of her comments:
- I really like the introduction/connection part, the part that goes beyond the chatbot. Add an explanation of why you are being recommended to a mentor, so that there is reasoning behind the decision.
- If you know what both the mentor and mentee want, you will have a more interesting connection for the users.
- How do you know people will follow through on the activities?
- What type of mentors have the motivation to follow through?
- The wingman persona is something you want to preserve, but you also want to preserve the agency a user has.
- Mentorship happens very naturally. It is difficult to find a mentor immediately after getting a new job. You need to account for the social component, so people understand why they are the perfect mentor or mentee.
- What is the humanness of the connections that are being provided to a user? Why would a user want to click on the recommendation?
Putting together the presentation
We spent Saturday and Sunday putting together the presentation.