Phase 5 | Reflection & Communication| 04.16.2018

Process documentation for Interaction Design Studio II, taught by Peter Scupelli at Carnegie Mellon University. The team comprises Zach Bachiri, Devika Khowala, Hajira Qazi, and Shengzhi Wu.

Presentation & Feedback

We presented our synthesis and the outcome of the evaluative phase this week. The feedback was mostly positive, with questions around the initial source of data and the positioning of the platform. We have been addressing these questions as we fine-tune our concept and start developing high-fidelity screens.

Findings of our evaluative phase

Role of AI

We need to outline the role of AI and its functioning in more detail for our final presentation and highlight its value in the cultural learning aspects of our concept.

How will the bot parse data available on the Internet for things like information about Pittsburgh?

How will the AI identify opportunities and breakdowns? What data sets would the system need to be trained on?

Schedule

We made a plan of action for the coming weeks so that we can finish ahead of time and fine-tune things if required.

Plan of Action for the final presentation

Visual Design

We are using Pinterest to collate ideas about the visual design of our system. Since our platform is chat-based, we are aiming for a simple, clean, and functional design. We are also using the board to collect ideas for our bot's visual design and for the illustration and icon style of the platform.

Our Pinterest board for the visual design

Onboarding

Onboarding is a vital step in an interaction; it is where a product establishes trust with its users. We therefore spent some time discussing the appropriate onboarding experience for our platform and the best way to introduce its key features. We concluded that an effective onboarding process for us would combine a quickstart with the top user benefits. This approach allows us to highlight the platform's value proposition while simultaneously explaining its core features and introducing the AI agent.

Bot persona

We also discussed the bot persona for our platform. We have been looking at existing bot personas to understand best practices. The O’Reilly book ‘Designing Voice User Interfaces’ has been really helpful too.

Bot persona

We brainstormed together on a name, thinking of different words (like new, beginning, connecting) in a range of different languages. We wanted the bot name to be gender neutral but also short and simple to pronounce. After some discussion, we settled on “Naya,” which means “new” in Hindi.

Visual Design

We met together and decided on a visual design direction, which we started to apply to our wireframes. We also finalized some of the icons needed for the screens. We still need to design a logo for the app and the bot, which is scheduled to be done next week.

Beginning to formulate high-fidelity mockups

Onboarding

We worked some more on the onboarding process, thinking through what the process would be for signing up, what a tutorial might look like, and which features would be highlighted. We started making wireframes for those screens and working through some of the finer elements of the onboarding process. Within the next couple of days we will finalize the copy and phrasing and make high-fidelity versions of those screens.

Onboarding wireframes
Tutorial mockup

Video Planning

We started out by getting together and writing a rough outline of what we wanted to cover in the video. We then translated that to a script, and then from the script we created a storyboard. Hajira recorded herself reading the script, which we will then lay over images from the storyboard to create a rough cut video. This should reveal areas we still need to work on before the actual filming.

Rough cut video

Presentation Preparation

We began preparing an outline for our final presentation, which is still in progress and subject to change. For now, however, we will begin with our problem statement and then a quick listing of the research methods we used. We will then explain our concept and the theory of learning undergirding it. Then we will show the concept video, which should communicate many of the features and interactions of the app. We will then show our systems and AI diagram, quickly touch on the features, and end with future considerations.

User Testing and Iteration

We decided to do some user testing again on some of the features that were unclear. Based on the feedback so far, the question mark icon was unclear, so we decided to change it to an “i” for “information.” Tapping on it will prompt an automatic Google search and offer the top result. We also realized there was some confusion in how the suggestion chips should work. We decided that the suggestion chips would be for the person who might need the information, and only in cases where an automatic message can easily be created. For more personal topics where the bot can’t autogenerate a message, a prompt will appear instead, such as, “Maybe you can tell YY about your favorite artists.”
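The decision logic described above can be sketched in a few lines. This is an illustrative sketch, not the team's actual implementation; the topic names, templates, and function name are hypothetical, and the partner placeholder "YY" is kept from the text:

```python
# Hypothetical sketch of the suggestion-chip behavior: the bot offers a
# tappable suggestion chip only when it can auto-generate the message
# itself; for personal topics it falls back to a nudge prompt instead.

AUTO_TEMPLATES = {
    # Topics where a message can be composed automatically (hypothetical)
    "bus_schedule": "Here is the bus schedule you asked about: {detail}",
    "grocery_stores": "Some nearby grocery stores: {detail}",
}

def respond_to_topic(topic: str, detail: str = "", partner_name: str = "YY") -> dict:
    """Return either a suggestion chip or a conversational prompt."""
    template = AUTO_TEMPLATES.get(topic)
    if template is not None:
        # The bot can compose the message itself, so offer it as a chip.
        return {"type": "suggestion_chip", "text": template.format(detail=detail)}
    # Personal topic: nudge the user to write their own message.
    return {"type": "prompt",
            "text": f"Maybe you can tell {partner_name} about your {topic.replace('_', ' ')}."}
```

For example, a query about bus schedules would yield a chip with a ready-made message, while a personal topic such as favorite artists would yield only the prompt.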

We also realized the importance of having relevant sample conversations in the mockup. In one phase of user testing, the text did not make sense relative to the question mark/save feature we were testing, and so the feature was confusing to interpret. When we updated the conversation, however, the user was able to make immediate sense of it.

Feedback on the overall concept was positive. One participant said that she wished she had such a system when she moved to Europe for a year. Two users liked the idea of bookmarking a message to refer to later, and said that it was very useful.
