MDes Studio II

Weekly Reflection

Week 1: January 22

For this semester, we are tasked with working on the future of education, specifically around "how artificial intelligence can enhance the future of learning, teaching, and education."

The concept of artificial intelligence, or smart computing, has been around for decades, but it was brought into the spotlight more recently with the introduction of AlphaGo, Alexa, and Watson cognitive computing. With that said, jumping into the topic of 'AI and education' gives us a lot of room to explore, connect with working professionals, and have an actual impact on the world.

Before kicking off the project, my team and I met to draw up a team contract. We discussed our preferred working styles, leadership roles, strengths and weaknesses, and aspirations for the project. I found it useful to write a team contract at the beginning of the project because we learned about each other and came to a consensus on our overall goal for the project.

We also discussed areas within education and AI we want to explore during this project. We found that medical education has vast potential to improve with the help of artificial intelligence and the advanced technology available today, because medical training poses difficult learning challenges and carries high stakes [in other words, mistakes can have severe consequences]. Also, the healthcare industry is a care-based service industry, so we can look at a holistic picture of the education services happening in the field.

Week 2: January 29

During the Research Methods class taught by Bruce Hanington, we were introduced to a territory mapping technique. We found that the territory map exercise would help us define the scope of our project for the class and reach consensus on the opportunity area we want to explore this semester.

We explored different types of learning happening in the healthcare field and defined the stakeholders involved in each learning type. We also defined learning as "helping learners perform their job better" and highlighted key aspects of learning as "practice, communication, behavior, skill, confidence, simulation, and efficiency." While working on the territory map exercise, we found that we are interested in the communicative learning aspect of healthcare; for example, learning appropriate ways to communicate and how to read different dimensions of communication such as emotion and body language. Communicative learning in healthcare could encompass building trust and preventing medical errors due to miscommunication. It also deals with cultural sensitivity, emotional response, and knowledge retention. Previously, this kind of learning has been mostly experience-driven, so there is a potential opportunity for AI to be leveraged.

How might we use artificial intelligence to improve communicative learning amongst stakeholders in healthcare?

We presented our territory map to the class and realized that we need to further refine our territory.

Here is some of the feedback we got during the presentation:

  1. Try to identify the communication breakdown.
  2. Do we consider mental health?
  3. Besides doctors, how to be a good patient is also important.
  4. Besides communication between care providers and care receivers, we could also consider communication among care providers.
  5. We should do secondary research about previous student projects and current applications of AI in the field.

After the presentation, we individually researched existing learning tools and communication methods in the healthcare context. For example, we researched visual communication and digital products used in hospitals, the role of artificial intelligence and AR/VR in medical training, current clinical workflows and communication within care teams, AI tools doctors currently use to better understand patients and upcoming medicine options (e.g., Watson for Oncology and more), and AI tools patients use to understand their disease and communicate with doctors (e.g., AiCure, Sense.ly, etc.).

Some of the biggest takeaways from our research are:

  1. There are a lot of existing products that focus on either doctors or patients, but not many that serve both.
  2. There is a generational gap between doctors — experienced doctors don't utilize machine learning, while newer doctors are starting to be introduced to these tools.
  3. There is a tremendous opportunity to improve healthcare communication between doctor and patient using interactive displays and AR/VR.
  4. On the patient's side, they like to have personalized treatment. For non-English speakers, talking to a machine can be more comfortable and relaxed than talking to a doctor directly.
  5. There is no formal "learning" platform for caregivers (family/friends) of chronic disease patients.
  6. Currently, simulation techniques using AI and AR/VR are used mostly for medical training or practice purposes (e.g., baby heart surgery).

Through team discussion, we better understood the difference between "learning" and "assisting" in healthcare communication and education. We classified each opportunity as "learning" or "assisting" to determine whether the topic would be suitable for our prompt. After a few hours of discussion within the team, we further narrowed down our topic to "helping the new generation of medical students be better prepared to be good doctors in today's healthcare context, which incorporates emerging high-tech tools such as artificial intelligence, augmented reality, and connected devices."

How might we help prepare a new generation of medical students to envision a practice incorporating AI?

Week 3: February 4

During this week, we focused on exploratory research based on the research question we developed the previous week.

How might we help prepare a new generation of medical students to envision a practice incorporating AI?

We began our exploratory research by doing some more market research and building interview and survey questions. We identified our main stakeholders as medical students, doctors (teaching doctors), and resident doctors (interns). With interviews and a survey, we wanted to figure out:

  1. How does the current clinical workflow work?
  2. When do professors feel the need to teach medical students more real-life practices? (e.g., know-how on communicating with patients, working in teams, surgical practice and workflow, etc.)
  3. How does the process of becoming a doctor work? (e.g., from getting into medical school to deciding on a specialization, residency, and becoming a practicing doctor)
  4. How do professors evaluate their teaching? How do they know whether their teaching was successful or not? How do they evaluate student engagement?
  5. How do students evaluate their learning? How do they know they are doing well in school and getting the knowledge they need to know?
  6. What does a typical week or day look like for each of them [medical student, doctor, resident]? (classes, practice, lab, etc.)
  7. How do doctors build trustworthy relationships with their patients? Do they teach this know-how to their students at all?
  8. Whether practicing doctors and medical students are familiar with emerging technologies such as artificial intelligence and augmented reality.
  9. The types of resources medical students, doctors, and residents currently use to supplement their learning and teaching experiences.

Survey questions: https://suzannechoi.typeform.com/to/mvII63

Interview questions: https://docs.google.com/document/d/1duR4AxfH5vNYgdpOCVVSK2srUN-QcHVsENzndYa9DT0/edit#

Interview contacts:
https://docs.google.com/document/d/1-AwWgtpKP3rUjjMx1YzFsHCr94NSUTJduGQteKx0vnM/edit?usp=sharing

With these goals and questions in mind, we began to reach out to our stakeholders. We also reached out to professors in the Human-Computer Interaction, Computer Science, and Robotics departments to learn more about how artificial intelligence and machine learning could be used in the medical setting.

We conducted interviews with our contacts individually (or in pairs) and shared our findings with the group later. I found it useful to discuss the goals of the exploratory research and formulate the interview/survey questions with the entire team before individually conducting the interviews, because it aligned the team and helped us be clear about our objectives when conversing with our stakeholders.

Week 4: February 11

This week's classes consisted of two lectures, by Neil Heffernan and Amy Ogan. Peter Weeks from Philips also gave us inspiration and tactics for exploratory research during Research Methods class.

Neil Heffernan
Neil Heffernan stressed the importance of crowdsourcing in computer-aided designs, reinforcing student motivation for deeper engagement, and giving appropriate feedback to teachers and students for meaningful conversation.

His lecture was insightful because it helped me:

  1. See the potential of peer learning environments
  2. Try to understand the underlying motivations of medical students
  3. Analyze the current evaluation and feedback systems.

Amy Ogan
Amy Ogan presented a few previous education projects she has done for literacy building in rural schools in Tanzania and Latin America. Her Tanzania project was interesting in that she put users first in her design process. She tried to understand Tanzanian culture and the children attending the rural schools in the area. Based on her understanding of the users and the culture, she developed appropriate solutions to engage them in the learning experience.

Her lecture helped me:

  1. Realize the importance of understanding context and users
  2. See the potential of flip-side learning ("you have understood the concept if you are able to teach it")
  3. Realize 'playfulness' is not always the solution. Adults get frustrated when learning is presented as playful or not serious. We need to understand the underlying motivation of the user.
  4. Realize the importance of regular feedback and goal setting.

Peter Weeks: Philips
Peter Weeks from Philips Design led us through exploratory design exercises that helped us synthesize our research into tangible outcomes. Within a limited time, we were able to clearly see opportunity areas and pain points based on the interviews and surveys we had conducted previously.

Outside of class time, we conducted 15 interviews with medical students, interns, residents, doctors, and designers and professors working in the healthcare or machine learning industries.

During this primary research phase, we had difficulty getting full attention and participation from doctors and medical students because they are extremely busy. We asked UPMC innovation designers for tactics for working with doctors and medical students, and they gave us some good advice, including:

  1. Preparing co-design activities and forms prior to interviews
  2. Giving them things they can complete on their own time [when they have time]
  3. Being specific and building time restrictions into your questions

Another challenge we had during interviews and synthesis was that our focus was too broad. We realized we needed to hone in on a specific opportunity to manage our time and effort better. To specify our context and users, and to derive design principles from our research, we went through the process below.

Focusing on 3rd–4th-year medical students and communication skill building:

During our prioritization mapping exercise, we decided to focus on the 3rd–4th-year medical education experience (the clinical years) because medical students can then apply and practice what they learn in the right context (the hospital) at the right time (during rotations).

Also, we saw misalignments between what is taught in medical school and the actual clinical workflow, so we aimed to help 3rd–4th-year medical students transition more smoothly by providing realistic education and simulation opportunities where they can practice and learn communication skills.

Week 5: February 19

We presented our exploratory research to our classmates on Wednesday, February 14th and received good feedback and criticism that helped us move forward. Based on the feedback, we realized we need to think more critically about the machine's role in 'communication skill learning.' This reminds me of our interview with John Zimmerman, a professor in the HCI department who focuses on machine learning. He mentioned that an AI solution would be applicable and plausible only when there is enough accurate data to support the outcome. Currently, there may not be enough data around communication [or soft skills] in the healthcare industry. However, I believe we can imagine the future of healthcare and how we might efficiently use crowdsourcing to populate more data around soft skills.

Another realization is that we need to define what 'good communication' is. 'Good' can mean different things to different people, so it could be tricky to evaluate learning if we focus on communication. However, I believe this gives us an exciting opportunity to think about, and possibly standardize, the goals and ethics medical students should hold while communicating with patients, care teams, and their peers.

Week 6: February 26

This week, we had a concept generation workshop with Austin Lee and Jae Kim from Microsoft. The workshop consisted of three parts:

  1. Imagining how your problem could be solved by current AI technology.
  2. Imagining how your problem could be solved by AI in the next 5 years.
  3. Imagining how your problem could be solved by unlimited technology.

The workshop was valuable for us because it helped us think about future possibilities while considering the context and users. Since there was a strict time restriction for each phase, we were able to work in a focused and agile way.

Within 15 minutes, our team was able to generate a concept for a medical education assistant, JASZY, and develop a scenario in which JASZY could intervene in our users' daily lives. JASZY is an AI-powered clip that listens to the interaction/conversation between a patient and a doctor and summarizes notes for students. It also creates quizzes and augmented scenarios based on data collected in the field, which students can use to practice communication skills at home.
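
Purely as an illustration of the concept (not something we built during the workshop), here is a minimal sketch of how a JASZY-like pipeline could be structured: conversation audio to transcript, transcript to study notes, notes to practice quiz. The key terms, canned transcript, and naive summarization/quiz logic below are placeholders, not real speech-to-text or NLP services.

```python
# Hypothetical sketch of a JASZY-style pipeline. Everything here is illustrative.
import random
import re

KEY_TERMS = {"diagnosis", "symptom", "dosage", "allergy", "follow-up"}  # assumed terms

def transcribe(audio_path: str) -> str:
    """Placeholder for the speech-to-text step the clip would perform."""
    return "Patient reports a persistent cough. Doctor discusses diagnosis and dosage."

def summarize(transcript: str, max_sentences: int = 3) -> list[str]:
    """Keep the sentences that mention key clinical terms (naive extractive summary)."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    scored = [(sum(term in s.lower() for term in KEY_TERMS), s) for s in sentences]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:max_sentences] if score > 0]

def make_quiz(notes: list[str]) -> list[tuple[str, str]]:
    """Turn each note into a fill-in-the-blank question by masking one key term."""
    quiz = []
    for note in notes:
        terms = [t for t in KEY_TERMS if t in note.lower()]
        if terms:
            answer = random.choice(terms)
            quiz.append((re.sub(answer, "_____", note, flags=re.IGNORECASE), answer))
    return quiz

if __name__ == "__main__":
    notes = summarize(transcribe("rounds_recording.wav"))
    for question, answer in make_quiz(notes):
        print(question, "->", answer)
```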

During the workshop, we were also exposed to many advanced technologies currently used in enterprise settings. 'Emotional expression' in AI was particularly interesting to me. Previously, emotional expression was mostly done in very literal ways, for example using an avatar or human figure to mimic the humanistic emotion of the computer. However, with today's design movement toward minimalism, many companies and designers try to give emotional responses to users in more abstract ways, for example with color, figures, simple geometric shapes, and motion. Providing appropriate visual and audio feedback on system status with emotional expression would be worth further investigation.

Week 7: March 4

We focused on generative research this week and spent much of our time planning a generative workshop and a text-based diary study. With both the workshop and the text study, we wanted to better understand how medical students master communication skills and where communication breakdowns happen in their everyday lives.

To get the most out of our workshop and text study, we had to plan and test several iterations ahead of time. After we identified the objectives of each study, we quickly made prototypes of the workshop materials and tested them with our classmates and one test subject from the University of Pittsburgh medical school. While testing, we identified methods of delivering explicit instructions and guiding questions, as well as an ideal logical flow for the workshop and text study. During the process, I realized that the preparation phase is as important as the outcome. There were also many logistics we needed to consider: booking workshop space, catering food, recruiting participants, and arranging incentives that would attract participants. It was a great experience to plan and organize a workshop before jumping into a real work environment, because this knowledge and experience will help us work more comfortably with clients and users later.

The workshop was a great success; we were able to gather more than 20 participants, ranging from nursing students, instructors, residents, and 1st–4th-year medical students to doctors. It would have been ideal to gather only 3rd–4th-year medical students, since we are designing the educational experience for them, but it was difficult to recruit only 3rd–4th-year students from the UPMC cafeteria. I am actually glad that we talked to a range of different stakeholders, because we learned about the different perspectives each stakeholder has on medical education and communication. Facilitating the workshop on the spot was pretty challenging due to the unpredictable nature of participant responses. However, we managed to quickly modify questions and instructions as needed. Overall, it was great to learn about their personal experiences with medical communication and to co-design an ideal scenario with the users.

The text-based diary study was also very interesting. We launched our text study on Monday, February 26, and have been exchanging text messages with medical students (3rd–4th year) every day about their communication and stress levels throughout the day.

One of the challenges I faced with the text-based study is that the medical students quickly identified our question pattern and stopped giving thorough answers. Therefore, I modified questions or asked additional questions based on each participant's answer pattern (personalization here!). I also learned that sending additional visual aids to guide their answers was an effective way to shape their responses.

Week 8: March 12

We focused on synthesizing our generative research and ideating concepts this week. There were a lot of useful insights coming from the workshop and the text-based diary study. Thanks to affinity diagramming and priority mapping, we managed to synthesize the vast amount of data and insights into digestible chunks of ideas.

Synthesizing research

We faced challenges in synthesizing the text-based diary study because we did not have enough data to generalize and find causes and patterns behind communication outcomes. Our participants joined the study on different dates, so we had only a few days of texts with some of them at the time of the presentation. Therefore, we decided to show the communication variation and causes for the participant we had the most data on, instead of generalizing a communication pattern across all participants.

Finding stress patterns and understanding our participants' journeys at the micro (day) level was easier because we had additional data and insights from the workshop.

During the process of synthesizing the text-based diary study and the workshop, we found additional insights that guided us through the ideation process. The key findings are around the causes of stress and communication outcomes. Before the studies, we assumed that doctors' and medical students' stress levels would correlate with the complexity of the procedure and knowledge involved (either the concept is hard, or the procedure is complicated to perform).

However, we found that the unpredictability of human-to-human interaction causes a lot of stress for doctors and medical students, and they feel most at peace when they don't have to interact with other people (e.g., performing a complicated surgery in the operating room).

Ideation

Considering the insights we gathered from exploratory and generative research, combined with our design principles, we generated three different concepts to move forward with. We are going to speed-date these concepts after spring break and figure out which one is worth pursuing further.

During the ideation phase, I personally found the 2x2 prioritization method for narrowing down concepts really useful. Putting our ideas on post-its on a whiteboard helped us detach from them so we could be more open to criticism. By mapping our ideas onto a 2x2 grid of impact and feasibility, we were able to quickly identify which ideas would be feasible with current technology and cost-effective while still having a high impact on our users. The 2x2 prioritization exercise also helped put us on the same page and got everyone on board with the concepts we developed.
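
For illustration only, the logic behind the 2x2 mapping can be expressed as a small sketch; the concept names and 1–5 scores below are made up, and in practice we did this with post-its on a whiteboard rather than code.

```python
# Hypothetical 2x2 impact/feasibility prioritization; concepts and scores are invented.
concepts = {
    "AI note-taking clip": {"impact": 4, "feasibility": 3},
    "VR patient simulator": {"impact": 5, "feasibility": 2},
    "Peer feedback dashboard": {"impact": 3, "feasibility": 5},
}

def quadrant(scores: dict, midpoint: float = 3.0) -> str:
    """Place a concept in one of the four quadrants of the impact/feasibility grid."""
    high_impact = scores["impact"] >= midpoint
    high_feasibility = scores["feasibility"] >= midpoint
    if high_impact and high_feasibility:
        return "pursue now"
    if high_impact:
        return "high impact, needs future tech"
    if high_feasibility:
        return "easy but low impact"
    return "deprioritize"

for name, scores in concepts.items():
    print(f"{name}: {quadrant(scores)}")
```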

Week 9: March 26

During this week (+ spring break), we continued the text diary study and redefined the context we are designing for. During the ideation phase, we found ourselves ideating on different time frames (far future, current, near future, etc.), so we saw the need to decide on one context that we all agreed on and were excited to design for. After we redefined the context, we revised our storyboards to match the defined future context and speed-dated our concepts with four participants, including residents and 3rd–4th-year medical students.

Text diary study

We continued the text diary study with our participants and synthesized their responses into summary reports after all the studies were done. The reports were for both us and the participants, so they were meant to help us understand our research better and to communicate our findings effectively. The reports showed causes of communication failure/success, communication patterns and variations, stress patterns during the day, and communication-related and non-communication-related stress factors.

One of the summary reports is shown here; we made reports for all 8 participants and sent them out for feedback.

During the synthesizing process, I realized how difficult it is to quantify qualitative data. Excel is excellent for visualizing quantitative data; however, it did not work so well for qualitative data. I needed to read through all the written responses and individually highlight common keywords to get a better picture of what the oral/written responses had to say.
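
As a rough sketch of the kind of keyword tallying I ended up doing by hand, something like the following could automate the first pass; the keyword list and the CSV export format are assumptions, not our actual study data.

```python
# Minimal sketch: tally assumed keywords across free-text diary responses.
import csv
from collections import Counter

KEYWORDS = ["stress", "patient", "attending", "rounds", "feedback"]  # illustrative terms

def keyword_counts(responses: list[str]) -> Counter:
    """Count how often each keyword appears across all responses."""
    counts = Counter()
    for response in responses:
        text = response.lower()
        for word in KEYWORDS:
            counts[word] += text.count(word)
    return counts

if __name__ == "__main__":
    # Assumes a CSV export with one free-text diary response per row.
    with open("diary_responses.csv", newline="", encoding="utf-8") as f:
        responses = [row[0] for row in csv.reader(f) if row]
    for word, n in keyword_counts(responses).most_common():
        print(f"{word}: {n}")
```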

I was glad that I had asked my participants additional questions during the text diary study to clarify medical terms and abbreviations I was not familiar with, because those clarifications greatly helped me analyze the data when I came back to it after a long time.

Redefining context

During the ideation phase, we found ourselves ideating on different timeframes and saw the need to define one context that we were all excited to design for. Peter and Arnold also suggested looking at healthcare success stories from other countries and service venues.

The future we defined is the year 2030, where all patient data is quantified and doctors and patients communicate remotely much of the time. A huge part of diagnosis and vital logging is automated, so judgment and communication skills become even more important. Since patient data is quantified, patients are more educated about their health conditions. Doctors' and residents' workflows (including scheduling and the sign-out process) are more streamlined than ever.

I think speculating on future technological advancements and possibilities was really fun for our team because it broadened the directions we could take for our design. And since it helped us ideate within a consistent context, our revised concepts became more cohesive and logical.

Revising storyboards and speed dating with stakeholders

With the redefined context in mind, we revised our storyboards to match the future setting. We individually speed-dated different stakeholders (including 3rd–4th-year medical students and Michelle, a resident) to get feedback on the concepts. After explaining the concepts, we asked each stakeholder to give us general feedback, rank the ideas, and talk about the pros and cons they saw in each idea.

I found the speed dating technique useful because I was able to quickly see which concept is most desirable to the users who will actually be using the system. It helped us get outside our design bubble and put ourselves in our users' shoes.

During the speed dating, we were able not only to validate the concepts (that they match our users' needs) but also to identify some concerns our stakeholders have. Most of the concerns were around privacy. For example, Michelle questioned whether all conversation, personal or medical, should be captured by the device. This feedback made us think about how we can give our users more control over the information that is gathered, and also made us question a future AI capability in which the AI could distinguish personal from medical conversation.

Week 10: April 1

During this week, we focused our effort on prototyping and scenario making. We were pretty set on the general concept from our speed dating last week, so we wanted to move on to quick prototyping to figure out the exact components we need to design. With foam core and quick grayscale wireframes for the user interface, we created a video scenario to get additional feedback from our remote participants. With the video scenario, we wanted to get feedback on the desirability/feasibility of our general concept, co-brainstorm and populate content for the user interface, and validate the interaction sequence of our idea.

writing the script, generating wireframes, shooting video footage

During the scriptwriting process, we realized we needed to better understand the exact context and communication that could happen between a medical student and a doctor. Therefore, we consulted a 4th-year medical student from Pitt on the script to capture a realistic conversation. I really appreciated that we had made close connections with medical students while exchanging messages during the text diary study, so they were aware of the context of our design project and we could casually ask for help as needed.

After making the video scenario, I realized that the fidelity of our wireframes might be too high for this stage. Choosing the right fidelity for the design stage is crucial to getting appropriate feedback. Although we tried to make it look as low-fidelity as possible by using only grayscale for the wireframes, the UI animation and clean layout made our prototype seem more polished. At the same time, showing the interaction sequence and animation pattern was necessary, since we were planning to test with remote participants. I realized there are pros and cons to testing early concepts with a higher-fidelity (mid-fi) prototype. To get appropriate feedback from our remote participants, we will have to make sure to tell them in advance that this is a rough concept sketch, and we should construct our questions more carefully (in a more straightforward manner).

Week 11: April 8

During this week, we focused our effort on surveying the video scenario concept, developing personae, mapping the user journey, and establishing visual styles. We worked to finalize our idea and generate a framework for prototypes (personae, journey map, and visual style) this week because we wanted to use next week to create mid-fi prototypes of our screen and voice interfaces. With the mid-fi prototypes, we hope to make working hi-fi prototypes during the MIT Hackathon event on April 13–15.

survey on video scenario

To get more feedback on our video scenario from our remote participants, we developed a survey containing nine questions, each with a picture referring to a specific moment in our video. We first sent out the questions and video by email (for participants who were unavailable for phone calls). Soon after, we realized that asking long questions over email would put a high cognitive load on our participants, because (1) they might be overwhelmed by the list of long questions, and (2) they would have to remember all three scenarios in the video to answer our questions. So instead, we decided to create a survey that asks questions sequentially, similar to a human-to-human conversation, and provides a visual reminder of the scenario each question refers to.

personae development

We converted our research into four student personae and an AI persona. The text diary study and primary research greatly contributed to the student persona development, while secondary research combined with multiple ideation exercises led to our AI persona. With the student personae, we defined the targeted student group's strengths and weaknesses, behaviors, needs and goals, and motivations. For the AI persona, we tried to communicate a flexible AI personality and how the AI's teaching model can adapt based on the user type (novice vs. expert). I believe creating research-based personae is very useful for the design team in the ideation phase, because the personae act as a tangible guideline the team can follow to ensure seamless integration of their product/service into the user's workflow by producing designs that actually speak to the needs and desires of the targeted user group.

journey map

With the journey map above, we wanted to identify where and how our AI system could intervene in the user's current medical workflow. This journey map helped us understand the exact role of our AI system (by distinguishing the roles of the human and the AI, and the involvement of communication vs. medical knowledge) and think about how the AI can facilitate students' education.

gathering visual inspiration

It is important that every component in the design system (including voice and visual interfaces) look, feel, and behave consistently. Although it is a little early to set a visual voice at this point in our project, we wanted to start gathering visual inspiration and set a flexible visual guideline to prepare for hi-fi prototyping during the MIT Hackathon event in the coming week. I found it useful to define the AI persona prior to developing the visual guideline, because the adjectives we created to describe our AI's personality greatly helped me search for and gather inspirational examples, color schemes, motion behaviors, and typeface choices.

Week 12: April 16

This week was a heavy prototyping and research synthesis week for us. We prepared assets and plans for the upcoming MIT Hacking Medicine hackathon, analyzed and synthesized the video scenario feedback, and summarized our research process into a concise presentation to communicate our findings and next steps to our classmates.

Assets and plans for the MIT Hacking Medicine hackathon
We prepared some primary assets for the prototypes because we wanted to start realizing our idea at the MIT Hackathon event. In the planning process, we realized it would be better for us to split into two groups to produce as many of the components that need developer help as possible: e.g., the voice user interface and the virtual reality training module. We also reframed our idea to match the event prompt.

Visual styles

Since the Hackathon has a significant focus on "building and making," we studied different visual styles ahead of time so we could jump right into the 'building' aspect. Below are the different visual styles and UI elements we studied.

sofia pro (blue theme, teal theme)
apercu pro (blue theme, teal theme)
GT sectra fine (blue theme, teal theme)
apercu pro (different blue options and ui example)

Outcome

Although we planned to use the Hackathon as a prototyping opportunity, we were not able to get enough people from diverse backgrounds to join us and form a team. So instead, we each decided to join teams that could be valuable for building a framework for our project. Angela and I joined a group focused on making medical interpretation services more accessible, because the topic deals with communication and we saw potential to build a framework for a patient-API-enabled service or VUI. I was also personally enthusiastic about the topic because it aligns well with my interests as well as my thesis topic. Zahin and Jeffrey each joined different teams.

It was very hectic and busy, but at the same time the experience was valuable and fun. Luckily, the people we met at the Hackathon were very enthusiastic about the topic and respectful of each other. I believe the collective motivation in our team helped us move through the design process much more quickly and efficiently. If I were to create the same product alone, it would have taken much more time and mental energy than two days. (Thanks to the MediLingo Team!) With diverse backgrounds and expertise, we were able to create a simple but impactful solution that speaks to multiple pillars of interest (business, engineering, medicine, and design).

We initially thought about an on-demand remote interpreter service to provide timely, immediate care for patients with limited English proficiency. Along the way (with collective input from the interdisciplinary team), we looked into potential business models and feasibility within the current medical workflow and realized a critical gap we were not addressing: underutilized staff interpreters in the current system. Thus, we pivoted to maximizing on-site interpreters in the hospital with an effective matchmaking process and utilization of downtime between appointments.

Concept video

With this concept and a working video chat prototype, we won the $1,000 Athena Health Best Use of API award!

Week 13: April 23

During this week, we focused our effort on developing a research protocol for our user testing and on UI and VUI prototyping: in-hospital (voice notifications and Apple Watch reflection) and at-home training modules (VR and screen versions). We first created a list of the different options we want to test with users to see which components we need to design.

After we decided on the different options we want to test with users, we developed a research protocol to streamline the testing process. Although a user testing sequence could be as simple as 'give the user options A and B and ask for a preference,' and many would think it unnecessary to write down each protocol, I found the research protocol very helpful because it made everyone aware of the objective and procedure of the user testing. Also, the written protocol enabled us to quality-control each user test (same sequence, questions, and elements) so that we could make apples-to-apples comparisons of the user feedback we gathered.

We then started designing the components needed for successful user testing: sample scripts, the Apple Watch UI, the VUI notification script, and the at-home application UI, including the training modules (screen and VR versions) and the communication analysis. I started with a quick flow sketch of how users would click through the screens based on the scenario we designed. Making a flow chart first helped me identify the screens I needed to design in order to convey our concept.

Different visualizations of the detailed communication analysis.

Week 14: April 30

This week was about user testing. We went to the UPMC cafeteria again to recruit, and were able to user-test our concept with 7 medical students and 5 individuals not related to medicine.

The user testing was successful, and we were able to validate many of the problems we had identified and test our assumptions about the app sequence, technological preferences, and visualization. We also tested the usability of the app, such as the clarity of calls to action, the logical flow of the sequence, the length and tone of the written language, hierarchy, etc.

During the user testing, I realized that incorporating different learning methodologies, such as positive reinforcement, motivation/incentives, and self-reflection, would greatly help user engagement and learning. For the at-home conversation redo, we initially assumed that our users would prefer a VR environment to a screen conversation because we believed VR offers a more immersive experience. However, we learned that our users prefer the screen conversation because it provides a unique opportunity to self-reflect on their facial expressions and body language. The majority of users also commented that a screen feels more approachable for initiating conversation because of existing conventions such as FaceTime and Skype.

Week 15: May 7

During this week, we finalized our concept, Ora, by putting together a presentation and concept video. While making them, I was reminded of the importance of storytelling. Since the audience for our presentation would not have any prior exposure to the cumulative research that guided us through the design process, we had to find a way to efficiently and effectively tell our journey to contextualize our design solution. In doing so, we went through multiple iterations of our story sequence to best tell the story of Ora. We also thought about potential questions and prepared some extra slides to answer them: privacy, Ora's machine learning algorithm, Ora's capabilities, etc.

Presentation:

Concept Video: