Facilitating conversations that promote bi-directional cultural learning

Phase 4 | Evaluative Research | 03.19.2018

Process documentation for Interaction Design Studio II, taught by Peter Scupelli at Carnegie Mellon University. The team comprises Zach Bachiri, Devika Khowala, Hajira Qazi, and Shengzhi Wu.

We regrouped after spring break and went over the feedback from our last presentation on generative research. The feedback was very constructive and helped us outline some vital design implications for our design going forward:

  • To learn social norms, interactions have to take place between humans.
  • An AI system can help facilitate that interaction but cannot replace it in any way.
  • Enabling a learning environment that is bi-directional and helps both Americans and international students learn about each other's cultures adds a unique value to the proposition.

Based on our research, initial speed date findings, and feedback from our peers and faculty, we have zeroed in on a direction we are interested in exploring as a group.

We are focusing on building a system that leverages AI to facilitate a two-way learning experience through a conversational social network platform.

Why a Social Network Platform?

Aren’t there many social network platforms already? How would ours be different?

During the generative stage, we did some quick speed dating with the main concept directions we had. We received positive feedback on two of those directions, but bi-directional learning could be most naturally facilitated on a social network platform.

We feel that, rather than looking for more concepts, our time and research would be better invested in developing and testing the interactions that would occur through the social network platform.

We started defining what our system would look like:

What would the system be like?

  1. Types of facilitation:
  • Prompt new topic of conversation (cultural difference)
  • Use of native slang / terms (e.g. OMG)
  • Social norms explanations / suggestions

2. How would the AI assistant facilitate?

  • How do you promote interactions between international and native students?
  • What situations call for intervention?
  • What methods are appropriate for intervention? When?
  • What data is required for machine learning?
  • What platforms will we employ for our facilitation tool?

3. What would the platform include?

  • Student-to-student chat
  • “David bot” facilitator / assistant (open questions)
  • Interest-based buddy matching system that gives 5 (?) nameless / faceless options of people to connect with
  • Group chats (form their own, with some structure provided)
  • Announcements (?)

Concerns and how we’ll address them:

  • Users may hesitate to reach out to strangers; how do you build trust with strangers?
  • Privacy concerns
  • Does it create dependency? How long is it used once something is learned?
  • There is a range of people from different countries and cultures arriving in the US; how do you address their specific needs, backgrounds, and proficiencies?
  • Understanding and being more sensitive to the needs of one immigrant will help interactions with all immigrants
  • What is the motivation / incentive for Americans to use (and users in general)?
  • What platforms will we use? Just mobile?
  • Will we create a network for international students to connect and share information, too?

Bruce / Peter conversation

  • How does our intervention fit into orientation programs CMU already provides?
  • Are we focusing on undergrads / grad students?
  • Does the intervention change depending on the city students are arriving in? Rural vs. urban
  • On a campus, everybody is in a new environment and learning new things
  • How do we address people with different levels of exposure and proficiency in American culture and the English language?
  • Might want to talk to somebody involved in undergrad orientation (and international student orientation) about what they teach incoming students

We’ve decided to break down the process into smaller tasks and tests that can help us identify the strengths and challenges of creating a system like this. We think this strategy will help us move ahead by allowing us to take into consideration the smaller aspects of such interactions.

Next Steps:

1. Conversation research

  • Find participants
  • Write instructions for participant conversations
  • Research existing platforms
  • Will match people within the entire Freshman class, then offer more specific details / segmentation.

2. Rough prototype of platform

3. Test the bot / facilitator

Initial test with the chatbot

Timeline for Evaluative Research

Week of 03.26.18

Meeting with the Office of International Education (OIE)

Devika arranged a meeting with OIE for March 28, which ended up being very useful. From it we learned that our intervention will be most useful for grad students, because most undergrad international students have lived in the US or attended international schools, and so have greater fluency with the language and American culture. Also, undergrad international students make up a very small share of the total international student population at CMU. The women we spoke with mentioned that there is not a lot of engagement with OIE prior to the students’ arrival in the United States, so pre-arrival is a good point of intervention. It is also a time when students are most excited and at an emotional high (our journey map exercise confirms this), so it would be the best intervention point to capture students’ attention. Other points that were raised:

  • Transitional issues exist for both international students and American students
  • The OIE does an “adjusting-to-life presentation” for international students at orientation. That could be useful for our research
  • 3500 international students are from India and China alone, so people from other countries don’t have as much of a community on campus and can feel isolated
  • A large part of OIE’s work is connecting students to other relevant resources. Might our platform help with that?

The Platform

We spent a lot of time this week planning what the platform would look like. We began by listing the features the platform would have, then the details of how it would work. The following is a list of features, details, and outstanding questions.

General features / structure

Onboarding

Information provided by University

  • Language proficiencies / TOEFL scores
  • Department / Degree pursuing
  • Whether international student or not

Entered by user

  • Bio
  • Interests — categories of options that user selects, not entered manually
  • Photo
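
To make the onboarding data concrete, here is a minimal sketch of how a profile record might be structured; the field names and types are our own assumptions rather than a final schema:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class StudentProfile:
        # Provided by the university when the account is created
        student_id: str
        department: str
        degree: str
        is_international: bool
        language_proficiency: float  # e.g. a normalized TOEFL score, 0.0-1.0

        # Entered by the student during onboarding
        bio: str = ""
        photo_url: str = ""
        interests: List[str] = field(default_factory=list)  # picked from preset categories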

Profile page

Student-to-student chat

  • Can we incorporate video chat, and then give feedback after the fact?
  • What about training on things like eye contact and nonverbal behavior? Can we intervene and prepare for the shift from text-based to face-to-face conversation?

Markers

In individual and group chats, users can mark which words or portions of a conversation are confusing, at which point the bot will prompt the other user to explain. The person on the other side of the conversation will not know what specifically was marked. Users mark what is…

  • Difficult language
  • Confusing
  • Cultural difference / rude?

The bot learns from markers and explains back to users.

Most users understand “reactions” as a response or signal to the other person in the conversation. How can we overcome users’ mental models of conversational chats and reactions? Should the system collect things marked confusing so that users can review them later?
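
To think through the marker interaction concretely, here is a minimal sketch of the flow we have in mind; the marker types come from the list above, while the function names and prompt wording are placeholders rather than a real implementation:

    from dataclasses import dataclass
    from enum import Enum

    class MarkerType(Enum):
        DIFFICULT_LANGUAGE = "difficult language"
        CONFUSING = "confusing"
        CULTURAL_DIFFERENCE = "cultural difference"

    @dataclass
    class Marker:
        message_id: str
        marked_text: str        # the word or span the reader found unclear
        marker_type: MarkerType
        marked_by: str          # the reader's user id, never shown to the sender

    def handle_marker(marker: Marker, sender_id: str, send_bot_message) -> None:
        """Prompt the sender to explain, without revealing exactly what was marked."""
        send_bot_message(
            to=sender_id,
            text="Something in your last message may have been unclear. "
                 "Could you rephrase it or add a short explanation?",
        )
        # The marked span is kept so the bot can learn which terms and norms
        # need explaining, and so the reader can review collected markers later.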

Bot facilitator / assistant

  • Users can ask it open-ended questions
  • Intervenes in both one-to-one and group chats because that increases the potential for machine learning
  • Types of facilitation:
    Prompt new topic of conversation
    Explanation of native slang / terms (e.g. OMG)
    Explanation of social norms
  • Can the bot support anxieties of incoming students?
AI Persona for our bot named Alex

Interest-based buddy matching system

We spent a lot of time talking about and debating how the matching system would work. We thought about several different options, and then decided that allowing users to choose to connect with one or several people from a few different matches gives them a sense of ownership and commitment to the relationship and also mitigates the problem of the “buddy” not working out due to personality clashes.

Mapping out and narrowing down options for matching

Matching is an important part of the learning process in our concept, so one-to-one conversation is important, and it will also help overcome initial anxiety about talking to new people. One-to-one matching is also important because deep and personal conversations don't usually happen in groups. Group chats are usually used to share jokes / posts / information, etc. However, we agreed that there are diminishing returns on the value of matching: the quality of conversation and relationship will go down as the number of matches increases.

Features of matching:

  • Offers five or so nameless / faceless options of people to connect with. Names and faces appear once users are connected in a conversation, in order to avoid biases.
  • The user can select one or all of the matches by clicking “Start Conversation.” Once selected, an automatic conversation is started to overcome the initial awkwardness of talking to a stranger.
  • AI matches based on needs and English / cultural proficiency
  • Users can search names and communicate with people other than those they’ve been matched with
  • Does a person have the option not to be recommended to or contacted by other people? Privacy setting options?
  • Can the user X out the people they don't want to talk to so new ones appear?
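
To make the matching idea more tangible, here is a rough sketch of how the top five anonymized suggestions could be produced; the scoring heuristic and field names are our own assumptions, not a settled algorithm:

    from typing import Dict, List

    def match_score(incoming: Dict, candidate: Dict) -> float:
        """Toy heuristic: complementary proficiency plus shared interests."""
        # Favor pairing a lower-proficiency incoming student with a
        # higher-proficiency conversation partner.
        proficiency_gap = max(
            0.0, candidate["language_proficiency"] - incoming["language_proficiency"]
        )
        shared_interests = len(set(incoming["interests"]) & set(candidate["interests"]))
        return proficiency_gap + 0.5 * shared_interests

    def suggest_matches(incoming: Dict, candidates: List[Dict], k: int = 5) -> List[Dict]:
        """Return the top-k candidates with names and photos stripped to avoid bias."""
        ranked = sorted(candidates, key=lambda c: match_score(incoming, c), reverse=True)
        return [
            {"match_id": c["student_id"], "interests": c["interests"], "bio": c["bio"]}
            for c in ranked[:k]
        ]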

Group chats

  • Users form their own groups, with some options and structure provided
  • Can group chats be used to invite people to cultural events, such as an #Events group?
  • A means for people to find new friends, other than those they’ve been directly matched with. AI can learn from this to develop a better matching system

Announcements

  • Bot can intervene or prompt user for university-related / orientation tasks
  • Can the bot learn from frequently asked questions and preempt them?

Wireframes

We each made rough wireframes for the landing page to start to think about how the platform would be structured.

Zach’s wireframes
Shengzhi’s wireframes

Conversational Research

Method

We plan to gather some sample conversations with incoming students, test conversations between three main pairings of students, and then analyze the findings.

  • Current international students and incoming international students
  • Current American students and incoming international students
  • Incoming American students and incoming international students

The CMU School of Design creates a Facebook group for each incoming class every year. We asked to be invited to the group so we could reach out to the incoming class to see if they would be interested in participating in our research. Once we had permission to do so and were invited to the group, we created a Google form that we sent out to the entire incoming class. We asked them to identify whether they were international students and whether they were most interested in communicating with current or incoming students.

Invitation to participate in research

We got ten responses, seven of whom were international students. Nine out of ten wanted to converse with current students. We weren't anticipating that, so we had to find other Master's students who were willing to communicate with incoming students. Thankfully, several people agreed to participate.

We next set out to match the students based on whether they were international students or not. If so, we tried to match them with American students; if not, we matched them with current international students. We wrote all the names of the participants on Post-its and matched them accordingly.

We then set up the conversations in Facebook by creating a message group with the two people who were matched. We sent an introductory message to facilitate, but then did not check back or interrupt the conversation until we ended the research 4–5 days later.

Results

Sample conversations

Some of the conversations went on quite a bit and some did not last more than a couple of lines of text. We printed out all the conversations on large tabloid paper, and then each read the conversations and highlighted potential points of intervention. Most questions were logistics-based or information-seeking, and the conversations covered three topic areas:

  • Pittsburgh: housing, climate
  • MA/MPS/MDes details
  • Personal background: career, interests, travel, exposure to US, etc.

We found that there weren't as many breakdowns in conversation as we expected, which may mean that we will have to reevaluate our approach a bit. Shengzhi brought up that when strangers speak, they're more likely to be friendly and polite, whereas with friends, people are more casual and less guarded, so breakdowns are more likely to happen. The participants were writing in complete sentences and in detail, so fewer misunderstandings were likely to happen.

We will each re-read the conversations and codify them for further analysis.

Some issues/questions that were raised:

  • We didn’t realize how much responsibility would be on current students to answer incoming students’ questions. Perhaps it would have been better to match incoming to incoming.
  • Can we prompt international students to share something about their culture, too?
  • Was there an observation effect from knowing the conversations would be analyzed later?
  • Most of the responsibility was on Americans to explain things about Pittsburgh, weather, school, etc. How can we make this more bidirectional so the onus isn’t on one person to teach?
  • International students are not really aware of the cultural differences they'll encounter until they get here and experience culture shock. Can the platform prepare them better?
  • Is our intervention occurring too soon in the transition process? Aside from logistics, what are we preparing them for? Person-to-person issues are only realized once here and experiencing culture shock.
  • A lot of the conversations died off. How do we sustain them and keep people engaged?

Week of 04.02.18

Conversation Findings

After reading and coding all of the conversations we collected, we were able to enumerate some key findings. They’re as follows:

  • The learning in the conversations was not bidirectional enough.
  • The timeline of the conversation seems to matter in this case. Culture shock is something that becomes apparent upon arrival, so people were not yet aware of the things they will need to learn.
  • There were not enough points of breakdown in these conversations to use as the only context for cultural learning on the platform.
  • There is a big difference in the needs expressed by incoming students vs. current students.
  • There are more opportunities for learning in the conversations than breakdowns in communication.
  • Keeping people engaged in conversations will be important, given how many conversations ended quickly.

In general, the main takeaway was that there were not enough conversation breakdowns with incoming students to focus solely on facilitating those conversations. As a result, we began to think about how we could create more proactive bidirectional learning in the platform. Specifically, features like directed prompts integrated into one-on-one conversations and/or a group with a bot suggesting cultural learning topics could be effective.

April 2nd meeting

Need vs. Want, Platform Features

Thinking about this more proactive system prompting, we realized that the topics of learning for international students map nicely onto a want/need matrix. We mapped the topics of learning into the four quadrants, and this helped us think about which features would address which topics.

The biggest learning from this exercise was identifying the “high need / low want” quadrant. This quadrant covers information that students are not aware they need but would benefit greatly from learning. Since they don't want it (or even know about it), the system will have to surface this information proactively. Prompts in conversation and prompts via notification can distribute this information.

The main interactions through which learning can happen are as follows:

  • Bot prompts directly into conversation
  • Bot prompts user through notification
  • User asks bot for information
  • User talks to other students
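
As a sketch of how the want/need mapping could drive which of these interactions the system chooses for a given topic (the quadrant labels and channel names are our own shorthand, not a final design):

    def delivery_channel(need: str, want: str) -> str:
        """Map a learning topic's want/need quadrant to how the system delivers it."""
        if need == "high" and want == "low":
            # Students don't know they need this, so the bot pushes it proactively,
            # either directly in a conversation or via a notification.
            return "proactive bot prompt (in-chat or notification)"
        if need == "high" and want == "high":
            return "answer when the user asks the bot"
        if need == "low" and want == "high":
            return "surface through student-to-student conversation"
        return "leave to organic discovery"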

Wireframes

With the insights we gained from the analysis of our conversation research, we also continued creating wireframes for the platform. We began designing the in-chat bot interaction and the matching components of the platform.

User Testing Plan

With our wireframes coming together, in the coming week we plan to begin testing our platform with international students. There are two main things we're hoping to gain insight into. First, we want to explore the best type of bot intervention in the one-to-one conversation. We're exploring more aggressive approaches, like the bot intervening directly in a conversation, and more passive ones, like allowing the user to expand information when they want. To do this, we are going to create different clickable prototypes for each of these concepts and speed date them with users.

Second, we want to determine the proper tone of the bot. Should it be serious? Casual? Playful? We hope to determine this by presenting sample dialogue that we have created and getting feedback.

Example platform dialogue and bot intervention
