Imagine getting a message that your best friend, bachelor since forever, is finally engaged to the love of his life. And as you read the message, your overjoyed reaction attaches to his text bubble as an emoji while you smile in real-time.
Like so many of us do today, we deliberately choose an emoji or text-based face while texting to carry the conversation in the ‘ideal’ way. Yet texting also strips away the nonverbal cues present in face-to-face interaction: there is no way to read body language from behind a screen, so a smiling emoji is far from a truthful tell of a person’s emotions.
To test the effectiveness of expression-triggered emoji, my predecessors at the University of Washington created ReactionBot, a system that reads users’ facial expressions through their webcams and automatically attaches corresponding emoji to their text messages on Slack.
The study was conducted on linked pairs of people (known as ‘dyads’). All of the users were seated in front of webcams with their facial expressions unobstructed. In the trials with ReactionBot, frames were captured from the webcams at set intervals while the participants messaged through Slack, and each frame was classified into a specific emotion. When the classifier’s certainty was high enough, the detected emotion was mapped to one of seven major emoji and attached to the message.
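The capture-classify-attach loop can be sketched in a few lines. Everything here is illustrative rather than taken from the paper: `classify_expression` stands in for the real webcam-based emotion model, and the specific emoji mapping and 0.80 confidence threshold are assumptions.

```python
# Minimal sketch of ReactionBot-style emoji attachment.
# All names, emoji choices, and the threshold are illustrative assumptions.

EMOTION_TO_EMOJI = {
    "happiness": "😄",
    "sadness": "😢",
    "anger": "😠",
    "surprise": "😮",
    "fear": "😨",
    "disgust": "🤢",
    "contempt": "😒",
}

CONFIDENCE_THRESHOLD = 0.80  # assumed: only attach when certainty is high


def classify_expression(frame):
    """Stub for the real facial-expression classifier.

    In this sketch a "frame" is already an (emotion, confidence) pair;
    a real system would run a vision model on a webcam image here.
    """
    return frame


def emoji_for_frame(frame):
    """Map a captured frame to an emoji, or None when confidence is too low."""
    emotion, confidence = classify_expression(frame)
    if confidence >= CONFIDENCE_THRESHOLD and emotion in EMOTION_TO_EMOJI:
        return EMOTION_TO_EMOJI[emotion]
    return None


def attach_reaction(message, frame):
    """Append the detected-emotion emoji to a message, if any was detected."""
    emoji = emoji_for_frame(frame)
    return f"{message} {emoji}" if emoji else message
```

For example, `attach_reaction("Congrats!", ("happiness", 0.93))` attaches the smile, while a low-confidence frame leaves the message untouched.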
All of the pairs were instructed to complete a decision-making task with no communication outside of Slack messaging.
The emojis were attached in one of two ways:
1) Receiving Scenario, where User A reacts to a message sent by User B, and A’s emoji attaches to B’s text. (I laughed at your message.)
2) Sending Scenario, where the emotion of User A, who is typing the message, is sent along with the text to User B. (I smiled while sending this message.)
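The difference between the two scenarios is whose expression attaches to whose message. A hypothetical sketch makes the data flow concrete; the `Message` record and function names are mine, not from the study.

```python
# Illustrative model of the two attachment modes; names are assumptions.
from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str
    text: str
    # list of (who_felt_it, emoji) pairs attached to this message
    reactions: list = field(default_factory=list)


def sending_scenario(sender, text, sender_emoji):
    """Sending: the sender's own expression travels with the outgoing text."""
    msg = Message(sender, text)
    if sender_emoji:
        msg.reactions.append((sender, sender_emoji))
    return msg


def receiving_scenario(msg, reader, reader_emoji):
    """Receiving: the reader's expression attaches to the sender's bubble."""
    if reader_emoji:
        msg.reactions.append((reader, reader_emoji))
    return msg
```

In the sending scenario the reaction is stamped before the message leaves; in the receiving scenario it is added after the other person has read it, which is exactly the "your reaction attaches to my bubble" effect from the opening example.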
The initial hypotheses predicted that users who chatted with ReactionBot enabled would manually use fewer emojis and report higher social presence (how interdependent the two users are, how connected they feel, etc.) with their partners.
The experiment found that while the overall number of emoji used was approximately the same with or without ReactionBot, participants used fewer emoji of their own accord when ReactionBot was in place. The system picked up on a fair share of the nonverbal cues that users would otherwise have had to input themselves: the majority of the emoji in ReactionBot conversations were automatic rather than self-triggered.
The conclusion reached through post-study interviews was that ReactionBot provided valuable nonverbal cues, offered more genuine feedback, and increased self-awareness of emotions. In conversational situations (e.g. an online psychiatrist or doctor’s meeting) where facial cues are telling factors of communication, ReactionBot increased trust.
However, contrary to the initial hypothesis, ReactionBot actually reduced behavioral interdependence between users. Participants expressed that the system created some distraction from their task. In particular, they reflected that certain environments would not benefit from this kind of emotion portrayal.
In business or task-based situations, the emoji were unnecessary or even hurtful to the success of the activity. When ReactionBot was incorrect, some participants were surprised by the misleading emoji (though this could be mitigated by a more sophisticated version of the software).
An interesting extension of this research is whether expression-triggered cues can accelerate the formation of a relationship online. Perhaps ReactionBot’s functions are relevant and worthwhile to consider in counseling or dating websites. A follow-up study may help better demonstrate the specific role of sharing facial expression in deeper relationship formation.
For more details, please see the full paper, ReactionBot: Exploring the Effects of Expression-Triggered Emoji in Text Messages, published in November 2018.
In conjunction with @Prannay Pradeep at the University of Washington.