Research in Humor-Enacted Robot Design Implementation

My teammates and I are interested in exploring a unique form of Human-Robot Interaction: using humor to dissolve unpleasant, negative emotions. We identified three major robot design implementations (voice variance, gestures, and facial expressions) and four humor conditions (control, aggressive, affiliative, and self-defeating) that may affect a robot's effectiveness in reducing negative emotions. Our project served as a pilot study to inform a future research direction in the Cornell Communication Department: exploring how a robot may be used as a humorous third-party mediator in situations of team conflict.

1. Brainstorming and Literature Review

In the brainstorming session, we focused on identifying the strengths and weaknesses of a robot in social interactions compared to human beings. Afterward, each group member picked a specific design feature to work on: gesture, voice, or facial expression.

Brainstorming session among group members to find key features that impact the social ability of a robot, especially compared to a human.

After the brainstorming session, we reviewed related literature to guide us in creating the prototype. We focused on answering the following questions: 1) How do the robot's features contribute to its ability to perform humor? 2) How do people perceive the robot mediator's role in different humor situations? 3) Do the robot's features and the humor conditions interact with each other? That is, are certain features especially effective under a certain humor condition?

One of the most important functions of gesture is to add emotional cues to the dialogue. Recent research in embodied cognition and embodied psychology has shown that emotions are complex phenomena, often tightly coupled to social context; much of emotion is physiological and depends on embodiment (Panksepp, J. Affective Neuroscience, Oxford University Press, Oxford, 1998). It is believed that human gesture is not only part of the speech pattern but also has its own meaning and functions. There is a large body of qualitative research describing the relationship between gestures and emotions, such as Frijda's (citation) table of emotional body movements (Table 1).

2. Design Prototyping

We used still-image animation to simulate the conversation among the actors based on IRB-approved scripts.
Sketches for Gesture Designs.

To design more interactive gestural communication, I categorized gestures into three types: social-norm interaction gestures, passive keyword-enacted gestures, and active keyword-enacted gestures.

Social-norm interaction gestures include gestures in response to everyday social interactions, such as greetings and manners. Passive keyword-enacted gestures are generated by keywords in response to a human during an interactive conversation. For example, if someone says "good work" to the robot, one gesture the robot may generate is the initiation of a high-five (one hand up in the air, palm facing the speaker). Active keyword-enacted gestures are generated by the robot itself to enhance the emotions or clarify the message spoken by the robot.
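As a rough sketch, the three gesture types above could be driven by simple keyword lookup tables, with passive gestures triggered by human speech and active gestures by the robot's own speech. The gesture names and trigger keywords below are illustrative placeholders, not the mappings used in our prototype:

```python
from typing import Optional

# Illustrative keyword-to-gesture tables; the actual mappings in our
# prototype were hand-designed per script, not exhaustive like this.
SOCIAL_NORM_GESTURES = {
    "hello": "wave",
    "goodbye": "wave",
    "thank you": "nod",
}

PASSIVE_KEYWORD_GESTURES = {
    "good work": "high_five",  # one hand up, palm facing the speaker
    "great job": "high_five",
}

ACTIVE_KEYWORD_GESTURES = {
    "excited": "raise_both_arms",
    "important": "point_forward",
}


def select_gesture(utterance: str, speaker: str) -> Optional[str]:
    """Return a gesture name for the utterance, or None if nothing matches.

    speaker is "human" for passive (responding) gestures and "robot"
    for active (self-generated) gestures.
    """
    text = utterance.lower()
    # Social-norm gestures apply regardless of who is speaking.
    for keyword, gesture in SOCIAL_NORM_GESTURES.items():
        if keyword in text:
            return gesture
    table = PASSIVE_KEYWORD_GESTURES if speaker == "human" else ACTIVE_KEYWORD_GESTURES
    for keyword, gesture in table.items():
        if keyword in text:
            return gesture
    return None
```

In this sketch the speaker's role decides which table is consulted, which mirrors the passive/active distinction above: the same keyword logic serves both, but only human speech triggers responding gestures.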

Sketches for facial expression and robot appearance. For the purpose of this study, we decided to use only the facial expressions.

3. Rapid Prototype Video Example

This is one of the video prototypes; it uses the affiliative humor condition along with changes in tone, gestures, and facial expression.

4. Data Analysis

We collected data from over 50 participants in a randomized between-subjects experiment. Each individual watched a robot video with a randomly assigned combination of design implementations (see chart for details). We then used ANOVA to analyze the relationships among the different conditions.

G stands for Group. Each participant was randomly assigned to one of groups G1 through G7 and watched that group's videos (G1 watched G, G1–1, G1–2, …, G1–7).
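As a minimal illustration of the analysis step, a one-way ANOVA across the four humor conditions can be run with SciPy's `f_oneway`. The ratings below are synthetic placeholders for demonstration only, not our actual participant data:

```python
# Illustrative one-way ANOVA on comfort ratings across humor conditions.
# The ratings here are synthetic placeholders, not the study's data.
from scipy import stats

ratings = {
    "control":        [5, 6, 5, 6, 5, 6, 5],
    "affiliative":    [7, 6, 7, 6, 7, 7, 6],
    "aggressive":     [3, 4, 3, 4, 3, 3, 4],
    "self_defeating": [4, 3, 4, 3, 4, 4, 3],
}

f_stat, p_value = stats.f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one humor condition differs significantly in mean rating.")
```

A significant F-statistic only says that some condition differs; identifying which pairs differ (e.g., affiliative vs. aggressive) would require a post-hoc test such as Tukey's HSD.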

Looking at participants’ responses across humor conditions, we concluded that affiliative humor was rated the highest (the most positive and comforting) compared to the other conditions. However, even though the control condition was rated lowest in humor effectiveness, participants found both the control condition and affiliative humor pleasant, while aggressive and self-defeating humor did not provide a positive environment (Graph 1). According to the statistics, most participants agreed that self-defeating and aggressive humor are funny, but they create a negative rather than a positive, comforting environment.

In terms of design implementations, although we expected the robot with all three features implemented to be the most effective and comforting of all the scenarios, the data show that a robot with just voice and gesture was rated the most effective at creating a positive environment and comforting people (Graph 2). One interpretation is that voice and gestures are enough to make a robot socially effective, and participants may feel overwhelmed when facial expression is added. Alternatively, our facial-expression prototype may not be effective in itself and may need to be modified.

As a pilot study, this research aims to provide a more accurate prediction for future research, and it has some limitations. First, we used a rapid robot prototype instead of a high-fidelity prototype because of budget and time constraints. Second, the sample size was small; a more accurate conclusion could be drawn with a larger sample.

5. Conclusion

In this research project, our team provided a detailed analysis of the effectiveness of three major robot design features (voice, gesture, and facial expression) in promoting a positive environment under four humor conditions (control, affiliative, aggressive, and self-defeating). We evaluated the effectiveness of a robot mediator with different design implementations across the humor conditions, along with the comfort it provided to the person in conflict. The results show that using just voice and gesture is the most effective at creating a positive environment, and that affiliative humor and the control condition provide more positivity and comfort than the aggressive and self-defeating conditions. These findings offer better predictions and evidence for the client's future research exploring how a robot may be used as a humorous third-party mediator in situations of team conflict.

Written by Ming Cheng.