Design for Well-being: Try Not to Laugh or Smile

Nicholas Simons
Published in Bucknell HCI · Nov 13, 2017 · 7 min read

Introduction

Everyone has, at one time or another, watched a “Try Not to Laugh” or “Try Not to Smile” video on YouTube. In one of these challenges, a series of funny clips is shown, and the viewer is expected to watch them without laughing or smiling.

Some people take the challenge very seriously: they try to keep a straight face the entire time and take satisfaction in knowing they never smiled. Others just want to see the funny videos and laugh at the ones they like most. How can modern facial recognition and affective computing help satisfy both of these experiential needs at once?

Demo Video for our Try Not to Laugh or Smile system

Brainstorming

Brainstorming was difficult for our team at first because we were not sure how we should use the Affectiva emotion data we were going to capture. One of our first ideas was to detect which emotion a person was feeling and then display an image related to that emotion, quickly building a collage of the emotions detected from the user. A similar idea was to have emotions trigger different songs, so that when you feel sad, a sad song would play.

However, ideas like these can feel overwhelming in practice. If an app is controlled directly by the user’s emotions, the user may feel a lack of control when interacting with it. When designing a user experience, it is important to consider uncertainty about the user’s goals: people do not yet know what they are doing when they first open an app, and it is very difficult to consciously control one’s emotions. Together, these could easily trigger a chain of actions in the app that the user never meant to set off.

We continued to brainstorm and developed a new idea whose affective elements were a little further removed from the direct user interface. We wanted to promote well-being and good feelings, so naturally we went with an idea that would promote laughter. For example, we thought it would be nice if people could watch a video and then go back and rewatch the parts they really liked. This way they could relive good feelings, and Affectiva’s emotional input would not be an absolutely dominating force in the app. We made a quick conceptual prototype of a web app that shows a series of funny videos from one of the many “Try Not To Laugh” challenges on YouTube; each time you smile or laugh, text pops up telling you which parts you found funny.

Development

We had some experience with JavaScript and HTML, so we were able to create the skeleton of the website with ease. The difficult part came when we started incorporating the Affectiva library to detect whether or not you smiled or laughed while watching the video. To determine this, we tracked the emotion and expression classifiers joy, smile, and browRaise. By adjusting the detection threshold for each classifier, we could make sure the user was actually smiling or laughing rather than grimacing or just twitching their face. We experimented with the thresholds throughout development to find out what was too sensitive and what was not sensitive enough.
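As a rough sketch of what this setup looks like with Affectiva’s JavaScript SDK: the threshold values and the handleLaughState hook below are illustrative, not our exact tuning.

```javascript
// Assumes affdex.js (Affectiva's JS SDK) is already loaded on the page.
// Attach the camera detector to a container div, per the SDK's examples.
var divRoot = document.getElementById("affdex_elements");
var detector = new affdex.CameraDetector(
  divRoot, 640, 480, affdex.FaceDetectorMode.LARGE_FACES);

// Enable only the classifiers we actually use.
detector.detectEmotions.joy = true;
detector.detectExpressions.smile = true;
detector.detectExpressions.browRaise = true;

// Illustrative thresholds on the SDK's 0-100 scale, tuned by trial and error.
var JOY_THRESHOLD = 50;
var SMILE_THRESHOLD = 60;

detector.addEventListener("onImageResultsSuccess",
  function (faces, image, timestamp) {
    if (faces.length > 0) {
      var f = faces[0];
      var isLaughing = f.emotions.joy > JOY_THRESHOLD &&
                       f.expressions.smile > SMILE_THRESHOLD;
      handleLaughState(isLaughing); // our hook; see the time-logging sketch below
    }
  });

detector.start();
```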

A screenshot of our system, including embedded video and time frame links

When our system detected you smiling or laughing, we noted the current time in the YouTube video, and again when you stopped smiling. We displayed that time frame beside the video as a hyperlink that takes you back to the moment you started to laugh. Since we did not want you to miss the funniest part of the video, the hyperlinks actually jump to five seconds earlier, so you also see the setup for the funny moment. We believe this was a great feature to include because it essentially creates a highlight reel of what you personally found to be the funniest parts of the video.
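Here is a sketch of that time-logging logic, built on the YouTube IFrame Player API (player is a YT.Player instance; the element id and helper names are ours for illustration):

```javascript
var laughStart = null;   // video time (s) when the current laugh began
var REWIND_SECONDS = 5;  // jump back a bit so you see the setup

function handleLaughState(isLaughing) {
  var now = player.getCurrentTime(); // YouTube IFrame Player API
  if (isLaughing && laughStart === null) {
    laughStart = now;                  // laugh just started
  } else if (!isLaughing && laughStart !== null) {
    addHighlightLink(laughStart, now); // laugh just ended: log the span
    laughStart = null;
  }
}

function addHighlightLink(start, end) {
  var link = document.createElement("a");
  link.href = "#";
  link.textContent = formatTime(start) + " - " + formatTime(end);
  link.onclick = function (e) {
    e.preventDefault();
    // Seek to five seconds before the laugh began.
    player.seekTo(Math.max(0, start - REWIND_SECONDS), true);
  };
  var item = document.createElement("li");
  item.appendChild(link);
  document.getElementById("highlights").appendChild(item);
}

function formatTime(s) {
  var m = Math.floor(s / 60);
  var sec = Math.floor(s % 60);
  return m + ":" + (sec < 10 ? "0" : "") + sec;
}
```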

During development, we also found that we were often unsure whether the webcam was detecting a user’s face at all. To account for this, we added a message that notifies users when they need to move into the camera’s view. We chose to display a message instead of showing the actual camera feed because we did not want users to feel self-conscious. The point of the webpage is to focus on the YouTube video being played; if you could see yourself on camera, your attention might be pulled away from the video.
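The face-presence check itself is simple: Affectiva’s per-frame callback reports the faces it found, so an empty array means nobody is in view. A minimal sketch, assuming a face-notice element in the page:

```javascript
detector.addEventListener("onImageResultsSuccess",
  function (faces, image, timestamp) {
    var notice = document.getElementById("face-notice");
    // Show a gentle prompt instead of the raw camera feed.
    notice.style.display = (faces.length === 0) ? "block" : "none";
  });
```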

Formative Testing

We ran tests on a few users to see how they would react to the videos and to figure out which expressions we needed to count as laughs. We ran into a couple of problems with smiles being recognized while people were speaking. This was probably because we were checking so many specific Affectiva values, like “joy” and “smile”, each of which can be detected at different rates across many different kinds of faces. So we decided to also look at the value of “browRaise”, or raised eyebrows, figuring that if people are really happy, you can see it in their eyes as well as their smile. However, this made happiness too hard to detect, because not everyone raises their eyebrows when they laugh.

A user testing our system

We then found an Affectiva value called “valence”, an overall measure of the positivity or negativity of a facial expression. This worked much better: it triggered consistently when people were laughing and was not accidentally triggered by speaking.
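With valence, the per-frame check collapses to a single signed score. Affectiva reports valence on a -100 (negative) to 100 (positive) scale, so one threshold suffices; the cutoff below is illustrative:

```javascript
var VALENCE_THRESHOLD = 30; // illustrative cutoff on Affectiva's -100..100 scale

detector.detectEmotions.valence = true;

detector.addEventListener("onImageResultsSuccess",
  function (faces, image, timestamp) {
    if (faces.length > 0) {
      // Valence stays near zero during neutral speech, which avoids
      // the false positives we saw with smile/joy alone.
      handleLaughState(faces[0].emotions.valence > VALENCE_THRESHOLD);
    }
  });
```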

Strengths, Weaknesses, and Improvements

Perhaps the greatest strength of our project was the enjoyable experience it provided. The goal of this project was to design for the well-being of users, and, since “laughter is the best medicine,” we decided that giving users a fun and enjoyable experience was a great way to promote their emotional well-being. Numerous users who tested our application told us how much pleasure they got from the system. “It is a very fun idea,” stated one demo user. “I like how [the system] enhances a viewing experience,” commented another. Judging by responses like these, the system was quite effective at amusing users and supporting their well-being.

Our system also succeeded thanks to its time-logging feature, which was among the features users praised most. “I like … that it allows you to return to the points in the video you know you enjoyed,” said one user, a sentiment several others shared. We were pleased that this feature proved so popular, since we had hypothesized that a highlight reel of a video’s funniest moments would be enjoyable.

Example of time frame hyperlinks

Even though feedback on our project was largely positive, there is one glaring weakness in our system: the user interface. Because of our limited development time, we were unable to focus on creating an aesthetically pleasing interface, and our users agreed that this was a major shortcoming. Specifically, one user found the hyperlinks to recorded moments distracting, and another complained that “the video was not centered.” We recognize that the web interface was a weakness we would address in future updates.

Given more time, we would also make further improvements. For example, we would support more videos. As it stands, only three are available; ideally, users would be able to select any video on YouTube to watch with our smile and laughter detection. Several demo users suggested this direction for the system, and we agree. Additionally, we would like to improve the detection system so that it accurately recognizes even more facial expressions while users watch videos. After all, videos don’t only make people laugh or smile; some videos may anger, sadden, or scare their audiences. It would be interesting, and potentially fulfilling, to let our system detect expressions related to these emotions and others.

Conclusion

At the end of the project, we were very proud of what we had managed to build. With all the features of the Affectiva library, it would have been easy to create a project whose main point was simply to showcase the library. Our focus was to make a project that utilized the library while still centering on the well-being of our users. That is why we implemented only the features relevant to smiling and laughing, as our way of serving the specific needs of people who watch Try Not to Laugh challenges. The videos are funny on their own, and our addition of the Affectiva library complemented them perfectly. Given more time, we could have improved the overall look and smoothness of the project, but the core functionality is exactly what we wanted. The final result matched our vision, and we were proud to have implemented it.


Nicholas Simons
Bucknell HCI

Current MHCI student at CMU, former CS & Psych student at Bucknell, future dog owner?