Design for Wellbeing: Emotion-Driven GIF Generator

Jenna Slusar
Bucknell HCI
9 min read · Nov 12, 2017

In this design sprint, our team was tasked with using Affectiva’s emotion-recognition library to design a website that responds to user emotion, with the end goal of promoting wellbeing.

You can use our website at https://allanla.github.io/DesignForWellBeing/

The website uses emotional intelligence to decide the theme of the image that appears on screen. GIFs, memes, and inspirational quote images were chosen to improve the user’s emotional state because, regardless of the user’s original emotion, these images are meant to make them smile.

Getting Started

At the beginning of this sprint, our team decided it was important to first understand Affectiva, so that its limitations were known. After spending some time reviewing the library, the team came together as a group to start brainstorming ideas for designing for wellbeing. To prepare for this meeting, team members looked at existing companies already in this industry to get a feel for how they approach the problem. After looking at startups such as Smiling Mind, Muse, Spire, and more, we had ideas of what we wanted to do by following similar approaches.

These ideas included a meme website, where a meme is generated based on the user’s current emotional state. The meme is supposed to guide them toward another emotional state. For example, if a user was tired, a meme would be generated to make them more energetic.

An example of a meme that would map the user to Joy.

Another idea a team member had was showing inspirational quotes. An example of how this can be applied is if a user is sad, an inspirational quote will be shown focused on overcoming sadness (shown below).

Lastly, showing GIFs based on the user’s emotions was also proposed. However, using a GIF library would be somewhat more complicated to implement: this would not be a simple image upload, so it would incur a learning curve for our team.

Our team had difficulty deciding which avenue to take for the website. Each kind of image is capable of promoting the user’s wellbeing. Thus, the team chose to implement all three ideas behind a simple interface: separate buttons to generate a GIF, a meme, or a quote.

User Feedback

Before the team started to implement this idea, team members asked our community what they thought the best approach to promoting someone’s wellbeing was. The candidate approaches were:

  • No matter what the user’s emotion is, show them a happy image (doesn’t use Affectiva effectively)
  • Show an image that will change the user’s emotion to the better, opposite emotion (ex: Surprise → Calm, Sad → Happy)
  • Always show a happy image to the user, but base the theme off of their emotion (ex: Sad → funny GIF of Kim Kardashian crying)

Interviews

Approaching users with our ideas, we received helpful feedback that also complicated the project. Many people said it would be difficult to gauge exactly what kind of meme a user would want to see based on their mood. Everyone reacts differently, so it is unclear whether someone sad might want a meme that is more light-hearted or one that involves dark humor. The same holds true for determining what might cheer up people who are angry.

Another topic of discussion was that emotions come in different intensities. Not everyone is the same amount of sad all of the time. Different situations might leave someone slightly unhappy or extremely distraught; at times people might be mildly annoyed and at other times full of loathing. Being able to cater to a spectrum of emotions would be an important part of determining which memes and quotes our team should use.

Because everyone responds to emotions differently, the team conducted a bit more research and testing to figure out which memes most people would like to see given their current emotion.

Wizard of Oz Testing

We based our further research on the principle of “Employing socially appropriate behaviors for agent−user interaction” (Principles of Mixed-Initiative User Interfaces, Eric Horvitz). Thus, the team grounded the final product in the user feedback phase, because visual evidence is the best way to figure out which behaviors are appropriate. Following this logic, we decided to do Wizard of Oz Testing (WOT) in order to gather this evidence early on. The process works as follows:

  • Define user input actions
  • Have a participant use those actions to interact with the website
  • When the participant performs an action, the controller hits the corresponding button (or moves the mouse)
Our team’s version of Wizard of Oz Testing

The above picture portrays how our team used this method in our project. The user looked at the website on the left screen so that we could get their emotion from Affectiva’s raw data. Once the user expressed an emotion, we prompted them to say whether they wanted a GIF, meme, or quote. Then, a team member pulled up the chosen item on a different computer, based on the user’s emotion.

This showed our team some problems that could possibly arise:

  • What to do when the person shows no emotion
  • User does not know what emotions Affectiva picks up
  • Most of the time the user will not show any emotion other than happy, unless prompted to do so
  • Certain emotions were hard for Affectiva to detect

However, we did get some good feedback as well: users thought the generated memes were a good idea, suggesting that they are a socially appropriate behavior for promoting wellbeing. To fix the problems above, the team decided that when no emotion is detected, the site maps to Happy as the default. The team also decided that fear and contempt were too difficult for a user to portray and for Affectiva to detect, so our website only detects Joy, Sadness, Disgust, Surprise, and Anger.

Lastly, since users expressed that they really liked the memes, the team decided to err on the side of making users laugh. Thus, we decided to map all emotions to happy images, just with different themes (based on the user’s emotion), because we cannot know exactly which emotion a given user would want to feel to make them feel better.

User Testing

Student and professor laughing while trying our prototype.

After getting the first prototype working, the team went straight to gathering more user feedback. This time our team members focused on the principle of “Considering uncertainty about a user’s goals” (Principles of Mixed-Initiative User Interfaces, Eric Horvitz).

With this in mind, the main problem we found was that users did not understand that their emotion changed the theme of the images. To fix this, the team decided to change the background color of the site based on the user’s emotion.

Image our team used to map color to the user’s emotion.

Once the design decision was made to change the background color based on the user’s emotions, team members needed to research which colors would best fit the recognizable emotions. Color is an important design decision and websites such as this one provide information on emotional responses to certain colors.

For example, the site says “Yellow captures the joy of sunshine and communicates happiness” (Mihai), making it a fitting background color for users that portray Joy. To confirm these emotions, we used the color wheel seen above, which describes a range of colors and what emotions they represent. Each emotion was assigned to a color, hard-coding in the hexadecimal values for the appropriate colors found through our research.

Joy → Yellow, Anger → Red, Disgust → Purple, Sad → Blue, Surprise → Light Blue
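This mapping can be sketched as a small lookup table. The hex values and function names below are illustrative placeholders, not the exact values hard-coded in our site:

```javascript
// Illustrative emotion-to-color map (hex values are approximations of the
// colors named above; the real ones came from our color-wheel research).
const EMOTION_COLORS = {
  joy: "#FFD700",      // yellow: "captures the joy of sunshine"
  anger: "#FF0000",    // red
  disgust: "#800080",  // purple
  sadness: "#0000FF",  // blue
  surprise: "#ADD8E6", // light blue
};

// Fall back to joy's color when the emotion is unknown or untracked,
// matching the site's default-to-happy behavior.
function colorForEmotion(emotion) {
  return EMOTION_COLORS[emotion] || EMOTION_COLORS.joy;
}
```

Applying the color is then a one-liner along the lines of `document.body.style.backgroundColor = colorForEmotion(maxEmotion);`.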

Our Site

Team members began by building the site based on source code from this Affectiva site (mentioned during the initial user testing phase). The team eliminated the video of the user, the output of the raw data (except the emotion data), and the log messages. From here, team members began writing code to capture the raw JSON data from Affectiva so we could parse it in real time. Eventually, our team was able to successfully capture this data and save the top emotion to a variable.

Max emotion found, excluding four emotions the site will not track (as explained in ‘User Feedback’)

Next, team members added buttons on the site to generate GIFs, memes, and quotes. Upon clicking one of these buttons, a function named findMaxEmotion (shown above) is called, which uses the Affectiva service to return the top emotion as a string.
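A minimal sketch of that function, assuming Affectiva’s per-emotion scores arrive as a plain object of numbers from 0 to 100 (the exact variable names in our code differ):

```javascript
// Only the five emotions the site tracks; fear, contempt, and the other
// metrics Affectiva reports are deliberately excluded (see 'User Feedback').
const TRACKED = ["joy", "sadness", "disgust", "surprise", "anger"];

// Return the tracked emotion with the highest score, defaulting to joy
// when nothing registers (the site's default-to-happy behavior).
function findMaxEmotion(emotionScores) {
  let maxEmotion = "joy";
  let maxScore = 0;
  for (const emotion of TRACKED) {
    const score = emotionScores[emotion] || 0;
    if (score > maxScore) {
      maxScore = score;
      maxEmotion = emotion;
    }
  }
  return maxEmotion;
}
```

For example, `findMaxEmotion({ joy: 2, sadness: 87, fear: 99 })` returns `"sadness"`, because fear is not tracked even though it scored highest.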

“Allowing efficient direct invocation and termination” (Principles of Mixed-Initiative User Interfaces, Eric Horvitz).

The team achieved the above principle by including three buttons labeled Start, Stop, and Reset. The Start button triggers the detector from the Affectiva library to begin analyzing the user’s face. After the detector responds with success, three new buttons emerge: Generate GIF, Generate Meme, and Generate Quote. Once one of these buttons is clicked, a function is triggered that first acquires the max emotion the user is displaying. This emotion is then used to fetch a corresponding GIF, meme, or quote to be displayed on screen.

The initial states of the site.

For GIFs specifically, a “GET” call is made requesting 35 GIFs using the Giphy API, and a JSON response is returned. For this call, the team used the ‘Search’ function of the REST API, supplying the max-emotion string as the query. After parsing the JSON for one random GIF object, we acquire the embed code of that specific GIF. This embed code is then appended to an HTML div within the index.html file via an iframe, and the GIF is displayed to the user. For memes and quotes, we simply pick a random meme or quote from the local directory based on the max emotion and append the image to an HTML div within index.html.
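The GIF flow can be sketched in two small helpers. The function names are ours and `YOUR_API_KEY` is a placeholder for a real Giphy key; the endpoint and parameters follow Giphy’s documented Search API:

```javascript
// Build the Giphy Search URL: query by the max emotion, request 35 results.
function giphySearchUrl(emotion, apiKey) {
  return "https://api.giphy.com/v1/gifs/search" +
         "?q=" + encodeURIComponent(emotion) +
         "&limit=35&api_key=" + apiKey;
}

// Pick one random GIF object out of the parsed JSON response,
// whose `data` field is the array of GIF results.
function pickRandomGif(response) {
  const gifs = response.data;
  return gifs[Math.floor(Math.random() * gifs.length)];
}

// Usage sketch (browser): fetch the URL, pick a GIF, and drop its
// embed_url into an iframe appended to a div in index.html.
// fetch(giphySearchUrl(maxEmotion, "YOUR_API_KEY"))
//   .then(r => r.json())
//   .then(json => { const gif = pickRandomGif(json); /* embed gif.embed_url */ });
```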

Thus, the core of the site was finished, but after further user testing (explained above) the team decided to add background colors that change according to the user’s emotion detected by Affectiva. This was achieved by checking the maxEmotion variable and changing the body’s CSS background color to the respective color (mentioned in ‘User Testing’) whenever a button is clicked.

First prototype on the left, final product on the right.

Final Thoughts

Our team felt that the project reached its goal of promoting wellbeing. This is based on what team members observed on Demo Day and the positive feedback we received. Users liked the simplicity of the website and said the team’s choice of GIFs, memes, and quotes cheered them up.

However, there were a few aspects of the project that the team could have improved on by adding:

  • Real time showing of the name of the user’s emotion from Affectiva
  • Key to clearly show which background colors mapped to which emotions
  • Instructions to explain how the service works and how the user should interact with it

If more time had been allotted to this project, our team would have implemented the above, because it supports the principle of “Employing dialog to resolve key uncertainties” (Principles of Mixed-Initiative User Interfaces, Eric Horvitz). Our project would then be more user friendly, because the user would face fewer uncertainties, which would in turn promote more wellbeing.

Nevertheless, our team is extremely happy with the product and encourages you to use the site to better your day!
