Meet AffecText: The new chat system that also conveys the user's emotions

Anmol Singh
Published in Bucknell HCI
Nov 13, 2017 · 9 min read

For this design sprint, we designed a system that lets two users chat with each other. Our chat system uses the webcam to read the user's facial expressions and map them to five emotions: joy, sadness, anger, disgust, and fear. In addition, the chat system highlights the text in the color of the emotion detected while the user is typing. The system also shows the other person's emotion through the same five emotion parameters, along with an emoji portraying the other person's facial expression. Here's a look at the demo video of our chat system, AffecText.

Video 1 — Showing our chat system, AffecText, in action.

Our target audience was our own generation, which grew up surrounded by digital interfaces where emotional cues are easily lost, making it harder to empathize over text. Our system supports wellbeing by promoting better understanding between two people using textual communication along with emotion tracking. In addition, the user also gets to see how exactly they are feeling and becomes self-aware of how their messages might be interpreted.

5-Sheet Design Process

For this design sprint, we used the 5-sheet design approach.

Sheet 1: Brainstorm

We started by brainstorming some ideas. It didn't take long before we encountered our first obstacle: what did 'wellbeing' mean to us? This sparked discussion among the group members. Wellbeing, in our definition, was anything that would improve the user's personal awareness while using our product. If they were 'feeling down', we didn't necessarily aim to make them feel better; that was not within our definition of wellbeing. We instead aimed to make them aware that they were 'feeling down', and viewed this self-awareness as an essential device to encourage wellbeing. Some of our brainstorming ideas are given below.

Figure 1 — Sheet 1: Showing the initial brainstorming ideas.

We first drew inspiration from the color-changing background application provided to us. This seemed like an interesting mood device, but we weren't entirely sure how it could contribute to the user's awareness. Instead, we opted to provide a platform for users to partake in a form of communication that is essentially second nature at this point.

Sheets 2, 3, & 4: Initial Designs

After finishing up brainstorming, we decided to see how these ideas would look on paper. We came up with three initial designs, all of which shared three main components: a webcam view of the user's face, a place to enter the message being sent, and a chat window showing the messages from the user and the person they are chatting with.

Figure 2 — Showing the initial design of Sheet 2.

Sheet 2 shows a very simple design, where the text gets highlighted in the color corresponding to the user's emotion. However, we felt the design was too simple and wasn't doing much.

Figure 3 — Showing the initial design of Sheet 3.

For Sheet 3, we had some extra space to show the background color changing with the user's emotions. In addition, we assigned an emoji to each word being typed, depending on the user's facial expression.

Figure 4 — Showing the initial design of Sheet 4.

Sheet 4 had a very similar idea to Sheet 3, but instead of each word having an emoji, the entire sentence would get one emoji expressing the user's facial emotion. After evaluating the strengths and weaknesses of each design, we moved on to Sheet 5.

Sheet 5: Realization Design

Looking at Sheets 2, 3, & 4, we combined their best ideas and added emotion bars to display emotions, instead of changing the background color. The realization design is given below.

Figure 5 — Showing the realization design in Sheet 5.

This design consisted of a webcam image of the user, a chat area, and emotion bars to show the user's emotions. We also left some space that we planned to fill with the background color of the emotion being detected by Affectiva.

Development

After finishing the design phase, we moved on to developing the realization design and getting feedback from users.

First Iteration

Working from the realization design, we built our first prototype, as shown below.

Figure 6 — Showing our first prototype from realization design.

It was a simple chat system, which showed only the user's face along with all 9 emotion metrics returned by Affectiva. We didn't remove any of them, because we believed that the more information the user had, the better.

Figure 7 — Showing all the 9 emotions: joy, sadness, disgust, contempt, anger, fear, surprise, valence, and engagement.
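For context, these nine metrics come straight out of Affectiva's browser SDK. Below is a minimal sketch assuming the affdex.js CameraDetector API; the container element and the logging are our own illustration, not our actual code:

```typescript
// Sketch only: assumes the affdex.js browser SDK is loaded as a global script.
declare const affdex: any;

const container = document.getElementById("camera") as HTMLDivElement; // hypothetical element
const detector = new affdex.CameraDetector(
  container, 640, 480, affdex.FaceDetectorMode.LARGE_FACES
);
detector.detectAllEmotions(); // request all nine emotion metrics

// Fires once per processed webcam frame.
detector.addEventListener(
  "onImageResultsSuccess",
  (faces: any[], _image: unknown, _timestamp: number) => {
    if (faces.length === 0) return; // no face visible in this frame
    const e = faces[0].emotions; // most metrics range 0-100; valence is signed
    console.log(e.joy, e.sadness, e.disgust, e.contempt, e.anger,
                e.fear, e.surprise, e.valence, e.engagement);
  }
);

detector.start();
```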

Each time the user types a letter, the emotion detected at that moment is saved. When the user finishes typing a word, the system selects the emotion with the highest frequency and highlights the background of the word with the associated color. We decided to show the emotion behind each word because we wanted to convey the user's emotions in as much detail as possible. Showing an emotion for each letter, by contrast, makes a message look fragmented and amplifies the noise in the system tremendously; Affectiva's detection is far from perfect, and we didn't want to depend on its frame-by-frame accuracy too much.
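In code, the per-word logic amounts to a simple majority vote over the emotions sampled at each keystroke. A minimal sketch (the function names and the five-emotion type are our own illustration, not the actual implementation):

```typescript
// Tally the emotion detected at each keystroke, then pick the most
// frequent one when the word is complete.
type Emotion = "joy" | "sadness" | "anger" | "disgust" | "fear";

const counts = new Map<Emotion, number>();

// Called on every keystroke with the emotion Affectiva currently reports.
function recordKeystroke(current: Emotion): void {
  counts.set(current, (counts.get(current) ?? 0) + 1);
}

// Called when the user finishes a word (e.g., types a space);
// returns the winning emotion so the caller can highlight the word.
function finishWord(): Emotion | null {
  let best: Emotion | null = null;
  let bestCount = 0;
  for (const [emotion, count] of counts) {
    if (count > bestCount) {
      best = emotion;
      bestCount = count;
    }
  }
  counts.clear();
  return best;
}
```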

Figure 8 — Showing how a word gets highlighted depending on the emotion of the user.

As you can see in Figure 8, there is a small box under the message input section that shows the resulting highlighting, so users get instant feedback on what the system is doing. We also wanted to make sure that users are well aware of what they are conveying through the system. This gives users the option to discard messages rather than sending them; since some users may be uncomfortable with this parallel channel of communication, we considered this functionality vital.

The right section of the website was left empty, as we hadn't decided what to put there yet. At this point, the colors assigned to the emotions were completely arbitrary.

Initial User Testing

After coding up the first iteration of the system, we conducted user testing and received some helpful feedback. One person commented that there should be a name somewhere to indicate who they were chatting with. A few people complained that the color scheme for the emotions was terrible, which was very much expected since we hadn't worked on it properly yet. Most of our test users asked what the purpose of the system was and what they were supposed to do with it, which indicated that we needed some form of instructions. The rest of the feedback was fairly positive.

Final design

Incorporating some of the user feedback, we came up with the final design as shown below.

Figure 9 — Showing the final design for the chat system, AffecText.

As one can see, we further refined our system. Four metrics were taken out: valence, engagement, surprise, and contempt. Valence and engagement were removed because they aren't emotion indicators; surprise and contempt were removed because they are rarely shown. We ended up keeping just five emotions: joy, sadness, disgust, anger, and fear, as shown below.

Figure 10 — Showing the 5 emotions we decided to keep in our design.

The final iteration of the system consists of a webpage divided into three sections. In the center, a chat window shows the local user’s messages in blue boxes, while messages from other users appear in white boxes. When a message is sent, the system uses Affectiva to detect the prevalent emotion shown by the local user, and highlights the message accordingly. The emotions and colors (see Figure 11) were chosen to match the emotion characters from Disney Pixar’s Inside Out (2015).

Figure 11 — Chat instructions, showing the emotions and their corresponding colors.

By choosing these colors, we hoped to tap into the association of emotions with colors used in Inside Out, both in the sense of the research presumably done by Disney Pixar on natural color/emotion associations, and in the sense of associations created by the film itself.
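In practice, such a mapping is just a small lookup table. A sketch; the hex values below are our own approximations of the Inside Out palette, not necessarily the exact colors we used:

```typescript
type Emotion = "joy" | "sadness" | "anger" | "disgust" | "fear";

// Inside Out-inspired palette; hex values here are illustrative approximations.
const emotionColors: Record<Emotion, string> = {
  joy: "#f9d648",     // Joy's yellow
  sadness: "#4a7ebb", // Sadness's blue
  anger: "#d64545",   // Anger's red
  disgust: "#6ab04c", // Disgust's green
  fear: "#9b59b6",    // Fear's purple
};

// Apply the prevalent emotion's color behind a sent message.
function highlightMessage(bubble: HTMLElement, emotion: Emotion): void {
  bubble.style.backgroundColor = emotionColors[emotion];
}
```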

On the left and right sides of the screen, information about the local user and the other user is shown in a roughly mirrored way. On the left, we chose to show the user's webcam feed, so that the user can tell whether their webcam has a clear view of their face, unobstructed by something like bright light. On the right, however, we chose not to show the other user's webcam feed, due to privacy concerns; instead, we used Affectiva to abstract the other user's face into an emoji, as shown in the figure below.

Figure 12 — Showing the other user’s emotion as an emoji.
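A sketch of that abstraction step, assuming we simply pick the strongest of the five emotion scores and fall back to a neutral face when nothing is pronounced; the threshold, names, and emoji choices are illustrative:

```typescript
// Show an emoji for the other user instead of their webcam feed.
interface EmotionScores {
  joy: number;
  sadness: number;
  anger: number;
  disgust: number;
  fear: number;
}

const emotionEmoji: Record<keyof EmotionScores, string> = {
  joy: "😄",
  sadness: "😢",
  anger: "😠",
  disgust: "🤢",
  fear: "😨",
};

// Pick the strongest emotion; fall back to neutral below the threshold.
function emojiFor(scores: EmotionScores, threshold = 20): string {
  const [top, value] = (Object.entries(scores) as [keyof EmotionScores, number][])
    .reduce((a, b) => (b[1] > a[1] ? b : a));
  return value >= threshold ? emotionEmoji[top] : "😐";
}
```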

On the lower halves of both sides, the values of the five emotions are shown symmetrically for the local user and the other user, respectively.
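Rendering those bars is straightforward; a sketch assuming one div per emotion whose width tracks the score (the element-id scheme is hypothetical):

```typescript
// Resize one bar per emotion; scores are assumed to be 0-100 values.
function updateEmotionBars(
  scores: Record<string, number>,
  side: "local" | "remote"
): void {
  for (const [emotion, value] of Object.entries(scores)) {
    // Expects elements with ids like "local-joy-bar" or "remote-fear-bar".
    const bar = document.getElementById(`${side}-${emotion}-bar`);
    if (bar) bar.style.width = `${Math.min(100, Math.max(0, value))}%`;
  }
}
```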

We decided to switch to highlighting the color behind each whole message instead of each word (the technicality of deciding which color to show was the same as before) for two reasons. First, we realized Affectiva is even noisier than we thought, so message-level highlighting further reduces the error of the system. Second, we realized that even if a person can change their emotions abruptly, they normally won't do so while typing a single message.

Figure 13 — Showing the highlighting of an entire message.

In addition to not showing a user's webcam feed to the other person in the chat, we were also concerned about getting users' consent to access their webcam and analyze their facial data. To that end, we created a splash page that informs the user of what the chat does and asks whether they would like to proceed.

Figure 14 — Showing the splash page with privacy disclaimer.
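Gating the webcam behind that splash page is mostly a matter of not requesting the camera until the user opts in. A sketch using the standard getUserMedia browser API (the element ids are illustrative):

```typescript
// Only trigger the browser's camera permission prompt after explicit consent.
const proceed = document.getElementById("proceed-button") as HTMLButtonElement;

proceed.addEventListener("click", async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    console.log("webcam ready:", stream.id); // hand the stream to the detector here
    document.getElementById("splash")!.hidden = true; // reveal the chat UI
    document.getElementById("chat")!.hidden = false;
  } catch {
    alert("AffecText needs webcam access to detect emotions.");
  }
});
```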

Results and Feedback

From the demo day, we learned that some of our choices had worked well, while other areas could use improvement. In terms of concept and general execution, the majority of users said they liked having a system that could inject emotional data into text conversations. However, some users did not see that our app was attempting to promote wellbeing by facilitating emotionally aware conversation, so perhaps we could have made that connection clearer. Other shortcomings users pointed out were more specific, such as the need to further refine the UI/design and the imperfect nature of Affectiva's emotion detection.

Future improvements

Based on this user feedback, and on our own work and experiences with the project, there are several areas we would improve given more time. Several users expressed a desire to track and display more emotions than just the five we chose to focus on. While this is somewhat limited by the emotions Affectiva is capable of detecting, we could certainly bring back surprise and contempt, which we cut while defining a focus for the project. Another area that could use refinement is message highlighting. To better capture users' emotions, our system could be altered to highlight in real time, and to operate on sentences or sentence fragments rather than on whole messages. Finally, given the positive feedback from users, the scale of the project could be increased beyond two users. Taking this project as a proof of concept, the chat could be built out as a standalone, more polished site, or integrated into an existing chat service.

Conclusion

We created a chat system that allows not only text but also emotions to be communicated, further closing the communication gap that exists in textual interaction. By doing so, we were able to accomplish our goal of designing for wellbeing. The instructions were fairly simple, and users understood the chat system in no time. Our concept was well received by users, who wanted even more out of the system in the future. Given more time, we could further extend the functionality of our product and expand its use cases.

To check out AffecText, click here.
