Designing social comparisons to improve performance

Lucas Colusso
Published in HCI & Design at UW · Jun 23, 2016

If you play games, use a Fitbit, or look at your electricity bill, you may have seen a graph or a score that compares your performance to someone else’s. Designers use these comparisons to motivate the people who see them. In this post, I summarize how small text cues or visual changes can make these comparisons more effective. This is a summary of a paper (download link) published at the 2016 CHI Conference!

It is common for people to compare themselves with others, and these comparisons can motivate people to improve themselves. Social comparison information is often shown in health applications, financial tools, and games. For example, the “Hall of Fame” screen in arcade video games typically shows the highest-scoring players. But while comparisons to top-ranked players can motivate people, they can also undermine performance, leading to jealousy or low self-esteem when the gap is too extreme.

For example, consider Lucas, who just got a new physical activity tracker. After using it for a day, he connects to the website and sees that Daniel has WAY more steps than he can imagine achieving, which makes the comparison daunting. In this way, designers may be inadvertently demotivating their users.

People are more motivated by comparisons with others who are close to them with respect to opinions or performance, and comparisons they evaluate positively are more engaging and more likely to trigger action. Therefore, I created and tested a novel type of comparison called closeness to comparison, with two design changes that bring users closer to their comparison targets. The intention is to balance the perceived challenge so that people positively evaluate their chances of success.

To test the usefulness of the closeness to comparison feedback, I asked people to play Flappy Bird and showed them 3 types of feedback: the common version, with a comparison to the leader, and 2 closeness to comparison designs. In this experiment, in which 425 people participated, both feedback strategies improved game performance, but only for experienced gamers. The two strategies are:

1. Target Group Similarity. Compare people to those who are similar to them with respect to experience or other aspects such as location, age, or interests.

2. Visual Representation of Performance. Subtly distort people’s visual feedback so that their performance looks closer to the comparison target’s.

Below, I explain how to apply these insights and the effects they had on players in our game.

1. Target Group Similarity

This design increased experienced players’ scores by an average of 5 points.

Compare people to others who are similar to them, and communicate that similarity to your users. One way of doing this is to label the score shown as coming from someone similar to the user, instead of just presenting an extreme comparison goal. Alternatively, rather than displaying the scores of top achievers, you can simply use the best score of players who are in fact similar to your user. Similarity can mean the same experience level, the same location, age, or gender, shared friends, or other factors associated with the activity shown in the feedback you designed. An important detail of our study: we always used the same scores for comparisons, and the only thing we changed was telling users the other player was similar.
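
As a sketch of how this might look in code, here is a minimal Python example of picking a comparison target from similar players instead of the global leader. The player records and field names here are hypothetical, not from our study:

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    score: int
    experience: str  # e.g. "novice" or "expert"

def similar_comparison_target(user, players):
    """Pick the best score among players who share the user's
    experience level; fall back to the global leader if none match."""
    peers = [p for p in players
             if p.experience == user.experience and p.name != user.name]
    pool = peers or players
    return max(pool, key=lambda p: p.score)

leaderboard = [
    Player("Daniel", 412, "expert"),
    Player("Mina", 57, "novice"),
    Player("Joe", 38, "novice"),
]
me = Player("Lucas", 12, "novice")

target = similar_comparison_target(me, leaderboard)
# Communicate the similarity explicitly in the feedback copy:
print(f"{target.name}, a player at your experience level, scored {target.score}.")
# -> Mina, a player at your experience level, scored 57.
```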

2. Visual Representation of Performance

This design increased experienced players’ scores by an average of 3 points.

Designers can subtly adjust the visual representation of users’ feedback so that their scores appear closer to the comparison score, especially when the user’s score is very distant from the target. In our study, we used a simple log transformation to skew the visual representation of performance.
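
For instance, a progress bar could map scores through a log scale before drawing them. The exact transformation we used is in the paper; the log1p mapping below is only an illustration:

```python
import math

def bar_length(score, target_score, max_pixels=300):
    """Map scores to pixels on a log scale so that a user far behind
    the comparison target still looks visually close to it."""
    scaled = math.log1p(score) / math.log1p(target_score)
    return round(min(scaled, 1.0) * max_pixels)

# Linearly, a score of 12 against a target of 412 would fill ~3% of the
# bar; on the log scale it fills ~43%, a far less daunting gap.
print(bar_length(12, 412))   # 128
print(bar_length(412, 412))  # 300
```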

Both strategies increased the performance of players, but only if they were experienced gamers. We aren’t sure why these changes did not work for inexperienced players; they may have approached the game with different goals, making the feedback less relevant, or they may simply not have cared as much about the game.

Our paper, with detailed information about this study, is available online and was published at CHI’16. If you want to know more, disagree with the paper, or have ideas you want to share, please reach out to Lucas Colusso at colusso@uw.edu.

Lucas Colusso, Ph.D. student in HCDE at the University of Washington.
Read more posts from HCI & Design at UW here!
