Social Behavior Design

Dan Bayn
User Experience & Behavior Design
Nov 3, 2014


How to bring civility to the social web.

The internet is a cesspool. No, worse, it’s a high school locker room. In certain corners of the internet, antisocial behavior is normative (Gamergate being a recent example). Online, harassment and hate-mongering seem to be the new lingua franca.

Except that there’s nothing new about it. Most computer-mediated communication, from Usenet to Facebook, is designed in a way that creates the best possible conditions for bringing out the worst in all of us. Not intentionally so, but software designers have misunderstood the causes of antisocial behavior for a very long time (Postmes & Spears, 1998).

Arrangements of Social Consequences

Studies by Bernard Guerin (1999) demonstrate that the full spectrum of social behavior, from pro- to anti-, can be triggered by the interaction of two variables: group identification and individual accountability.

Group Identity + Accountability = Prosocial Behavior

Group Identity - Accountability = Antisocial Behavior

Group identity seems to be an amplifier; it increases the rates of both pro- and antisocial behavior. In one study, participants allocated money (tokens) unfairly, favoring members of their randomly assigned group over members of out-groups (Dobbs & Crano, 2001), even when they never interacted with members of their in-group! Apparently, just being assigned to a group is enough to trigger our basest tribalism.

Groups like gender, race, and class are ever-present. They’ll assert themselves in your community, whether you design for them or not.

The good news is this: when participants were told that their allocation choices would be recorded and made visible to members of the out-group, biased behavior disappeared! In Guerin’s studies, accountable group members even exhibited more prosocial behavior than accountable participants acting alone.

Tribalism isn’t always bad, but lack of accountability is never good.

Antisocial Software

Most social software messes things up by shielding users from personal accountability. Online, there aren’t even the mild social consequences you have in other forms of communication: hurt expressions, hanging up the phone, people shouting back at you, etc. Never underestimate the power of a sad face to put the kibosh on rudeness.

Most conventional solutions to the problem of online misbehavior miss the mark because they focus on censoring individual users. They don’t provide any meaningful accountability to the out-group.

Moderators

Since forum moderators punish or censor misbehaving users directly, they create little accountability. Consequently, the censored often respond with anger and accusations of bias. Still inside their in-group bubble, they find it easy to imagine they’re being unfairly targeted by out-group conspiracies.

Downvoting

Sites that allow users to provide both positive and negative feedback on each other’s posts create a perverse arrangement of consequences, one that drives even more antisocial behavior (Cheng et al., 2014). Votes act very much like the tokens in Dobbs & Crano’s experiment, triggering in-group favoritism and tit-for-tat retaliation.

Real Names

In recent years, the presumed panacea has been to replace pseudonymous handles with real names and real faces. This would “rehumanize” the people whose posts and comments were getting trolled, or so the theory goes. It hasn’t worked and the reason should sound familiar by now: the problem was never dehumanized victims, it was lack of accountability for abusers.

Reputation Systems

Getting warmer. Communities that track prosocial behavior, like providing helpful comments in support forums, can really motivate their top contributors. (Yelp is an excellent example.) However, such systems are all about creating individual investment, not public accountability. They could do more to weed out the bad while they’re rewarding the good.

Designing for Accountability

Creating real accountability to out-groups online requires three things:

  1. Tracking metrics for which users should be held accountable.
  2. Making each user’s metrics visible to the community.
  3. Using statistics to detect in-group/out-group dynamics.

The specifics depend on what your community is about. For example, let’s consider a site where users share content and comment on each other’s posts. (It’s out there, I know, but just try to imagine it.) When you read another user’s content, the system can assess your opinion of it based on your response…

  • Endorse — I like this & want to let the author know.
  • Reply — I found this interesting enough to comment.
  • Share — I think other people should see this.
  • Mute/Block — I don’t want to see this content.

If a user tends to post hostile, vacuous, or otherwise undesirable comments, they’ll be more likely to get muted than endorsed. The system can turn that ratio into a visualization and make it part of each user’s public profile. Users with particularly skewed ratios should be flagged and the visibility of their content reduced.
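
To make that concrete, here’s a minimal sketch of the ratio-and-flag step in Python. Everything in it is illustrative: the UserFeedback schema, the smoothing prior, and the flagging threshold are assumptions for the example, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class UserFeedback:
    """Aggregated feedback a user's posts have received (hypothetical schema)."""
    user_id: str
    endorses: int
    mutes: int

def mute_share(fb: UserFeedback, prior: int = 5) -> float:
    """Share of feedback that is negative, smoothed with a small prior so
    brand-new users aren't flagged on a handful of responses."""
    total = fb.endorses + fb.mutes + prior
    return fb.mutes / total

def assess(fb: UserFeedback, flag_threshold: float = 0.6) -> dict:
    """Turn raw counts into the numbers a public profile could display,
    plus a flag that the system might use to reduce content visibility."""
    ratio = mute_share(fb)
    return {
        "user_id": fb.user_id,
        "mute_share": round(ratio, 2),
        "flagged": ratio > flag_threshold,
    }

# Example: a user whose comments get muted far more often than endorsed.
print(assess(UserFeedback("troll_42", endorses=3, mutes=40)))
# {'user_id': 'troll_42', 'mute_share': 0.83, 'flagged': True}
```

The smoothing prior is a design choice: it keeps the public number from swinging wildly for users with very little history, which matters if the metric is going to be displayed on every profile.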

A user with both high positive and high negative feedback, for example, is likely getting endorsed by members of their in-group and muted by members of an out-group. By calculating the tendency of this user’s comments to be endorsed or muted by the same people, the system can detect such social dynamics.
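
One simple way to operationalize that detection, sketched below, assumes the system logs a (responder, reaction) pair for every response to an author’s posts. If most responders are one-sided, and the one-sided responders split into an endorsing camp and a muting camp, the audience is polarized along group lines rather than uniformly pleased or displeased. The function name and return fields are hypothetical.

```python
from collections import defaultdict

def audience_polarization(events):
    """Given (responder_id, reaction) events for one author's posts, where
    reaction is 'endorse' or 'mute', estimate how split the audience is.

    Returns the share of responders who are one-sided (only endorse or only
    mute this author) and, among those, the share who belong to the muting
    camp. A high one-sided share with a roughly even split between camps
    suggests in-group endorsement and out-group muting."""
    per_responder = defaultdict(lambda: {"endorse": 0, "mute": 0})
    for responder, reaction in events:
        per_responder[responder][reaction] += 1

    if not per_responder:
        return {"one_sided_share": 0.0, "muting_camp_share": 0.0}

    one_sided = [
        counts for counts in per_responder.values()
        if counts["endorse"] == 0 or counts["mute"] == 0
    ]
    muting_camp = sum(1 for c in one_sided if c["mute"] > 0)
    return {
        "one_sided_share": len(one_sided) / len(per_responder),
        "muting_camp_share": muting_camp / len(one_sided) if one_sided else 0.0,
    }

# Example: half the audience only endorses, the other half only mutes.
events = [("a", "endorse"), ("a", "endorse"), ("b", "endorse"),
          ("c", "mute"), ("d", "mute"), ("d", "mute")]
print(audience_polarization(events))
# {'one_sided_share': 1.0, 'muting_camp_share': 0.5}
```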

Again, these metrics need to be made visible to the community in order to create any real accountability. This kind of signaling should act as a warning to both the abuser and others. Just don’t let it become a scarlet letter.

Interesting aside: group-detection metrics could even be used to create a “Devil’s Advocate” view of the content within a community. Instead of showing a user the content they’re most likely to like, show them what’s going on in out-groups that they rarely encounter or with whom they tend to disagree. Even better, automatically surface content from out-groups that has received a high proportion of positive feedback from non-members. Social media doesn’t have to be an echo chamber.
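
As a rough illustration of how that surfacing might work, the sketch below assumes each post carries a group label plus separate counts of endorsements and mutes from non-members of that group; all field names and the approval threshold are invented for the example.

```python
def devils_advocate_feed(posts, reader_groups, min_outside_approval=0.7):
    """Pick posts from groups the reader doesn't belong to that were well
    received *outside* their own group (a rough cross-group quality signal).

    Each post is a dict like:
      {"id": ..., "group": ..., "outside_endorses": int, "outside_mutes": int}
    These fields are illustrative, not a real API."""
    feed = []
    for post in posts:
        if post["group"] in reader_groups:
            continue  # skip the reader's own in-groups
        outside_total = post["outside_endorses"] + post["outside_mutes"]
        if outside_total == 0:
            continue  # no cross-group signal yet
        approval = post["outside_endorses"] / outside_total
        if approval >= min_outside_approval:
            feed.append((approval, post["id"]))
    # Most broadly endorsed cross-group content first.
    return [post_id for _, post_id in sorted(feed, reverse=True)]

posts = [
    {"id": "p1", "group": "cats", "outside_endorses": 9, "outside_mutes": 1},
    {"id": "p2", "group": "dogs", "outside_endorses": 2, "outside_mutes": 8},
    {"id": "p3", "group": "dogs", "outside_endorses": 8, "outside_mutes": 2},
]
print(devils_advocate_feed(posts, reader_groups={"cats"}))
# ['p3']
```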

Conclusions

Modern approaches to controlling antisocial behavior online are ineffective, because they’re based on a misunderstanding of its causes. It’s lack of accountability, especially when combined with in-group favoritism, that turns the internet into a locker room.

Effective solutions will have to create arrangements of social consequences that make individuals accountable to out-groups for their antisocial behavior. That can be accomplished by…

  1. Tracking metrics for which users should be held accountable.
  2. Making each user’s metrics visible to the community.
  3. Using statistics to detect in-group/out-group dynamics.

References

Cheng, J., Danescu-Niculescu-Mizil, C., & Leskovec, J. (2014). “How Community Feedback Shapes User Behavior.” In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media.

Dobbs, M. & Crano, W. D. (2001). “Outgroup accountability in the minimal group paradigm: Implications for aversive discrimination and social identity theory.” Personality & Social Psychology Bulletin, 27(3), 355–364.

Guerin, B. (1999). “Social behaviors as determined by different arrangements of social consequences: Social loafing, social facilitation, deindividuation, and a modified social loafing.” Psychological Record, 49(4), 565–578.

Postmes, T. & Spears, R. (1998). “Deindividuation and antinormative behavior: A meta-analysis.” Psychological Bulletin, 123(3), 238–259.

Daniel Bayn is a User Experience Designer with an interest in the psychology of online social behavior.

You can buy his book on Blurb.
