Designing to Stop Cyberbullying: Future Directions
Where can we go from here?
This is the second part of a two-part series on how design can help bystanders stop cyberbullying. You can read Part 1 here.
Today’s top social media sites work to stop cyberbullying with a combination of prevention and intervention tools. They provide resources to educate users and empower bystanders to intervene against aggression. But what else can be done? In this post, we’ll discuss how social media sites can further empower bystanders through user-centered design interventions.
Designers have already created new online tools and applications to help prevent cyberbullying. Some are turning to machine learning techniques, training computer programs to detect certain words or behaviors, such as bullying language on social media.
One research team applied this technique to posts on Instagram. First, they asked human participants to decide whether images and their captions were benign or hurtful. Then they used the features that the humans found to teach their program to distinguish regular posts from aggressive ones.
Right now, social media sites don’t remove aggressive posts unless they’re flagged or reported. But they’ve spent years recording which posts actually do get reported. They could treat these reports just like the Instagram researchers treated their benign/hurtful categories. That is, they could use them to help automatically decide whether posts are aggressive or not. This could help sites stop bullying without waiting for users’ reports.
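As an illustration, this kind of pipeline can be sketched in a few lines. Everything below is hypothetical: the posts, the report labels, and the choice of TF-IDF features with logistic regression are invented stand-ins, not the actual approach of any site or of the Instagram researchers.

```python
# Hypothetical sketch: treating historical user reports as training labels
# for a text classifier. All posts and labels below are invented examples,
# not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each post is paired with whether users reported it (1) or not (0).
posts = [
    "congrats on the win, what a great game",
    "nobody likes you, just leave",
    "great photo from the trip!",
    "you are so stupid and ugly",
    "happy birthday, have an amazing day",
    "everyone hates you, loser",
]
reported = [0, 1, 0, 1, 0, 1]

# TF-IDF features plus logistic regression: a minimal aggression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, reported)

# Score a new post before anyone has had to report it.
print(model.predict(["you are such a loser"])[0])
```

A real system would train on millions of reports and weigh false positives carefully, but the core idea, learning the label “reported” from past user behavior, is the same.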
Automated reporting has already been tested in the real world. One group putting these tools to work is the UK-based Samaritans. The organization gives phone or in-person counseling to people suffering from emotional distress or suicidal thoughts. Their Radar app connected with users’ Twitter feeds, scanning them for language related to distress. It then notified users when their friends seemed to be struggling. The goal was to keep bystanders aware of their friends, and help them offer support if necessary.
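At its simplest, a scan like Radar’s could amount to phrase matching. This minimal sketch (with an invented phrase list and invented tweets) shows the idea:

```python
# Hypothetical sketch of the kind of keyword scan an app like Radar might
# run over a feed. The phrase list and tweets are invented examples.
DISTRESS_PHRASES = ["can't go on", "no way out", "hate myself", "want to disappear"]

def flag_distress(tweets):
    """Return the tweets containing any distress-related phrase."""
    return [t for t in tweets
            if any(phrase in t.lower() for phrase in DISTRESS_PHRASES)]

feed = [
    "Great game tonight!",
    "I hate myself for missing that deadline",
]
print(flag_distress(feed))
```

Note that the second tweet gets flagged even though it is ordinary venting; naive matching like this can’t tell a genuine cry for help from a figure of speech.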
The technology was a success, but the Samaritans shut the app down just a week after its release. Radar was good at suggesting that users respond to others’ tweets — too good. Users found that the app had shown their angry or sad tweets to people who hadn’t been the intended audience. They also found that it alerted their friends to tweets that were just venting feelings, rather than actual requests for help. People even received alerts to act on tweets sent by strangers.
Despite their good intentions, the Samaritans failed to take into account how people actually use Twitter. Radar’s designers forgot that not everyone we follow, or who follows us, is a friend. Even if we only follow friends, we might not want them alerted to our every mundane online complaint. Users criticized Radar for invading privacy and violating norms of Twitter use. These criticisms, whether warranted or not, got the app shut down.
Failure or no, the Radar project shows us that prompting bystanders to intervene online is possible. But making these tools work requires that designers keep users and platform norms in mind. If intervention-promoting technologies can’t put users’ needs first, they’re destined to fail.
Co-Designing for Feedback
Recent design ideas have taken a more user-centered approach to helping bystanders intervene. This means putting users’ needs first when coming up with new ideas, and considering social and site norms before launching new tools or apps to large audiences.
A team of researchers at the University of Maryland, led by Zahra Ashktorab and Jessica Vitak, is doing just that. They realized that online bullying among teens often involves complex and nuanced social interactions. So they sought help from some experts: teen social media users themselves. The researchers worked with groups of high school students to come up with new ways to stop cyberbullying.
The student designers saw that many bystanders fail to report bad behavior at all. They think sites don’t take reports seriously or won’t act on them. And when users do intervene for others, they can’t be sure whether those people get the help they need.
To solve this problem, the teenage design teams came up with an intervention idea called “Reporting Bullies With Feedback.” This feature would send action updates to users who report bullying and assure them that the site is showing ongoing care for victims. “When [users] alert the site about negative content, they want to be notified not only of the abuse, but also receive feedback about how the situation was being handled and the additional information about the victim post-abuse,” say the researchers.
A few major social media sites are starting to add features like this one, signaling to users that they’re committed to taking action. This trend could catch on, helping both victims and bystanders take action themselves.
Crucially, this idea’s teenage creators put the needs of users first. They didn’t just come up with a tool (like the Samaritans’ Radar) that’s useful in theory but disastrous in practice. They thought of a way to help people help others that doesn’t violate the norms of social media use. This user-centered way of thinking is vital when designing tools for stopping cyberbullying.
We’ve heard a lot of buzz about building new apps and tools to help bystanders stop cyberbullying. But what if bystanders could get help from social media features that already exist?
One feature that designers could tap to counteract cyberbullying is Facebook’s new “reactions” feature. This expansion of the “like” button lets users show love, excitement, laughter, sadness, and anger. It now takes just one click for anyone to show their emotional response to something they see. That’s exactly what could make it useful to both Facebook and its users in the fight against online aggression.
It’s easy to see how Facebook could use the “reaction” information to capture people’s responses to negative content. This could help Facebook better understand how readers feel about certain kinds of posts. It could even help Facebook detect and block negative posts before they’re sent.
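For example, a site could flag posts whose reactions skew heavily negative. This is a hypothetical sketch; the threshold values, function name, and reaction labels are invented for illustration, not Facebook’s actual logic.

```python
# Hypothetical sketch: using aggregate reaction counts as a signal that a
# post may be aggressive. Thresholds and reaction names are invented.
def looks_aggressive(reactions, min_total=10, negative_share=0.6):
    """Flag a post when sad/angry reactions dominate its responses."""
    total = sum(reactions.values())
    if total < min_total:
        return False  # too few reactions to judge
    negative = reactions.get("sad", 0) + reactions.get("angry", 0)
    return negative / total >= negative_share

post_reactions = {"like": 2, "love": 1, "sad": 8, "angry": 9}
print(looks_aggressive(post_reactions))  # → True
```

The `min_total` floor keeps a single sad reaction from flagging a harmless post; only a clear pattern of negative responses would trigger review.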
The reactions feature is a great way for Facebook users to express their feelings about posts they see. The fact that it’s so easy to use is important, too. Standing up to online bullies can be difficult or risky, but the negative reaction buttons, sadness and anger, offer an easy, safe way to show disapproval of aggressive content. The positive buttons are also great for showing support to victims.
Features of social media sites like Facebook’s Reactions can help both the sites and their readers deal with online bullying. Using existing features instead of creating new ones has advantages, too. It’s cheaper for the sites themselves to adapt features to new uses than it is to create new features. And site users won’t have to adapt to using something completely new when they want to help stop an online bully.
What have we learned?
Today’s social media sites each have their own tools and resources to combat cyberbullying. Facebook, Instagram and Twitter give users support for dealing with online aggressors. They also have opportunities to report negative content. But these sites still fall short when it comes to helping bystanders intervene against cyberbullying.
We believe there are ways that design can help stop online aggression. Adding automated alerts for negative posts can help increase awareness of bullying or online distress. Better feedback from sites can encourage users to report aggression or harassment. Finally, existing designs can help support low-risk interventions.
As we’ve seen, designing to help bystanders takes careful planning. It also requires sensitivity for the ways people use social media. Still, there’s no shortage of ways to empower bystanders to stand up against online bullying.
Want to know what social media sites are doing right now to help bystanders intervene to stop cyberbullying? Check out Part 1 of this series here.
Social Media Lab graduate student Franccesca Kazerooni and undergraduate Research Assistants Dani Boris, Jordan Jackson, and Olivia Wherry contributed to this post.