Positive Online Communities: observations on supporting youth interactions

The Coding for All project explores better ways to support youth creativity and community online by talking to experts in the field.

Paulina
Berkman Klein Center Collection
Aug 3, 2015


Freedom to express yourself is critical to learning how to communicate with others and productively contribute to conversation. In person, slight verbal missteps and regrettable comments are quickly forgotten and forgiven, but online, words and thoughts are archived for what seems like an eternity. Though we’re more cognizant (and tired) of the internet outrage machine, there’s always the possibility that the opinions we formed as 12-year-olds will follow us.

Although the Children’s Online Privacy Protection Act (COPPA) hasn’t prevented children under 13 from creating accounts and being active online, platforms designed with youth in mind often limit or prevent user interactions altogether. On both Webkinz and AnimalJam, for example, users can only chat “freely” (conversations are heavily filtered and monitored) with parental permission; otherwise, users may only select text from a dropdown menu or pre-approved chat dictionary.

Taken in-game from WebKinz.com; screenshot updated July 21, 2015
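
To make that restriction concrete, here is a minimal sketch (in Python) of how a pre-approved chat dictionary might be enforced. The phrase list and function name are hypothetical illustrations, not either platform’s actual implementation.

```python
# Minimal sketch of a pre-approved ("safe chat") dictionary.
# Phrases and names are hypothetical, not Webkinz's or AnimalJam's real system.

APPROVED_PHRASES = {
    "hi!",
    "want to play a game?",
    "good job!",
    "see you later!",
}

def can_send(message: str, has_parental_permission: bool) -> bool:
    """Allow free-form chat only with parental permission; otherwise,
    restrict users to phrases from the pre-approved dictionary."""
    if has_parental_permission:
        return True  # free chat would still be filtered and monitored downstream
    return message.strip().lower() in APPROVED_PHRASES

print(can_send("Want to play a game?", has_parental_permission=False))   # True
print(can_send("What's your address?", has_parental_permission=False))   # False
```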

Scratch, a creative community for learners around the world, is different. Developed by the MIT Media Lab’s research group Lifelong Kindergarten (LLK) and used primarily by learners ages 8–16, Scratch is both a visual programming language (with drag-and-drop editing) and a community of learners. Not only do users create, collaborate on, and remix Scratch projects, they can also comment on projects and studios (collections of projects) and interact in the forums, which are incredibly active.

https://scratch.mit.edu/

Under a National Science Foundation grant, MIT’s LLK, the Berkman Center for Internet & Society at Harvard University, and the Digital Media and Learning (DML) Hub at UC Irvine have been working together to explore the creation of interest-based pathways into creative computing (such as hip-hop!). As part of our work, we at Berkman have been considering ways to support and facilitate positive and constructive youth conversation online. Although “trolling” and negative online interactions are certainly problematic for all online platforms and Internet users, youth online interactions also present opportunities for education and the fostering of digital citizenship.

Through interviews with experts from a number of areas of practice, including large social media corporations, video game design companies, and nonprofits, several cross-cutting themes and observations have emerged on the topics of youth safety (cyberbullying, suicide prevention, and other mental health concerns), privacy, inclusion of international participants, and content moderation practices. Because the number of experts interviewed and evidence offered are fairly limited, the following are offered as initial observations of best practices, and should not be taken as recommendations from the Center. And though we focused on youth experts, many of these best practices can apply to moderating users of all ages.

General Guiding Principles

Online conversation moderation and facilitation should be comprehensively and strategically designed from the outset, considering not only the ideal community norms and behavior but also the consequences for bad actors and inappropriate behavior. Policies for both community norms and the consequences of negative activity should be clearly outlined for all users; in several cases, platforms encountered unexpected incidents, and when consequences were created and instituted after the fact, users reacted badly, perceiving the situation to be unfair and arbitrary. Thoughtful moderation policies are needed for all platforms, but a number of issues are specific to youth. For example, moderation is often held to a higher standard, and a platform may need to decide whether and how to involve a user’s parent or guardian. And when focusing on younger users, remember that there are specific kinds of emotional and friendship issues, depending on the age of the user.

Decide on core values

Before creating a moderation policy, decide on your core values. How do you want your users to behave with one another? For instance, one interviewee spoke of the balance between freedom of speech and the community’s general comfort level. How do you want to empower your users? If creativity is valued, how will your policies encourage and protect it? Having a concrete set of core values is important, not only as the foundation for additional policies, but also as a benchmark for periodic evaluation of your community and practices.

by Donnie Ray Jones (CC BY 2.0)

Implement carefully but quickly

Many interviewees recounted anecdotes where attempts to make changes to the platform or roll out modifications to privacy policies/terms of service were met with community backlash (Facebook has become notorious for this, and Reddit has also recently met with difficulties). Strive to have good policies in place from the beginning, but if changes do need to occur, getting community buy-in is absolutely essential to ensuring that changes go smoothly.

Collaborate with others

Collaboration among key players in the field, combining efforts in online facilitation and learning from one another, has the potential to be extremely valuable. What are others doing? What has been effective? Partnerships with NGOs and non-profits, such as the National Suicide Prevention Center, can help online platforms develop deep insights into specific issues such as suicide or cyberbullying. Working with other platforms facing similar issues can help reform user behaviors instead of simply redirecting problem users to other platforms. However, identifying and vetting key players may be difficult, particularly for platforms with international user bases.

Transparency is critical

Be as transparent about moderation policies as possible. How do you want users to behave? What kinds of comments are unacceptable? What happens to users when they act inappropriately?

Clearly communicating such policies not only educates users to become better online citizens but also helps to build trusting and respectful relationships. Policies should be communicated on the homepage, and specifics should be pointed to when users have behaved inappropriately, since users can become upset when it’s not clear why they have been banned or their content removed. Informing someone that “this specific comment violated the ‘appropriate language’ section of our terms of service” is more effective than saying “you have violated our terms of service.”

Stages and Types of Online Facilitation

There are primarily two types of facilitation: proactive, where moderators (“mods”) actively screen the platform for inappropriate content as it is submitted, before it becomes visible to the community, and reactive, where moderators respond to posted content flagged (by users or the system) as potentially inappropriate. To create a safe and productive online community, it’s probably necessary to employ both proactive and reactive moderation. Even if a platform has identified freedom of speech as a core value, proactive moderation may still be important. Suicide prevention efforts, for example, are concerned about the media contagion effect; when designing for youth, consider how a user’s mention of suicidal ideation may affect and potentially harm others.
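
As a rough sketch, proactive and reactive moderation can feed the same review pipeline. The names below (review_before_publish, flag, moderation_queue) and the keyword list are hypothetical, not any particular platform’s system.

```python
# Sketch of combining proactive (pre-publication) and reactive (flag-driven)
# moderation into one pipeline. All names and lists are illustrative only.

from dataclasses import dataclass, field

BLOCKED_KEYWORDS = {"examplebadword"}  # placeholder for a real blocklist
moderation_queue = []                  # content awaiting human review

@dataclass
class Post:
    author: str
    text: str
    visible: bool = False
    flags: list = field(default_factory=list)

def review_before_publish(post: Post) -> None:
    """Proactive: screen content before it becomes visible to the community."""
    if any(word in post.text.lower() for word in BLOCKED_KEYWORDS):
        moderation_queue.append(post)  # held back for a moderator's decision
    else:
        post.visible = True            # published immediately

def flag(post: Post, reporter: str, reason: str) -> None:
    """Reactive: a user or automated system flags already-visible content."""
    post.flags.append((reporter, reason))
    moderation_queue.append(post)      # routed to moderators after the fact

post = Post(author="scratcher123", text="check out my new game!")
review_before_publish(post)                     # passes the proactive screen
flag(post, reporter="user456", reason="spam")   # reactive report from a user
```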

There are also three stages at which content/user behavior may be moderated: prevention, detection, and response. Succeeding in one stage minimizes the work needed in the next: a good prevention system decreases the volume of inappropriate content to filter and moderate, making detection easier. A well-oiled and transparent system, in turn, can help promote positive community norms.

“I can’t stress enough how important it is for the entire service and user base to know what your policies are.”
(expert, interviewed April 3, 2015)

Prevention

“Your content policies are the heart of what makes for an effective strategy.”
(expert, interviewed April 3, 2015)

The most important best practice in the prevention stage is the drafting and communication of clear community guidelines. Guidelines should be specific enough to give users concrete rules but general enough to apply across various contexts. Mods should be able to ascertain with reasonable specificity what types of misconduct should be flagged.

“There’s a challenge of how specific and how general you want the policies to be, because everything is context-sensitive. With regard to nudity, you don’t want to allow pornographic images, but you want to encourage artistic self-expression.”
(expert, interviewed March 27, 2015)

Community guidelines should be communicated to users in clear and noticeable forms. Perhaps try posting community guidelines as a stand-alone page, separate from the legal terms of service; while having all the rules in one place may be convenient, few adults - let alone young people - read these policies. You could also be more creative with your explanations: YouTube’s guidelines, for example, are also available as a video.

Remind users of relevant guidelines when contextually necessary. When a user encounters inappropriate content and wants to report it, a platform can direct the user to its guidelines, to make sure that the content is actually a violation.
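
One way to do this is to surface the relevant guideline inside the reporting flow itself. The sketch below is a hypothetical illustration; the report categories and guideline excerpts are placeholders, not real policy text.

```python
# Sketch of showing the matching community guideline at the moment a user
# files a report, so they can check whether the content actually violates it.

GUIDELINE_EXCERPTS = {
    "harassment": "Be respectful: don't insult, threaten, or target other users.",
    "inappropriate_language": "Keep language appropriate for users of all ages.",
    "spam": "Don't post repetitive, off-topic, or promotional content.",
}

def start_report(category: str) -> str:
    """Display the relevant guideline before the report is submitted."""
    excerpt = GUIDELINE_EXCERPTS.get(category, "See our full community guidelines.")
    return f"Before you report: {excerpt}\nDoes this content break this rule?"

print(start_report("harassment"))
```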

Detection

Detecting inappropriate content should be both proactive and reactive. Proactive detection employs technology and mods to actively screen for and discover inappropriate content, but if something slips through, reactive policies enable a platform to deal with the now-visible content. Accurately distinguishing inappropriate from appropriate content is critical, and a combination of technical and manual moderation will work best, since “appropriateness” is often context-dependent. While automated filters and other technical tools work well for keywords or images that are always inappropriate, more context-sensitive content requires human attention. Ensuring that mods are trained to understand both the letter and spirit of community guidelines is necessary for a platform’s policy to have teeth, and transparency about mods’ actions will promote community trust. Mods should also be sensitive to the various emotional (mental health, friendship, and relationship) issues of young users. A large number of mods may also be necessary - at one time, half of the staff at one interviewee’s platform worked as mods.
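
As a minimal sketch of that layered approach, an automated filter can act alone on clear-cut cases while routing context-sensitive terms to a human review queue. The word lists and labels below are illustrative assumptions only.

```python
# Sketch of layered detection: automated filtering for unambiguous content,
# human review for anything context-sensitive. Lists are placeholders.

ALWAYS_BLOCKED = {"exampleslur"}              # unambiguous: auto-removed
CONTEXT_SENSITIVE = {"hate", "kill", "nude"}  # needs a trained mod's judgment

def classify(text: str) -> str:
    words = set(text.lower().split())
    if words & ALWAYS_BLOCKED:
        return "auto_remove"     # the filter can act without human input
    if words & CONTEXT_SENSITIVE:
        return "human_review"    # context decides whether it's appropriate
    return "publish"

print(classify("I hate when my sprite glitches"))  # human_review (harmless in context)
print(classify("nice project!"))                   # publish
```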

“Make users upstanders, not bystanders.”
(expert, interviewed March 27, 2015)

Screenshot from Facebook.com; taken July 20, 2015

Reactive detection may involve finding inappropriate content through user alerts, which let users help enforce community guidelines by monitoring and flagging content and empower them to take ownership of their community. Reporting should be designed to be easy and intuitive, and though some platforms rely on volunteer mods, most experts emphasized employing mods while encouraging generally upstanding user behavior.

Response

“The goal is not to ban people from the service, but to modify their behavior.”
(expert, interviewed April 3, 2015)

There’s no clear-cut best practice for the response stage, primarily because a platform’s responses must be customized to be appropriate to the specific product and its user demographic (for young people, the age of the user and ability to notify parents are critical factors).

There’s a fine balance between proactive moderation and aggressive meddling; a platform should clearly define the specific cases in which it will intervene. Responding to problematic content when it’s not public to the entire community may raise privacy concerns for users (Facebook, for example, scans users’ chats for criminal activity). Handling legal but concerning content such as suicide or self-harm, in particular, requires careful thought on whether and how to deliver helpful resources. In many cases, directing users to third-party resources and experts is preferable to delivering resources directly, because it can be difficult for an individual platform to independently develop high-quality resources. There are, however, some innovative approaches to suicide intervention - on Reddit, some users run suicide intervention subreddits, and Facebook has launched a suicide prevention tool.

Platforms should also respond to user reports in a timely manner by acknowledging receipt of the notice, informing alleged violators of which specific community guidelines have been violated, and giving alleged violators a chance to explain and defend their actions. Where sanctions are imposed on violators, platforms must be able to justify a response by referring to previously created consequences for inappropriate user behavior.
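
Here is a small sketch of what such a response flow might look like: acknowledge the reporter, then tell the alleged violator which specific guideline is at issue and invite a reply, in the spirit of the “appropriate language” example above. The function names and message wording are hypothetical.

```python
# Sketch of a report-response flow: acknowledge receipt, cite the specific
# guideline, and give the alleged violator a chance to respond.

def acknowledge_report(reporter: str) -> str:
    return (f"Thanks, {reporter}. We've received your report and a moderator "
            "will review it shortly.")

def notify_violator(username: str, comment: str, guideline_section: str) -> str:
    return (f"Hi {username}, your comment \"{comment}\" was removed because it "
            f"violated the '{guideline_section}' section of our community "
            "guidelines. If you think this was a mistake, you can reply here "
            "to explain or appeal.")

print(acknowledge_report("user456"))
print(notify_violator("scratcher123", "example comment", "appropriate language"))
```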

We’re all human

Whatever your response system, remember that online interactions are complex human interactions that should be handled thoughtfully and respectfully. Building trust in a community and between a platform and its users can be challenging, but ultimately rewarding.

Thanks go to Inji Jung, Jeremiah Milbauer, Ajay Sundar, Chris Bavitz, Urs Gasser, Mimi Ito, and the Scratch team for their contributions and feedback. For more information on the project, please contact Berkman Fellow Paulina Haduong at phaduong (at) cyber (dot) law (dot) harvard (dot) edu.

