Bringing the “Batphone” to Online Communities to Boost Trust & Safety
Following my post on “Community Mantras” as a tool for reinforcing positive etiquette within online social networks, I’ve been thinking about what else could be done to foster safe spaces for online identity, discourse, discovery, and commerce. Making users, members, or ecosystem participants aware of what is expected of them is the first step, but I believe strongly that social platforms (including online marketplaces) need to do more to enable users to act when they see something that does not feel right. As soon as our digital infrastructure can support the throughput, we need to give users access to the proverbial “Batphone.”
Extending Community Mantras’ Power Beyond Mental Health
On the New York City subway, a mass-market melting pot of people from all walks of life (akin to scaled digital platforms like Facebook, Twitter, Reddit, Pinterest, and Snap), Community Mantras are ubiquitous. Much as I proposed that social networks use a portion of their scarce & monetizable screen real estate to pronounce their behavioral expectations, NYC’s MTA uses inventory that could otherwise be sold for prominent ads to drive awareness of its own expectations.

They call the effort “Courtesy Counts, Manners Make a Better Ride,” and it is made up of do’s and don’ts displayed on posters throughout the system.

These posters are often accompanied by short audio reminders delivered over the public address system, underscoring the same values. While the notices do not introduce so much friction into the subway experience as to become a nuisance, the MTA cares enough about reinforcing the right behavior that it makes the messaging unavoidable, and therefore likely effective.* If these Community Mantras do indeed work, it’d be great to see online communities and platforms explore similar approaches.
Bringing Community Mantras Online
Peer-to-peer marketplaces are a flavor of social platform for which trust and safety are essential, as they often bring together complete strangers. Whether you are buying and selling tickets (ex: StubHub), walking someone else’s dog (ex: Wag), finding suitable dates (ex: Tinder), or hiring a babysitter (ex: Sittercity), it’s crucial that users feel confident when opening their wallets, homes, or hearts. One way platforms address this in part is by sharing “reputation scores” or “user reviews,” which give users considering new transactions the benefit of previous users’ experiences. But Community Mantras feel like a still underexplored tactic.
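To make the “reputation score” idea a bit more concrete, here is a minimal TypeScript sketch of one common way such a score can be computed: a damped (Bayesian) average that blends a user’s own ratings with a platform-wide prior, so a handful of perfect reviews can’t outrank a long, strong track record. The function name and prior values are purely illustrative assumptions, not any marketplace’s actual formula.

```typescript
interface Review {
  rating: number; // 1–5 stars left by a previous counterparty
}

// Damped (Bayesian) average: blend a user's own reviews with a platform-wide
// prior so a few perfect ratings can't outrank a long track record.
// `priorMean` and `priorWeight` are illustrative defaults, not real platform values.
function reputationScore(
  reviews: Review[],
  priorMean = 4.2, // assumed platform-wide average rating
  priorWeight = 20, // how many "phantom" reviews the prior counts for
): number {
  const sum = reviews.reduce((acc, r) => acc + r.rating, 0);
  return (priorWeight * priorMean + sum) / (priorWeight + reviews.length);
}

// Three perfect reviews (~4.30) rank below 300 reviews averaging 4.8 (~4.76).
console.log(reputationScore([{ rating: 5 }, { rating: 5 }, { rating: 5 }]));
console.log(reputationScore(Array.from({ length: 300 }, () => ({ rating: 4.8 }))));
```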

On the heels of its Dec 2019 Safety Report, Uber appears to agree. For a company that has facilitated 10 billion rides in its history, it’s not surprising that some terrible things have happened across its rider & driver user bases. But for the Uber platform to inspire confidence among its current & future users that they should be able to ride with comfort and without concern, the company knows that it needs to do everything it can going forward to make clear to all members of its ecosystem what is expected of them (and what will not be tolerated). On opening Uber’s rider app this morning, I was pleased to see the company present the “Uber Community Guidelines” screen (pictured left) by default before I could take any action.
To continue engaging with the Uber community via the app, users are asked to attest to the marketplace’s Guidelines, explicitly stating that they “understand” the rules. As with the MTA, there’s an opportunity cost to the real estate used to hammer home this message (in Uber’s case, it may be a small hit to conversion rather than forgone advertising revenue). While Community Mantras like Uber’s are likely a necessary tool in cultivating kindness, they’re clearly not sufficient for facilitating trust and safety at scale.
“If You See Something, Say Something”
Perhaps the most wide-reaching & broadly known Community Mantra in the United States is the Department of Homeland Security’s (DHS) 2010 “If you see something, say something” campaign. The premise is simple. You can read the fine print of what is and is not an indication of terrorist-like activity, but one of the most powerful primordial mantras we all have is our common sense. Key to the DHS’s initiative is its coupling of the observation of suspicious activity with action. If something does not look or feel right, lean into your instincts and call law enforcement (a.k.a. 911). Simply recognizing bad behavior is not enough.
Recent personal experiences with suspicious online activity have had me thinking about the way social platforms handle user-generated reports of abnormal behavior. No doubt, community moderation is incredibly difficult. Look no further than Facebook, one of the premier technology companies in the world, pledging to hire 10,000 new moderators by the end of 2018 to sanitize its community. If Facebook needs humans to solve the problem (vs. doing it via algorithms, which it would likely be best positioned to develop given its engineering talent & capital surplus), such a task would be all the more daunting for smaller companies. It’s also an area loaded with content that is difficult for any on-staff review team to consume (see the Guardian’s coverage here of FB’s efforts) and often full of ambiguity as to whether or not terms of service have been broken (see Kara Swisher’s plea to have Trump removed from Twitter for violating its terms of service).
That said, if the leading online platforms want to make community safety a bigger priority, they need to make it easier to report suspicious activity and then follow up with clear messaging to the reporting user about what results to expect. Anyone submitting concerns about other users’ behavior will naturally want to know things like the following (a rough sketch of what such a case-status record could look like follows this list):
- Will platforms share if & when complaints have been officially received and resolved?
- Within what timeline should reporting users expect a case to be resolved?
- Will platforms relay whether the reported behavior was ultimately deemed bullying?
- Will platforms reach out to relevant users to make sure their mental health is not a cause for concern?
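To ground those questions, here is a minimal TypeScript sketch of the kind of case-status record a platform could surface back to the reporting user at each stage. Every name and field below is a hypothetical illustration of the feedback loop, not any platform’s actual API.

```typescript
// Hypothetical case-status record sent back to the person who filed a report.
type CaseStatus = "received" | "under_review" | "resolved" | "dismissed";

interface ReportCaseUpdate {
  caseId: string;
  status: CaseStatus;
  receivedAt: Date; // confirms the complaint was officially logged
  expectedResolutionBy?: Date; // sets a timeline expectation up front
  outcome?: {
    violationFound: boolean; // e.g., was the behavior ultimately deemed bullying?
    actionTaken?: "warning" | "content_removed" | "account_suspended";
  };
  wellnessCheckOffered: boolean; // were affected users pointed to support resources?
}

// Example: the acknowledgement a reporter might see immediately after filing.
const acknowledgement: ReportCaseUpdate = {
  caseId: "case-123",
  status: "received",
  receivedAt: new Date(),
  expectedResolutionBy: new Date(Date.now() + 72 * 60 * 60 * 1000), // e.g., 72 hours
  wellnessCheckOffered: false,
};
```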
Below is an example of the primary Feed-based user experiences of Facebook, Instagram, and Twitter today. While it’s possible to click through to deeper parts of their products to report suspicious behavior, there are no explicit indications that this is something users are able (and encouraged) to do (ex: dedicated report 🙋🏽‍♀️, flag 🚩, or even Batphone ☎️ buttons).

Enter the Batphone ☎️
It’s hard to know what the right solution is here. Putting a call to action into the core user workflow (as pictured below) may cause such a spike in user reports that they’d be impossible to service.** That said, trust and safety are among the core functions that any online community should offer to attract and retain participants. As most recently seen in the leadership of Airbnb CEO Brian Chesky, who announced that the company will verify all listings on its marketplace, platforms need to do more to avoid potential tragedy. Much as we saw in the wake of Russian meddling in & disinformation efforts around the 2016 U.S. Presidential Election, enhanced moderation needs to occur using a combination of algorithms, peer-to-peer surveillance, and internal moderators.
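As a purely illustrative sketch of how those three signals might be combined (not how any of these platforms actually triages), a simple queueing rule could weigh an algorithmic risk score against the volume of peer reports to decide what internal moderators see first:

```typescript
// Illustrative triage rule combining an automated risk score with peer-report
// volume to prioritize what internal moderators review. All thresholds are
// assumptions for the sake of the example.
interface FlaggedItem {
  id: string;
  modelRiskScore: number; // from an automated classifier, 0 (benign) to 1 (severe)
  peerReportCount: number; // "see something, say something" reports from users
}

function triage(item: FlaggedItem): "auto_remove" | "human_review" | "monitor" {
  // Only the highest-confidence automated calls act without a human in the loop.
  if (item.modelRiskScore > 0.95) return "auto_remove";
  // Moderate model confidence, or corroborating peer reports, goes to a person.
  if (item.modelRiskScore > 0.5 || item.peerReportCount >= 3) return "human_review";
  // Everything else stays up but is watched for further signals.
  return "monitor";
}
```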

If users are properly coached via online forms & simple software to be specific in their complaints, the P2P reporting approach feels like it could offer compelling enough operating leverage to feature “Batphones” more visibly in community-centric products (the modern “call 911”). There may even be downstream benefits. First, it may act as a more effective deterrent, reminding potential perpetrators of the consequences of their contemplated actions. Second, it may be a way to further memorialize the Community Mantras persistently in the user experience: a shorthand not just for the negative behaviors that will not be tolerated, but also for the positive ones that serve as models for how to treat others.
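As a thought experiment on what “coaching via online forms & simple software” could look like, here is a minimal TypeScript sketch of a structured Batphone report with lightweight validation that nudges reporters toward specific, actionable complaints. The categories, field names, and thresholds are all hypothetical.

```typescript
// Hypothetical structured "Batphone" report. Requiring a category, a direct
// link to the offending content, and a minimum amount of detail coaches
// reporters toward specificity and gives moderators something actionable.
type ReportCategory =
  | "harassment"
  | "scam_or_fraud"
  | "violent_threat"
  | "impersonation"
  | "other";

interface BatphoneReport {
  category: ReportCategory;
  contentUrl: string; // direct link to the post, listing, or profile
  description: string; // what happened, in the reporter's own words
  occurredAt?: Date;
}

// Simple client-side validation that rejects vague, unactionable reports.
function validateReport(report: BatphoneReport): string[] {
  const errors: string[] = [];
  if (!report.contentUrl.startsWith("https://")) {
    errors.push("Please link directly to the content you are reporting.");
  }
  if (report.description.trim().length < 40) {
    errors.push("Please describe what happened in at least a sentence or two.");
  }
  if (report.category === "other" && report.description.trim().length < 80) {
    errors.push("For 'other' reports, a bit more detail helps reviewers.");
  }
  return errors; // an empty array means the report is specific enough to file
}
```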
*If readers of this post have data or relevant studies on the efficacy of various ‘Community Mantra’ programs in online and offline environments, I’d love to learn about them.
**I’d also appreciate any learnings from analyses the leading social platforms and marketplaces have done to determine which trust & safety reporting and intervention programs have been the most successful and why.
Thanks to my brother, Graham, for reading a draft of this post

