Social media’s turning point

5 steps needed now, no turning back

Anne Collier
Jun 8

We’re feeling the urgency, right? Maybe it’s just that we’re facing more than we ever have, amid a pandemic, nationwide protests against social injustice and a fraught election year. But all of a sudden, small, incremental, single-platform fixes seem like “band-aids,” symptom treatments.


An example is the Oversight Board that Facebook started. Now its own organization, with its first 20 members, it's only Facebook's brainchild, not its baby. It handles appeals of content decisions on just one (admittedly planet-spanning) social media platform. When I first heard and wrote about it, I saw it as an important social experiment. I still do, but because it addresses "only" content decisions (a massive topic, but still just one piece), it's far from enough to make real headway against platform abuse and the social problems that follow from it all over the world. It's time: Social media needs to be for the social good.

5 needed steps

Before I go any further: Don't get me wrong. I'm not saying social media is the problem. It's a reality of our media environment and everyday lives now. So much good happens on social media, but in a dark time the good rarely cuts through our brains' negativity bias, and it rarely makes the news, because the definition of "news" is the exception to the rule, not the rule. This post is about the exception.

So here’s at least part of the prescription for what we’re seeing right now, at this incredibly challenging point in 2020:

  1. Collaborate at a new level. I know I said that above: Facebook's "appeals court" initiative was groundbreaking, yes, but it's also just a baseline. Twitter's and Snapchat's more recent decisions around misinformation were huge. But working platform by platform isn't enough to effect real change. There's certainly precedent for cross-platform collaboration: the platforms have been working together to address child abuse imagery and terrorism worldwide, the former for years with PhotoDNA, shared image-matching technology created by Microsoft. Though those are complex, seemingly intractable issues, we have an even more fundamental media-related problem to solve….
  2. Focus on the foundational issue: the problem of protecting vulnerable people and freedom of expression at the same time. The Internet, and the unprecedented clash it created between those two needs, challenged U.S. courts all the way up to the Supreme Court for years, starting in 1997. That was when the Supreme Court struck down all but Sect. 230 of the Communications Decency Act, the first law Congress passed to safeguard the Internet's first protected class, children, from online pornography (I remember a legislative aide in Sen. John McCain's office who worked on the legislation later telling me they wrote the CDA to test the Supreme Court). As we have seen, the conundrum now concerns all vulnerable groups, and voters, worldwide. Yesterday The Guardian reported that more than 140 professors and researchers are calling on Facebook to "consider stricter policies on misinformation and incendiary language that harms people." They focused on Facebook because the professors who organized the group and the letter to Mark Zuckerberg had received funding from the Chan Zuckerberg Initiative, but the problem they're referring to is not unique to Facebook. To take this on, three more steps are needed….
  3. Build out the help layer: It's the new "layer" of help people started needing when social media arrived in our lives, the "middle layer" between the platforms in the cloud and users on the ground. Whether Internet helplines such as the one run by Save the Children Denmark, Germany's "deletion center," Australia's Office of the eSafety Commissioner or New Zealand's NetSafe, these entities serve a vital function alongside crisis helplines on the ground and content moderation by platforms and their vendors in the cloud. They help people deal with online abuse, giving users a much-needed form of "customer care" that the platforms can't offer and giving the platforms the offline context they need to act on abuse reports. Users in many countries, including the U.S., don't have access to Internet help services like these, and the platforms need to work together to fill those "holes" in the help layer. As a first step, what's appropriate in terms of Internet help for countries and regions that don't have it will have to be studied; I suggest the platforms fund regional studies by independent bodies made up of researchers, platform representatives and NGOs in geographical areas where Internet help is lacking. Once the appropriate form of entity is identified (NGO, governmental, etc.), the platforms should help fund the service on an ongoing basis; it serves them as well as users by screening abuse reports and reducing "false positives." Finally, a kind of trade association of all the world's mostly nonprofit Internet helplines should be established and supported by the Internet industry, as the UK's and New Zealand's helplines have already discussed.
  4. Grow the Oversight Board. By this I mean all the platforms, at the very least all the global ones (Facebook, Reddit, Snapchat, TikTok, Twitter and YouTube), should participate in the Board's work on appeals of content moderation decisions, as well as in the policy recommendations the Board says it will eventually be making. I suggested this would happen, and now we're told there has been discussion about it. Let's hope the collaboration does happen; I suspect broader participation is the business model of the independent nonprofit entity that supports the Board, and it will be needed once the Board runs through Facebook's $130 million in seed funding. These two forms of user care, help with abusive content in real time through helplines and hearing appeals of content decisions after the fact, are key to addressing the user-care side of the two-part equation of protecting users and free speech simultaneously. So what about the free speech side?…
  5. Commit to continue collaborating & innovating. The work can't stop with the above four steps. Protecting freedom of expression worldwide against those who would chill, censor or exploit it, whether individual or state actors, will take ongoing collective discussion and innovation on the platforms' part. It will take all the above plus algorithms plus social innovation. The platforms will need to establish internal teams to 1) liaise with their industry peers, helplines and the Oversight Board and 2) keep that discussion and innovation in step with changing conditions in media and societies around the world. Call them "Pro-Social Media Teams." With a commitment to make social media for the social good, they will innovate in response to new problems as they arise.

The only way

As for how we got to this critical point where social media is both a source of despair and a source of comfort, pure paradox, see my latest post at NetFamilyNews.org under “Unplanned obsolescence.”

Some people think there's no way the platforms could be part of the solution. I disagree. Nobody has more expertise than they do in just how difficult it is to make decisions about the impacts of content, particularly content from heads of state, and how to make those decisions with integrity, with the aim of minimizing harm to people, free speech and societies. Nobody understands better how to create the tools that support the human beings who make those decisions. They've come a long way; they also have a long way to go. We can't leave it to them, but we need to work with them: "we" being governments, researchers, activists, advocates, helpers and users.

Urgent humility

The need is urgent, but we also need to take the time to work through this conundrum of how to protect people and their rights of expression with a collaborative spirit and in a multi-perspective way, the only way. For that, we need "a wider range of social perspectives," wrote Harvard Kennedy School professor Sheila Jasanoff. We need, as she put it, "the technologies of humility":

For every new technology, we must leave ourselves time to ask how it can best serve humankind. We will find the answers only by remaining critical and by supplementing the forces of government, the market, and ethics with a more humble approach to innovation.

Apply that to Section 230

Section 230, the part of that first Internet law, the Communications Decency Act, that the Supreme Court let stand, is now under greater scrutiny than ever and faces opposition from across the U.S. political spectrum. (When it was created in 1996, "Google.com didn't exist and Mark Zuckerberg was 11 years old," wrote the New York Times's Daisuke Wakabayashi last year.) Some politicians, like President Trump, want to remove the part that protects the platforms from being held liable for the content users create on them; oddly, his Executive Order shows signs of agreement with his arch-rival, Speaker Nancy Pelosi (D-CA), who gave her blessing to the House Judiciary Committee's antitrust subcommittee investigation into whether the platforms have gotten too big.

What is often left out about Sect. 230, strangely including in this New York Times report, is the part of the law that allows the platforms to moderate user-generated content, the part that was written to protect children from "indecency." Deleting that part or repealing Section 230 altogether, as some people are demanding, would mean no content moderation at all. It's hard to imagine how anyone calling for more civility online, more child protection or less political bias would support that. And if the liability protection were deleted, given the number of lawsuits the platforms would face worldwide, it's hard to imagine social media would survive. How much have we, the users, refusers, supporters and critics of social media, thought about what it would be like to roll back media history in that way? Tell me if you disagree, but we, that "wider range of social perspectives" beyond just policymakers and industry, in Dr. Jasanoff's words, really need to think about this. Wouldn't it be better at least to talk about the 5 steps above first?

Disclosure: I serve on the Trust & Safety advisories of Facebook, Snapchat, Twitter, YouTube and Yubo, and the nonprofit organization I founded and run, The Net Safety Collaborative, has received funding from some of these companies.
