Stop begging social network CEOs for censorship

Sam Betesh
6 min read · Aug 1, 2018


About a week ago, Mr. Zuckerberg went on Kara Swisher’s podcast to… air out his dirty laundry. Or well, to allow Kara to air out his dirty laundry. And on this podcast, the topic of how Facebook has decided to treat fake news came up, with perhaps one of the most emotionally charged examples possible: Holocaust denial.

Zuckerberg explained that when they come across content that contains Holocaust denial, they essentially remove it from all promotional algorithms to curb its spread on the platform.

I found the reaction extremely disturbing. Rather than seeing Twitter explode with conversations about the potential dangers of social network CEOs deciding which speech to promote and which not to, there was an outcry that Zuckerberg wasn't doing enough to censor 'fake news'.

The words extremely disturbing above might read as hyperbolic, especially in this particular context of censoring abhorrent, easily falsifiable claims like 'the Holocaust didn't happen'. But I don't think they are: decisions like this one by Facebook to censor Holocaust denial set a precedent that normalizes a behavior (censorship) that can be put to far less benign use in the future.

Make no mistake, I think fake news is a problem on Facebook, and they do need to take measures to prevent falsehoods from rapidly spreading and influencing elections, but I don't think this is the answer.

You might think this precedent isn't a big deal. It is censorship of things we almost universally agree to be false and damaging, so would a social network ever censor important opinions that are merely contested, things that might actually be true? Well, we don't even need to think up a hypothetical future situation: this is already happening on YouTube. That's what makes this really scary.

About a year ago, the Wall Street Journal put out a piece about ads showing up on ISIS YouTube videos, sending advertisers into a panic about what videos their ads might appear on. In response, YouTube took a number of measures to prevent this from ever happening again, including removing videos that mention terms like "War", "Gun", "Terrorist", and more in their title or description from all promotional algorithms and stripping those videos of the ability to run ads.

At face value, this might seem harmless enough, especially if you're used to watching entertainment, cooking, or gaming videos on YouTube. But I watch a lot of political content on YouTube, and I saw firsthand how this nearly cut the revenue and views of political commentators in half. These political channels provide important commentary on things like our 17-year war in Afghanistan or our military aiding Saudi Arabia in its attacks on Yemen, topics that YouTube will now suppress whenever videos about them are titled accordingly.

And these political commentators on YouTube are more important than ever at a time when mainstream outlets chase sensationalism (which they think helps their side win elections) and follow the orders of network presidents playing games of political chess, probably directed by their senator friends.

Think those are exaggerations? Not in the slightest. Regarding sensationalism, three days ago Adam Johnson of Salon documented that MSNBC mentioned Stormy Daniels in 455 segments over the last year but hadn't mentioned our attacks on Yemen in a single segment during that same period. Regarding network presidents playing political chess rather than doing honest reporting, Ed Schultz dropped a bombshell about three months ago when he said in an interview that, minutes before he was to go live with Bernie Sanders announcing his presidential run on his MSNBC show, MSNBC president Phil Griffin called and told him he would not be going live. Ed Schultz was fired about six weeks later, and he said he believes it was due to his support of Bernie Sanders, whose candidacy many theorize is opposed in Democratic circles for fear of losing centrist voters (I disagree completely and think this attempt to have a 'big tent party' that's center-left instead of just left is the reason many feel the Democrats stand for nothing).

So, I mentioned before that I think Facebook does need to address the problem of fake news. And while I'm not a believer in the idea that you can't point out a problem without having a solution to recommend as well, I actually do have a recommended solution here. I think Facebook and YouTube should take a look at any piece of content that gets flagged enough, or that reaches enough people (I think 1 million views is a good number), and if it's political in nature, they should have fact checkers review the content and write up a report evaluating the claims it makes. Then, below these pieces of content, a warning should appear like this (a rough sketch of the logic follows the warning, for those curious):

WARNING: This content was flagged for review due to claims it contains false statements. Our fact checkers have reviewed the statements made and the evidence we were able to find both supporting and debunking the claims, and we’ve published a report evaluating them here.
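For the technically inclined, here's a minimal sketch of the rule I have in mind. Every name, threshold, and data structure below is a placeholder of my own invention, not anything Facebook or YouTube actually exposes; the point is simply that the trigger is a straightforward check, and the response is a label plus a linked report rather than demotion or deletion.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: all names and thresholds are placeholders I made up
# to show the shape of the rule I'm proposing, not any real platform API.

VIEW_THRESHOLD = 1_000_000   # "reaches enough people" -- the cutoff I suggest above
FLAG_THRESHOLD = 100         # "gets flagged enough" -- an arbitrary placeholder

@dataclass
class Post:
    view_count: int
    flag_count: int
    is_political: bool
    warning: Optional[str] = None  # filled in once fact checkers publish a report

def needs_fact_check(post: Post) -> bool:
    """Queue a post for human review only if it is political AND it has either
    been flagged enough times or reached enough viewers."""
    widely_seen = post.view_count >= VIEW_THRESHOLD
    heavily_flagged = post.flag_count >= FLAG_THRESHOLD
    return post.is_political and (widely_seen or heavily_flagged)

def attach_warning(post: Post, report_url: str) -> None:
    """Leave the post up, un-demoted; just append a warning that links to the
    fact checkers' published evaluation."""
    post.warning = (
        "WARNING: This content was flagged for review due to claims it contains "
        f"false statements. Our fact checkers' report is available here: {report_url}"
    )
```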

I know some readers will have a visceral reaction to this recommendation, thinking that it is not enough. They will likely think plainly false statements, like those amplified by the Russian government (which was likely part of an effort to get Trump elected), must be eradicated entirely. To those people, I beg you to reconsider. We need independent news outlets questioning mainstream narratives that have holes in them.

I understand that the intuitive, visceral reaction to something so horrible and fake being spread is to stop it from being spread. That is probably the prescription I would have come up with as well, had I not become an absolute political junkie over the past four to five years, or perhaps had I not been so closely following YouTube's response to the 'Adpocalypse'.

I'd also like to conclude with something I've been pondering lately, and I'd encourage you to comment below with your thoughts on it:

Facebook, Twitter, and Google are private companies that have every right to censor individuals. But having the right to do so does not make it right. And given the strength of the network effects that protect these businesses, I believe they should perhaps be required by law to uphold some form of equal treatment of speech. That, however, is a more complex question.

Added 8/14/18: About a week after I wrote this, YouTube, Spotify, and Apple all began removing Alex Jones's content from their platforms on the grounds that it is hateful and contains harassment. If this were virtually any other political commentator, I would have to stand by him and say that many political views border on hateful, so beginning to censor that type of content is a dangerous gray area. However, I've long believed Jones actually does incite violence, so I agree with their decisions. The only speech I think warrants removal is speech calling for (or dog-whistling for) violence.

You can see clips I think warrant his removal from platforms here and here, and you can also learn about his two fans who killed two police officers in Las Vegas a few years ago here and a young fan of his who killed 5 people in a mosque shooting several years ago in Canada here.

--


Sam Betesh

I help brands work with influencers and make ads that convert like crazy. Clients include Uber, Airbnb, Fabletics, Imperfect Produce, and more. Ex-YouTuber.