Social media has proven itself an influential player in our society, not just as a communications tool, but increasingly as a primary source of information and decision-making. Because of this, some people have undertaken efforts to ensure that discussions on these platforms are productive, safe, and do not lead to a resurgence of Nazism or thermonuclear war.
Easier said than done.
In this article, I’ll examine a case study that demonstrates the problem, and I’ll propose a solution (half-baked at best) for how to make Twitter a less hateful place.
I’ve chosen Twitter because it’s the platform of choice for Donald Trump — and for yours truly. A less widely known fact is that it’s also the favored platform of journalists due to its simplicity, speed of use, and chronological layout. This is important for our discussion of “newsworthiness.”
Though some may scoff at social media as a frivolous avenue for sharing viral humor, online forums have become far more than that. Many important events first rear their heads on Twitter, making the platform’s mechanics important in determining what information the public sees and what it doesn’t. So to start this exploration, we’ll look at those mechanics.
Let’s go where no man has gone before — into the Twitter Terms and Conditions.
Here is the tweet for our case study.
This tweet from Donald Trump contains his most extreme threat against North Korea. By saying “they won’t be around much longer,” Trump clearly levels a threat against the entire nation of North Korea, and a credible one, considering Trump is one of only a handful of people on Earth actually capable of making good on it.
After this tweet, anti-massacre activists (read: non-sociopaths) began asking whether threatening to kill 25 million people constituted a violation of Twitter’s terms of service. And at a glance, it did.
Twitter has extensive provisions against hate and abuse on its site — likely more extensive than most people know.
With regard to unlawful use, Twitter’s Rules and Policies say:
You may not use our service for any unlawful purposes or in furtherance of illegal activities. By using Twitter, you agree to comply with all applicable laws governing your online conduct and content.
Twitter also has a hateful conduct policy:
You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. Read more about our hateful conduct policy.
But most relevant to Trump’s threat is Twitter’s anti-violence clause:
You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people. This includes, but is not limited to, threatening or promoting terrorism. You also may not affiliate with organizations that — whether by their own statements or activity both on and off the platform — use or promote violence against civilians to further their causes.
So far, Twitter is pretty clear that credibly threatening or declaring war is not permitted on their site. And if you or I were to do it, the result would be the deletion of our account — and hopefully a knock on the door from law enforcement.
But our case study wasn’t just a tweet from anyone, it was from the President of the United States.
And that’s where Twitter’s “newsworthy” loophole comes in.
In Twitter’s Rules and Policies, the site says that in the case of abusive behaviors, they consider whether a message is “newsworthy and in the legitimate public interest” before they decide whether to delete it or not.
If the message is newsworthy, or if Twitter believes it provides an important public record of an event or statement, then it can remain — even if it violates other terms and conditions. This is, at first blush, reasonable. But there are some glaring problems with the metric of newsworthiness.
The first problem is, what is “news,” and what makes something worthy of being it?
Personally, I’m a reporter and semi-public figure in a small town, and if I were to say something hateful, it would be newsworthy and shocking to my 270 Twitter followers. However, on a national scale, my tweets are completely irrelevant — to my own dismay.
If I suddenly founded a local chapter of the KKK, it would be newsworthy to my town, but Twitter doesn’t know that nor does it care.
Another problem is that the newsworthy standard gives more power to hegemonic forces. It creates a “power begets privilege” scenario because powerful people are more likely to generate news and therefore have more leeway in what they say.
Social media is loved for offering people of all social statures a roughly level playing field, at least to some degree. But by giving more freedom to people who are “newsworthy,” that parity is broken.
Another, more obvious problem is that all of these decisions about newsworthiness must be made by Twitter, and users have to trust that Twitter can insert itself into a conversation as an unbiased mediator to decide whether what someone has to say is valuable or not.
Can Twitter be trusted with this? What recourse is available if they can’t be? Where does the buck stop in an appeal process? Will a person with more influence be more likely to get their appeal heard than a person of less influence?
Who is watching the watchers?
And what about tweets that share hateful information but aren’t necessarily hateful themselves? For instance, would Twitter permit the sharing of Nazi propaganda if the sharer were providing it for academic purposes? What if the sharer is known to be a far-right academic (yes, they exist) and it was unclear whether the sharer was endorsing the views or not?
There are a million complexities to the issue of what is newsworthy and in the public interest, and a perfect answer is impossible — which is probably why Twitter’s CEO, Jack Dorsey, has been vague on the matter.
Dorsey hasn’t provided specifics on how he would treat hate speech from important individuals. In an interview with HuffPo’s Ashley Feinberg, the following discussion took place.
Ashley Feinberg: So then is there anything that, say, Donald Trump could do that would qualify as a misuse? Because I know the newsworthy aspect of it outweighs a lot of that. But is there anything that he could do that would qualify as misusing the platform, regardless of newsworthiness?
Jack Dorsey: Yeah, I mean, we’ve talked about this a lot, so I’m not going to rehash it. We believe it’s important that the world sees how global leaders think and how they act. And we think the conversation that ensues around that is critical.
Ashley Feinberg: OK, but if Trump tweeted out asking each of his followers to murder one journalist, would you remove him?
Jack Dorsey: That would be a violent threat. We’d definitely … You know we’re in constant communication with all governments around the world. So we’d certainly talk about it.
Feinberg: OK, but if he did that, would that be grounds to —
Jack Dorsey: I’m not going to talk about particulars. We’ve established protocol, it’s transparent. It’s out there for everyone to read. We have, independent of the U.S. president, we have conversations with all governments. It’s not just limited to this one.
Dorsey seems to be saying he’d take each high-profile issue on a case-by-case basis, which means the decision-making process will likely happen behind closed doors, metaphorically speaking.
Now, forgive me as I take a brief detour to discuss another flaw in Twitter’s newsworthiness plan: stochastic terrorism.
Stochastic terrorism: The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random (Wikipedia)
The actual act of stochastic terrorism is implicit, and may not even be punishable, but it can be deadly.
Consider, for example, the following tweet by the President depicting a younger version of himself pummeling a satirical personification of CNN.
By combining this tweet with other anti-media statements from Trump, a textbook picture of Trump as a stochastic terrorist develops, and it came to fruition when pipe bombs were sent to CNN and other Trump critics.
While stochastic terrorism doesn’t have to be committed in a newsworthy fashion, newsworthiness certainly makes it easier to reach extremists. By declining to remove tweets because they are newsworthy, Twitter remains a vessel for stochastic terrorism.
The problem of moderating such a massive online space is enormously complex, and by adding a “newsworthiness” clause, Twitter opens itself up to a wide array of subjectivity and moral ambiguity.
Fortunately, I have developed two humble suggestions to help Twitter stick more closely to its own stated goals.
Twitter’s stated goals with regard to “newsworthiness” are: preserving the site’s status as a place for news to break, and maintaining a public record of statements and events.
For the purposes of my solutions, I’ll be making my proposals assuming that these are good goals, though as you’ve seen, whether they are or not is in question.
When the President tweets something, it is undeniably newsworthy and an important piece of public record. While there are other ways the President could share the information they wish to share, the fact is, Twitter is an easy place for them to do so and it is an easy place for the media and the public to access it.
But while the President’s tweet is newsworthy, the fact that “MAGA_lover361” or any other of the President’s supporters retweeted it is not. I argue that, when a tweet by the President is considered both newsworthy and abusive or threatening, the tweet shouldn’t be deleted, but it should be impossible to retweet or favorite.
The exception to this rule is when the retweet (or response, or quoted tweet, or favorite) itself is newsworthy, such as when the Speaker of the House responds to the President via Twitter.
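To make the rule concrete, here is a minimal sketch of the interaction check it implies. All of the names here (`User`, `Tweet`, `may_retweet`, the `newsworthy` and `abusive` flags) are my own inventions for illustration, not Twitter’s actual API:

```python
from dataclasses import dataclass

@dataclass
class User:
    handle: str
    newsworthy: bool  # e.g., a head of state or other public official

@dataclass
class Tweet:
    author: User
    text: str
    abusive: bool  # flagged as violating the abuse/violence rules

def may_retweet(tweet: Tweet, retweeter: User) -> bool:
    """A newsworthy-but-abusive tweet stays visible, but ordinary accounts
    cannot retweet or favorite it; only accounts that are themselves
    newsworthy (e.g., the Speaker of the House) may interact with it."""
    if tweet.abusive and tweet.author.newsworthy:
        return retweeter.newsworthy
    return True
```

Under this sketch, an abusive presidential tweet remains on the record, but “MAGA_lover361” gets a greyed-out retweet button, while the Speaker of the House does not.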
Now, to maintain a public record, Twitter must ensure that tweets remain visible. Twitter users currently have the ability to delete their own tweets, thereby removing them from the public’s eye. This doesn’t serve Twitter’s goals.
Therefore, I suggest that Twitter users who regularly share newsworthy content should be unable to delete tweets more than 12 hours old. This gives users adequate time to take something back over a misspelling or a small factual inaccuracy, but it doesn’t permit them to substantially “rewrite history.”
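The 12-hour window amounts to a simple timestamp check. This is a sketch under my own assumptions (Twitter’s real systems are obviously more involved, and `may_delete` and its parameters are hypothetical names):

```python
from datetime import datetime, timedelta

DELETE_WINDOW = timedelta(hours=12)

def may_delete(posted_at: datetime, now: datetime,
               author_is_influencer: bool) -> bool:
    """Ordinary users may delete any of their tweets. Accounts that
    regularly share newsworthy content lose that ability once 12 hours
    pass, preserving the public record while still allowing quick
    corrections of typos and small factual errors."""
    if not author_is_influencer:
        return True
    return now - posted_at <= DELETE_WINDOW
```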
But who are these users who are likely to share newsworthy material?
First, they must be verified. This in itself drastically limits the number of people who count as newsworthy. But if Twitter wanted to go a step further, it could add an “influencer” badge to an account.
The “influencer” badge would allow the account to tweet things that violate Twitter’s terms and conditions and would allow the user to interact with other such tweets. But it would also make the account unable to delete tweets over 12 hours old and unable to gain retweets or likes on abusive content they put out.
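Put together, the badge amounts to a bundle of permissions: more latitude on content, less latitude on deletion and amplification. Again, a hypothetical sketch with invented names:

```python
def influencer_permissions() -> dict:
    """The trade-off the 'influencer' badge represents, as a flat
    permission map: broader speech rights in exchange for a durable,
    non-amplifiable public record."""
    return {
        "may_post_newsworthy_rule_violations": True,
        "may_interact_with_flagged_newsworthy_tweets": True,
        "may_delete_tweets_older_than_12h": False,
        "abusive_tweets_may_gain_retweets_or_likes": False,
    }
```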
This wouldn’t stop the abusive content from making it into the public’s eye. It would still be widely shared through the news (if it is truly newsworthy) and in meme form on other social media. However, it would drastically decrease how many people see the abusive content and would therefore disincentivize users from sharing it.
These solutions aren’t perfect. Not even close. They don’t address screenshotting, how to handle shared links, or what happens when a Twitter “influencer” deletes their entire account. These are rough drafts of a solution at best.
Also, my solutions further skew Twitter’s playing field by dividing all users into “newsworthy” (influencers) and “not newsworthy” (me), though in doing so, they actually restrict the freedom of influencers by imposing more constraints on their conduct than currently exist.
My solutions aren’t perfect. But they are a step closer towards Twitter’s stated goals.
Social media will always be a reflection of our society. As long as hate exists in the tangible world, I’ll be writing about hate in the virtual world. No terms and conditions, and no account verification, will be able to wipe out hate from society.
But we can help to stop its spread — and maybe make Twitter a happier place while we’re at it.
Ben Chapman is a reporter and commentator in Illinois. He is a student in Food Science and Human Nutrition and ran for his local County Board in 2018.