Why Twitter (And Others) Can’t Be Saved

Dylan Greene
Apr 19, 2019
Photo by geralt at Pixabay

One of the things I think about a lot in our current age of digital discourse is the means by which that discourse is shared. As news coverage has slowly painted a picture of dysfunction in social media and the behavior it encourages, I’ve come to an understanding about the platforms we take for granted:

Most of them are built to encourage bad behavior.

There is some discussion of this, but it’s still quite commonplace to assume that if the right users came along, things would be better.

Not the Right People, Definitely the Wrong Place

The idea undergirding this belief is that social media is a tool and that as a tool it can be used for all sorts of behavior.

But just as one would not normally think to use a wrench to do the job of a screwdriver, these tools are built with a specific function in mind, and their interfaces determine who will use them.

Many of these platforms started off as personal projects that managed to catch on, and while that contributes to the narrative of the underdog beating the odds, the reality is that this has resulted in the creation of platforms that fundamentally fail to understand how human beings work.

Reddit was built with the assumption that the upvote/downvote system would lead to vibrant discussion. Twitter assumes that ease of access will connect people (though to what or to whom is conspicuously left unsaid).

Press releases and interviews only obfuscate the nature of the systems these platforms are built upon, and the same opacity shows up when interacting with platform support. In fact, it’s shocking how little these companies appear to know about their own platforms.

For example, Reddit will always attract users who use the upvote and downvote system as a weapon and brigade other subreddits because those are core functions of Reddit. Twitter will always attract users who will send quick, snarky takedowns to the targets of their abuse because that’s the kind of interactions the platform encourages.

Can you find something good on Reddit, on Twitter, on other toxic platforms? Of course, but those communities are working against the system, not with it. They exist in spite of the platform’s structure, not because of it. Often, this means their members are very dedicated or content with being a smaller voice on the platform.

Dead Blue Bird

Twitter is the ur-example of the design failures that lead to toxic user experiences. By adhering to the tech-bro libertarian ideology that permeates Silicon Valley, the platform has produced a hostile user climate that damages both discourse and the user base as a whole.

It can’t be saved.

Cynical as it sounds, I did not come to this conclusion out of mere sentiment. If anything, I started by asking how one could fix the platform’s problems. What I found were several practical avenues:

  • Get rid of dot replies. This is one way that abusers target multiple people.
  • Hide followers by default. An abuser should not be able to have access to a list of followers.
  • Require a “drawbridge”, where both parties must consent before interaction occurs.
  • Require a default wait time of several days before a new account can post freely. Until then, a moderator must approve everything the account sends out. This is to discourage sock puppets and spammers.
  • Ban neo-Nazis, fascists, and serial harassers.

There are likely other actions that could be taken, but let’s assume these fairly basic steps are implemented. What would be the result?

A significant amount of traffic, engagement, and user interaction would be lost, which means lost revenue. Massive amounts of harassment would be levied at Twitter support, with freeze peach arguments hurled at the unfortunate soul tasked with pulling the trigger. “Conservative voices are being silenced!”

It would almost immediately kill Twitter. These steps depart so far from Twitter’s core design philosophy that one’s efforts would be better spent creating a new platform.

The Wrong Lessons of Yik Yak and Imzy

Two platforms currently come to mind when it comes to attempts at resolving platform toxicity: Yik Yak and Imzy. Both have failed, and both could feasibly offer lessons about how to proceed.

The first is Yik Yak. Originally a geolocation-based social network known for its use on college campuses, it met its end after a short bout of popularity. It launched with complete anonymity and, after some controversy, began requiring usernames. A common narrative holds that this username requirement is what led to its downfall.

This is wrong. Plenty of other platforms require a username and function just fine. The real problem Yik Yak had was in its core design philosophy. Requiring usernames went against the platform’s main utility: firing anonymous shots at other students and professors. Harassers who used the platform knew exactly what they were doing, and the platform drew exactly those kinds of people.

But the problem really began with the decision to launch with total anonymity, because the toxic audience that arrived first became the platform’s core. This was a black hole it could not escape, and one that Twitter cannot escape either.

Imzy, likewise, was built to be a kinder Reddit. For what it’s worth, it seemed to deliver on that promise; the discussions I had there were genuinely better than on other platforms. Imzy’s approach was not wrong, but it ran into one major snag.

Due to being “unable to find a place in the market”, the platform was shut down before it could grow into something self-sustaining. The network effect means that entrenched platforms will always have an advantage over new contenders, because social media platforms definitionally require many users and the discussions they generate.

But even platforms that are incredibly popular aren’t nearly as profitable as one would think. Reddit is still not profitable and keeps going on Silicon Valley investment. YouTube’s figures are notoriously closely guarded, and there’s a level of opacity there; part of the problem is that YouTube is trying to position itself as a TV competitor. The sheer volume and scale of managing all that content requires a serious amount of funding, which is why most YouTube competitors fail.

Humane Social Media

The difficulty with producing humane social media is that it has to take the potential for abuse into account. Not everyone has good intentions, and if you build a platform that rewards and encourages bad behavior, you’ll see it overtake your platform.

This is also not going to work as a capitalist platform, because the incentive structures of capitalist social media emphasize “engagement” and bias moderation decisions toward whoever has the larger audience, as can be seen in how Milo Yiannopoulos faced no consequences until he started going after Leslie Jones.

There isn’t much money to be had in polite conversation. It has enormous social value, but translating that into economic value, as Imzy tried to do, is difficult. But we’re going to need to start asking what kinds of platforms will produce consistently good discussion, and whether our current platforms are doing that.

I believe that better conversation, better dialogue, better thoughts are all possible. I do not think they are possible under our current platforms and the capitalist structures that underpin them. It’s time to start thinking beyond them.
