There’s no place on the social networks for the anti-social

Enrique Dans

--

Twitter says it is implementing changes to its algorithm to limit the visibility of offensive messages and so-called troll accounts, using thousands of behavioral signals to establish which messages or accounts deserve such treatment and then confining them to limited-access areas within the social network. For its part, Facebook claims to have eliminated 583 million fake accounts and 865 million mostly spam updates in the first three months of 2018.

Car insurers say up to 90% of the fraudulent claims they receive are generated by just 3% of drivers. All industries are subject to abusive behavior, impinging on those of us who play by the rules. The social networks are no different: a small percentage of people abuse the system by using it for purposes that, if left unchecked, would make them unsustainable. Spammers, trolls and others who indulge in abusive behavior create all kinds of problems, and the social networks’ concern to protect freedom of expression means this type of behavior has never been properly discouraged. That said, stamping out abuse is not easy: people whose accounts are closed usually just open another and get back to business.

Twitter may have the best intentions, but the measures provide only a partial solution. Instead, people whose only concern is to insult, spam or harass others should be excommunicated, and the means found to prevent them from opening an account again. As we all know, that’s nearly impossible on the web, but it can at least be made difficult, discouraging them from trying to game the system.

Summary treatment is the only real way to discourage certain behaviors that, let’s face it, are antisocial and therefore have no place on a social network. Every day I tell Twitter about specific accounts that are clearly violating its terms of service, by spamming for example, but the company does ABSOLUTELY NOTHING about it. Instead, it allows me to block them, which is simply a sticking plaster that does nothing to solve the problem. Over the years, Twitter has responded to these kinds of issues with half measures and good intentions… and as we all know, the road to hell is paved with good intentions.

One of the tasks involved in managing a social network is enforcing its terms of service. What is the point of stating in those terms of service that spammers and those who use the network to harass or insult others will be expelled, if Twitter then does nothing? Doesn’t the company realize the damage this does to its credibility? The social networks have a very simple choice: user experience and their value proposition would be vastly improved by excluding the 3% of users who engage in abusive and antisocial practices, who use them to spam, open multiple accounts, or insult and harass others. Closing accounts and making it impossible to reopen one from devices identified via digital fingerprinting or other means of analysis would send a message, loud and clear. Instead, driven by the desire for growth and activity metrics, social networks delay or ignore such policies, reducing the value proposition of the network for the remaining 97%.

Is it really so difficult to apply the rules? I have long believed that the vast majority of problems associated with the social networks would be solved simply by applying their terms of use and taking a firm hand with the tiny minority of delinquents. In the real world, the rules that govern peaceful coexistence are applied immediately and are accepted by all, so isn’t it about time we started doing so online?

(In Spanish, here)

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)