Twitter Tests Tagging Profiles With ‘Potentially Sensitive Content’. But They’re Making The Same Mistake.

d’wise one
Published in Chip-Monks
Mar 13, 2017 · 5 min read

I’m not going to wax eloquent about the current prevalence of fake news or objectionable content, except to say that it seems to be pervading all forms of media, journalistic and social alike, and it’s casting a shadow of doubt over a lot of people’s judgement.

We’ve covered the impact of fake news during the U.S. elections, and more recently told you about Germany’s government taking significant steps to reel in social media giants and make them clean up their slates. But the fact of the matter is that the everyday social media user (like you and me) seems to be turning skeptic. There’s a niggling doubt about what we should believe and what we should ignore.

In the light of the criticism that they’ve been receiving over the matter, most social media platforms have started to take steps to try to put a lid on the problem.

The latest company to do this is Twitter.

Twitter has introduced a new feature that it calls the “Sensitive Account System”.
Twitter publicly flags some users’ profiles as containing “potentially sensitive images or language”, so as to warn other users and shield the overly gullible from the fallout.

What this also does is create an intervention page of sorts: instead of the potentially sensitive profile being displayed directly on the first click, a warning page is inserted, and the visitor has to click an Agree button to view the profile.
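For illustration only, here’s a minimal sketch of how such an interstitial gate might work. Every name in it (flaggedProfiles, resolveProfileView, acknowledgeWarning) is hypothetical; this is not Twitter’s actual implementation, just the flow described above.

```typescript
// Hypothetical sketch of an interstitial "sensitive profile" gate.
// None of these names come from Twitter's code; they only illustrate
// the warning-page flow described in the article.

type ProfileView =
  | { kind: "profile"; userId: string }
  | { kind: "warning"; userId: string };

// Profiles the platform has flagged as potentially sensitive.
const flaggedProfiles = new Set<string>(["@example_account"]);

// Warnings this visitor has already clicked through in this session.
const acknowledgedProfiles = new Set<string>();

function resolveProfileView(userId: string): ProfileView {
  // Unflagged profiles render directly.
  if (!flaggedProfiles.has(userId)) return { kind: "profile", userId };
  // Flagged profiles show the warning page until the visitor agrees.
  if (!acknowledgedProfiles.has(userId)) return { kind: "warning", userId };
  return { kind: "profile", userId };
}

// Called when the visitor clicks the Agree button on the warning page.
function acknowledgeWarning(userId: string): void {
  acknowledgedProfiles.add(userId);
}
```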

This move could be seen in multiple lights. It could be seen as a way to moderate the behaviour of “miscreant” users: once their profile has been flagged, they might try to be more careful about the kind of stuff they put out there.
It could also be seen as a way to wall off these potentially sensitive accounts from the general Twitter populace, rather like cordoning off a hazardous area.

There are other, similar steps Twitter has taken this year, like introducing a function that removes tweets containing potentially sensitive content from search results, and a 12-hour timeout for accounts that Twitter believes are engaged in abusive behaviour.

Thing is, both of these actions are enacted without the users ever knowing that they’ve been shown the red card.

On the face of it, this new move of inserting an intervention page seems to stem from a similar policy: going behind the “miscreant” user’s back instead of confronting them head-on.

Thus, even though the idea behind what Twitter is trying to do here seems good in theory, the move has drawn criticism, primarily because, as with most of Twitter’s anti-harassment measures, there’s a noticeable lack of transparency, and a fair amount of obfuscation, as to how accounts are deemed sensitive.

It can be quite frustrating for users not to know, or be notified, that their profile has been flagged; they will most likely find out in a very public manner, by others telling them so.

What’s worse, once Twitter tags or stonewalls you this way, there is no process to appeal if you believe your profile has been wrongly singled out, nor can you see the review process Twitter used to flag your profile in the first place.

What is also quite glaringly unclear is the process Twitter will use to mark such accounts as “sensitive”. Will it be based on other users’ reports, on some kind of automated system that Twitter has put in place, or on a team that will go through the content and decide what is appropriate and what is not? Nothing is really known about this machinery.

Thing is, on our website, as on every other user-focused platform, the platform becomes the property of its users. Democracy, remember?

The lack of relevant information on the process opens up the possibility that well-meaning, non-abusive Twitter users could have their accounts wrongly flagged as sensitive if enough trolls report them, or if Twitter’s own algorithms mistakenly identify some shared images or videos as inappropriate.

It could then be a situation similar to the one Twitter faced quite recently when it made changes to public lists.
Twitter changed the notifications users receive when they are added to other users’ lists, and was then forced to roll the change back because it ended up contributing to bullying instead of helping combat it! If this new feature and the process behind it are not refined enough, the fallout could be much worse.

However, the good news is that the feature is still under testing, and has only been rolled out partially, not across the board. A Twitter spokesperson confirmed the new feature, saying “this is something we’re testing as part of our broader efforts to make Twitter safer”.

So if the menace the feature causes ends up outweighing its supposed good, we can cross our fingers and hope that Twitter does not make it a permanent fixture.

All this being said, I must end with a clear statement of my own view: hate speech, objectifying people, and spreading disinformation all stem from malicious intent.
Make no bones about it. Such people should be weeded out and made to stand in the proverbial corner. So each of you: be circumspect in how you express yourself. Do so decorously and politely, and most importantly, speak only when you’re sure of your facts.

Others in your life, and those beyond your immediate circle of friends, do read what you write and see what you post. They’re judging you too, and the reputation you’re building will remain in their minds, and on the world’s internet servers, much longer than you physically do!

So speak up — but politely, intelligently, and gently!

Originally published at Chip-Monks.
