Is Facebook’s ban of far-right extremists and Google’s new YouTube policy different from state-led censorship such as in China?
Yes… and no.
Facebook is a private company; therefore, it can enforce its rules without having to ask anyone. When it does consult users, it is merely a way to pretend it is listening, and a way to avoid stepping on too many toes. Policy changes do enrage a few users now and then, but overall Facebook has managed to keep growing and increasing its value.
Nevertheless, being a private company doesn’t mean not having to deal with the public or listen to it (in a more meaningful way than sporadically asking if everything is OK and getting people to click “accept” after a policy change, pretending they have a say). Facebook provides a public service and has billions of users (more than the population of any individual country, or several combined), yet it is largely unaccountable for its actions.
Normally Facebook answers only to investors and to the companies that flood the network (and its sister platform, Instagram) with ads. That needs to change. Even though Facebook is indeed a private company, it provides a vital public service and, as such, must be accountable to governments and to its users.
Facebook has recently banned far-right extremists from the platform in a decision meant to tackle fake news and extremism, but if one considers Facebook a public square for people to engage and debate, the decision seems less acceptable; in fact, it is quite problematic.
In a world that too often depends on online communication, preventing someone from using it amounts to censorship: forbidding someone from speaking in a public space to anyone willing to listen. I’m not preaching here in defence of Infowars or Milo Yiannopoulos, far from it, but I’m worried that we are giving a single company too much power to prevent people from exercising their freedom of speech — and that power can turn against anyone.
It’s not that I’m for unrestricted freedom of speech, much less that I think people can never be banned from social platforms. The issue is that Facebook is bigger than pretty much every state in the world and is an important forum for debate and discussion (some would say the most important one nowadays), yet the company is completely free to change its rules and ban anyone it wants, without any kind of accountability.
In, let’s say, real life, there’s the justice system. There are courts of law; there’s someone to turn to. The judges are real, and so are the lawyers. On Facebook, however, decisions are made without any right to contest or protest, by faceless strangers behind a computer, and with unclear rules on how to appeal.
A friend recently had his Facebook account deleted for “breaking the rules”. He had not broken any rules, but he was the target of a hate campaign that finally managed to knock out his account. His appeal to a faceless Facebook employee failed: his account stayed deleted, his complaints were not accepted, and it was never clear why.
And that lack of clarity is part of what makes Facebook work. It is not in Facebook’s interest to leave room for contradiction. Its rules are applied at the censor’s pleasure.
We often read about how China uses artificial intelligence to monitor its citizens, but is Facebook that different? Sure, we choose to join the platform… but do we really? Not joining means facing a kind of blackout: a communication blackout, a social blackout. It means being locked out of a whole world where much of social life takes place.
So, in a sense, we have a choice between social ostracism and joining a platform that scans our lives for every little detail and shares them with partners, too often without any consent (remember Cambridge Analytica?). We are gently forced to join Facebook, and then we have no more rights over our data and our online lives. It is almost a dictatorial dystopia.
On Facebook, your complaints are rarely heard, and the means to complain are tortuous and unclear, as is the decision process itself. We don’t know who the judges are, how they were chosen or what rules they should follow — or even whether the rules are applied uniformly (hint: they are not). Once accused of some “crime”, we have no one to turn to; we are often suspended without the right to appeal, without knowing who complained or why.
The similarity with an authoritarian state is no coincidence; it is the standard that governs the world of social networks.
Back to the extremists: my issue is not that they are being kicked off Facebook, but that they are being punished under rules no one voted on, without the right to defend themselves properly. There are no judges, no lawyers. And, above all, what is happening to them could happen to anyone who is not an extremist.
Too often people end up suspended for posting something that one of Facebook’s censors thinks is pornographic, offensive or violent, yet videos of people being beheaded usually escape those same censors.
The issue here, it is important to highlight, is precisely the lack of accountability. The rules are not clear, they are not applied universally, and there is no way to complain effectively. It is quasi-dictatorial.
It is virtually impossible not to know someone who got a few days’ suspension for being ironic, for criticising something, for using “forbidden words”. There are countless cases of people suspended for using expressions that are employed to attack the LGBT community but that have often been appropriated, and even affectionately used, by its members; Facebook nonetheless considers such expressions “hate”.
I’m not even going to start on the debate over what counts as hate speech because, aside from Nazism, the debate is endless. But I’d also like to point out another effect of Facebook’s censorship: sometimes throwing the unwanted out of public spaces ends up pushing them towards more radicalised positions in spaces with less or no control, where hate speech can proliferate without any kind of supervision. We all know of a social platform created by and for the alt-right precisely to let unfiltered, hateful speech spread freely.
We find ourselves at a real digital crossroads. The existing means of identifying and combating hate speech are barely effective, often hitting those who have nothing to do with the conversation. At the same time, we are still at the beginning of the debate about what really constitutes hate speech, and we are even worse off in the debate about how to punish it, often preferring banishment and censorship to more constructive forms of conflict resolution.
And Facebook is not alone.
Twitter, too, has a policy of banning users and will now also hide the tweets of politicians who break its rules.
Recently, YouTube/Google announced that they will also delete and ban any content they consider “extremist”. At first glance, this is no big deal: who wants neo-Nazi preaching freely available to all, or anti-vaccination videos posing a clear threat to public health? The problem is that YouTube won’t stop there. Videos containing “offences to sexual orientation” will also be banned. What does that mean?
I’m all for trans rights, but is commenting on, or even disagreeing with, the participation of trans athletes in women’s sports an “offence to sexual orientation”? It depends on whom you ask. And that is the biggest issue with companies full of good intentions, ready to censor anything, relying on algorithms incapable of understanding irony or simple debate.
Will opinion be protected? And how will it be protected? Companies already struggle to live up to their own standards; how can they assure users that they won’t simply censor in bulk so as to avoid future problems?
I’ll give you an example: BDS, the Boycott, Divestment and Sanctions movement against the Israeli state, is too often called “antisemitic”. It is not, even though some antisemitic individuals might support it; but it is easy to sell the idea that “Israel” is the same as “Jew”, even though there are plenty of Jews who oppose Israel and support BDS. But what if Facebook subscribes to the idea that BDS is hate speech and starts to censor every criticism of the country/regime (as the Israeli government wants)? Who is behind the decision? Why should everyone be subjected to such a decision without having a say?
In the end, it is (or would be) just a dictatorial, unilateral decision… like those of authoritarian governments.
This is not about defending Infowars’ supposed right to incite hatred and violence against innocent people, for example, or about allowing anti-Semitism to be openly preached, but rather about opening a dialogue on more constructive ways of relating socially online. We say and do things online that would be unthinkable in the “real world”. Perhaps we are in a moment of transition, getting used to new technologies, but the fact is that large monopolistic networks such as Facebook often act as dictatorial states, denying rights to their “citizens”. And we must not just accept it.
Infowars can be banned. Milo can be banned. Paul Joseph Watson can be banned. I would even say they should be. But we need clear policies; we need to know the rules and the judges and, above all, to be able to take part in decisions that affect our (online) lives. The problem, however, is the lack of transparency and accountability. Someone with power in Facebook’s hierarchy wakes up in a bad mood and decides to expel people who defend something he doesn’t like… and there’s nothing we can do about it.
I must also mention that such initiatives help to fuel the victimhood of broad sectors of the right, which feel directly attacked even when they do not necessarily defend, for example, racist ideals. Openly progressive companies defining for themselves what counts as “hate speech” end up creating an odd situation. The “borderline content” category is also confusing. YouTube refuses to explain what would fall into it, but the truth is that censoring content that does not actually violate the rules, but might, is at best complicated.
The lack of transparency is striking.
Recent research by Harvard’s Berkman Klein Center for Internet and Society pointed to a fundamental flaw in the YouTube algorithm (or, in fact, its perfect functioning) that had helped to promote a paedophile network in Brazil. YouTube’s automated system links videos based on what it learns from users’ tastes and clicks (and on a progression of recommendations); that is, paedophiles end up creating trails and a network of recommendations that are then reproduced for the site’s entire audience.
Individually the videos are harmless: parents recording and publishing the lives of their underage children online. Together, however, they form a scary paedophile network. Is the solution to ban the channels and censor the videos, or to understand that the problem often lies in the algorithm itself?
The case is also useful for analysing how the conspiracy-theory videos and hate speech that YouTube now says it will fight actually spread: the algorithms are programmed to promote such content.
Many left-wing acquaintances of mine in Brazil have been celebrating Facebook’s and Google’s decisions, but will they be happy when they become the targets — for arguing, for example, that Stalin never killed anyone (or didn’t kill enough), or for spreading fake news (like the famous story that former President Dilma Rousseff suffered a coup at the behest of the CIA)? Because if Facebook’s and Google’s rules against fake news and hate speech are applied uniformly, no one will be safe.
And there’s no one to complain to.
This article is an updated and longer version of “Censorship on Facebook and YouTube”, published at Areo Magazine.