Tech Influence in Freedoms of Expression: Social Media & Radicalization

Today’s news headline: “Facebook, Twitter, Google and Microsoft Team Up to Tackle Extremist Content”, wherein:

“The companies are to create a shared database of unique digital fingerprints — known as “hashes” — for images and videos that promote terrorism… when one company identifies and removes such a piece of content, the others will be able to use the hash to identify and remove the same piece of content from their own network.”
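The mechanics of such a shared database can be sketched in a few lines. This is a toy illustration under stated assumptions, not the companies' actual system: real deployments use perceptual hashes (such as PhotoDNA) that survive re-encoding and cropping, whereas an exact cryptographic hash stands in here, and the `SharedHashDatabase` class and its method names are invented for the example.

```python
import hashlib


class SharedHashDatabase:
    """Toy shared registry of fingerprints for flagged media.

    One company reports a piece of content; every participant can then
    match the same bytes against the shared fingerprint set.
    """

    def __init__(self):
        self._hashes = set()

    def fingerprint(self, content: bytes) -> str:
        # Stand-in for a perceptual hash: an exact SHA-256 digest.
        return hashlib.sha256(content).hexdigest()

    def report(self, content: bytes) -> str:
        # Company A identifies and removes content, then shares its hash.
        h = self.fingerprint(content)
        self._hashes.add(h)
        return h

    def is_known(self, content: bytes) -> bool:
        # Company B checks an upload against the shared database.
        return self.fingerprint(content) in self._hashes


db = SharedHashDatabase()
db.report(b"flagged-video-bytes")            # company A flags and shares
print(db.is_known(b"flagged-video-bytes"))   # company B matches -> True
print(db.is_known(b"unrelated-photo-bytes")) # no match -> False
```

Note that because only fingerprints are exchanged, no participant has to redistribute the underlying media itself; the trade-off, raised later in these notes, is that whatever enters the shared set is removed everywhere at once.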

Some concerns and solutions, as detailed in today’s IGF workshop, “Social Media and Youth Radicalization in the Digital Age”:

DEFINITIONS OF “RADICALIZATION”.

Barbora Bukovska, Senior Director of Law and Policy at ARTICLE 19 (which promotes freedom of expression around the world) concurs with UNESCO's Knowledge to Foster Inclusive Knowledge Societies report that "radicalization" and "extremism" have been poorly defined, leaving the terms open to use in human rights abuses. (She notes that Guy Berger, UNESCO Director for Freedom of Expression and Media Development, was once considered "radical" because he advocated for racial equality during South Africa's apartheid era.)

Part of the reason, according to UNESCO, is that there is very little empirical research on the topic, and almost no academic studies, owing to its contemporary and quickly evolving nature.

Poorly defined terms allow governmental bodies to suppress not only violent groups but also civil rights advocates, NGOs, journalists, social movements, and religious institutions they may arbitrarily deem subversive. In turn, these oppressive rules gradually erode civil society and citizens' rights (a topic I touched upon in depth yesterday).

With no clear definition of "radicalization" in government, the media, or the public eye, groups like ISIS, Al-Qaida, and sometimes even the Kurds are lumped into one large whole despite their differences in approach and philosophy. Grouping them together artificially inflates the apparent scope of each organization's activities and prevents truly useful solutions from being created.

CENSORSHIP.

In 2015, both Twitter and Facebook attempted to combat ISIS by deactivating thousands of accounts, and they have been largely successful in ejecting the group's propaganda from their platforms. That same year, the encrypted communications service Telegram blocked 78 ISIS-related channels, which the Middle East Media Research Institute says were used by the group not only for "mere re-posting of jihadi groups' propaganda," but also for content that "includes tutorials on manufacturing weapons and launching cyberattacks, calls for targeted killing and lone-wolf attacks, and more."

Not all companies have been as successful, and the channels through which the organization can communicate are plentiful. ISIS still uses Telegram, as well as similar encrypted chat programs such as Kik, and these communications serve as a private venue for the group to recruit new converts.

Twitter's removal of centralized accounts may, on the surface, appear to reduce support for ISIS, but in the grand scheme the technique falls short of being effective, according to a RAND Corporation report that examined ISIS Twitter networks for ten months. Even setting aside the larger fact that counterspeech and counterviolence campaigns have fallen short, the reasons for Twitter's failure are plentiful:

  • Due to the jihadist group's decentralized networks, removing one or two "official" disseminators is insufficient; others remain to spread the information further. Proliferation is also quick: ISIS supporters are simply more social-media-savvy than their opponents, routinely out-tweeting them by producing 50 percent more tweets per day, according to RAND Corporation's Examining ISIS Support and Opposition Networks on Twitter report.
  • Twitter allows for targeted messaging, which is harder to police.
  • Twitter does not limit or censor its hashtag search functionality, which has become one of the group's preferred vehicles for disseminating propaganda. Documenting the Virtual ‘Caliphate’, a Quilliam Foundation report, explains, "By using these tags in combination with a set of other 'key' hashtags to refine the search, as Islamic State supporters do, it is possible to circumnavigate the Twitter bots — computer programs set up to create automated posts — and other sources of noise that flood jihadist channels seeking to disrupt easy access to propaganda."
  • According to Phillip Louhaus, a national security expert, Twitter hosts a number of sympathetic actors who serve as intermediaries guiding others to jihadist groups, without directly inciting violence or committing offenses for which Twitter would suspend their accounts.

Additionally, while the blocking and censorship techniques used by tech companies are generally undertaken with "good intent" and can sometimes be successful, they also risk unintended "collateral damage". Facebook's blanket censorship sweeps, for example, led to a number of women named Isis having their accounts deactivated.

According to Rebecca MacKinnon of Ranking Digital Rights, an organization that evaluates internet companies on their commitments, policies, and practices around freedom of expression, this inevitable "collateral damage" also lacks adequate remedies. Those who have their accounts deactivated generally struggle to get their content reinstated, unless they have friends at digital freedom or civil rights organizations who can contact the companies directly.

Companies like Google state publicly that they do not believe digital censorship is a good practice, and UNESCO asserts that there is very little empirical research and no clear causal connection between online communications and radicalization. Yet despite the rhetoric, the current move toward "hashing" is clearly a type of censorship, one deemed "acceptable" in very specific circumstances.

TYPES OF RESPONSES.

UNESCO identifies the responses to radicalization as:

  1. Blocking or removing;
  2. Promoting media literacy that creates intercultural dialogue;
  3. Engaging in counterspeech, or calling out those who are engaging in dangerous speech, to provide support to targeted communities, and to shift norms through communication;
  4. Strengthening the role of journalism.

Counterspeech has become the most popular countermeasure. Companies like Google use it to hyper-promote truth and understanding, by favoring positive search results, for example, in the hope of drowning out harmful content. They are also creating initiatives such as Creators for Change, which funds YouTube creators to make more content that counters hate.

TRANSPARENCY & ACCOUNTABILITY.

Companies, Google in particular, have been somewhat transparent about their interactions with governments, but MacKinnon stresses that still very little data is recorded about what gets taken down and when. It is also unclear, she says, whether these companies are doing human rights impact research around the creation and enforcement of their Terms of Service.

Many of these companies are likely being pressed by governmental bodies into experiments like "hashing". This blurs the lines of what is or isn't acceptable censorship, and raises questions about who will ultimately make those decisions.

In a November 2015 interview with Wired, Jillian York, the Electronic Frontier Foundation’s Director for International Freedom of Expression, says, “While it’s true that companies legally can restrict speech as they see fit, it doesn’t mean that it’s good for society to have the companies that host most of our everyday speech taking on that kind of power.”

People should be cautious, says Hany Farid, a computer scientist who is working on some of the technology which will be employed in this “hashing” venture: “There needs to be complete transparency over how material makes it into this hashing database and you want people who have expertise in extremist content making sure it’s up to date. Otherwise you are relying solely on the individual technology companies to do that.”

(Notes from Internet Governance Forum, 2016 December 6)

Ω