Facebook And Google Move To Block Violent Videos Automatically

New measures show the companies are more aware of extremism and are leveraging their reach to quash it.

Google, Facebook and other internet video services have quietly started using automation to remove extremist content from their sites, according to sources familiar with the process who spoke to Reuters.

The move marks a big step forward for internet companies that are eager to remove violent propaganda from their networks.

The sites are also under pressure from governments around the world to act against the spread of violent propaganda, in the wake of increasingly frequent attacks such as those in France, Belgium, and the United States.

The sources told Reuters that YouTube and Twitter are also among the sites deploying systems to block or quickly delete videos posted by the Islamic State and other groups publishing similar material.

The technology was originally developed to identify and remove copyright-protected content. It works by searching for “hashes”, unique digital fingerprints assigned to each piece of content, so that any upload matching a known fingerprint can be identified and the offending material removed rapidly and automatically.
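As an illustration only, here is a minimal sketch in Python of how such hash matching could work. Every name here is hypothetical, and the plain SHA-256 digest is an assumption; real systems likely rely on more robust perceptual fingerprints that survive re-encoding and minor edits.

```python
import hashlib

# Hypothetical database of fingerprints for content already ruled violent or extremist.
# In practice this would be populated by takedown decisions; real systems likely use
# perceptual fingerprints rather than exact file hashes, so edited copies still match.
banned_hashes: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Compute a simple digital fingerprint of an upload (here, a SHA-256 digest)."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(video_bytes: bytes) -> bool:
    """Return True if the upload matches a fingerprint already in the banned database."""
    return fingerprint(video_bytes) in banned_hashes

# Example: once a video has been banned, a byte-identical re-upload is caught automatically.
banned_video = b"raw bytes of a video ruled in violation"
banned_hashes.add(fingerprint(banned_video))
assert should_block(banned_video)            # repost of known content is blocked
assert not should_block(b"unrelated clip")   # unknown content passes through as usual
```

In this toy version only a byte-identical copy would match; a production system would need fingerprints that tolerate re-encoding, cropping, or other small edits.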

The system can also detect attempts to repost content that has already been identified as undesirable.

None of the companies identified as using this technology would confirm its use to Reuters, but sources familiar with the process said that newly posted videos could be checked against a database of banned content to identify reposts of violent material.

In late April, amid pressure from US President Barack Obama and European leaders concerned about online radicalization, internet companies including Alphabet Inc.’s YouTube, Twitter Inc., Facebook Inc. and CloudFlare held a call to discuss options, including a content-blocking system put forward by the privately funded Counter Extremism Project.

The discussions focused on the critical but difficult role some of the world’s most influential companies now play in addressing terrorism, free speech and the line between government and corporate authority. Understandably, these lines can blur, and the companies need established protocols for deciding how to classify incidents and act on them correctly.

So far, none of these companies has adopted the anti-extremism group’s system, as they have typically been wary of any outside party dictating how their sites should be policed.

“It’s a little bit different than copyright or child pornography, where things are very clearly illegal,” said Seamus Hughes, deputy director of George Washington University’s Program on Extremism. “Extremist content exists on a spectrum,” Hughes said, “and different web companies draw the line in different places.”

Until now, most of these social sites have relied mainly on users to flag content that violates their terms of service, and many continue to do so. Changing this would, first, be very complicated and, second, risk disrupting the user-friendly experience their active members expect.

Under the current protocol, any flagged material is individually reviewed by human editors, who delete postings found to be in violation. The companies now using automation are not publicly discussing it, two sources said, in part out of concern that terrorists might learn how to manipulate their systems, or that repressive regimes might insist the technology be used to censor opponents. “There’s no upside in these companies talking about it,” said Matthew Prince, chief executive of content distribution company CloudFlare.
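To make the contrast concrete, here is a minimal sketch, again in Python, of how a platform might combine the older flag-and-review workflow with the automated hash matching described above. All function and variable names are illustrative; the article does not describe any company’s actual implementation.

```python
from collections import deque

review_queue: deque[str] = deque()   # flagged posts awaiting a human editor
banned_hashes: set[str] = set()      # fingerprints of content already ruled in violation

def handle_upload(post_id: str, content_hash: str) -> str:
    """Automated path: block content matching a known fingerprint, otherwise publish."""
    if content_hash in banned_hashes:
        return "removed_automatically"
    return "published"

def handle_user_flag(post_id: str) -> None:
    """Manual path: user flags still go to human editors for individual review."""
    review_queue.append(post_id)

def human_review(post_id: str, content_hash: str, violates_terms: bool) -> str:
    """A human editor's decision also feeds the fingerprint database for future uploads."""
    if violates_terms:
        banned_hashes.add(content_hash)   # future reposts are now caught automatically
        return "deleted"
    return "kept"
```

The design point of the sketch is that human decisions feed the fingerprint database, so a video only needs to be reviewed once before identical reposts can be blocked automatically.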

But done right, these efforts could be genuinely effective in controlling the worldwide reach of terrorist propaganda.

Chip-Monks says: This step should have been taken a long time ago, and it could possibly have prevented many of the attacks humankind has suffered in various forms.

But better late than never!

At this point, we should appreciate the steps taken by the authorities and the internet platforms; these should help curb such crime by actively flagging it and dealing with it promptly.


Originally published at Chip-Monks.