Little Progress Evident In Facebook’s Fight Against Terrorism

In April 2018, Facebook CEO Mark Zuckerberg's congressional testimony laid bare Congress's total unpreparedness to deal with the challenges of social media. We wrote about the grave consequences for national security, showing how Congress's shortcomings on social media were part of broader deficiencies in understanding the math underpinning today's technological revolution. We highlighted a statistical sleight of hand Zuckerberg employed multiple times, stating, "Today, as we sit here, 99 percent of the ISIS and Al Qaida content that we take down on Facebook, our A.I. systems flag before any human sees it," a figure that provides no insight into what portion of the Daesh and Al-Qaeda content on the product Facebook actually removes. Though Zuckerberg referenced Facebook's efforts to remove terrorist content as a whole 11 times across his testimony, he made no mention of any other terrorist group, jihadi or otherwise, whose content had been removed.

Facebook has made progress on removing Daesh and Al-Qaeda content, but this represents only a sliver of terrorist propaganda. Estimating the size of the universe of terrorist propaganda on the product is a massive task and beyond the scope of this post. To Facebook's credit, its API is the most restrictive among the major social media products, which means our tracking and discovery is manual. This demanded that we develop metrics to systematize the process, and these metrics, while not a measure of the extent of jihadi content on Facebook or the portion removed, are good proxies for the difficulty jihadis face when using Facebook to connect with other jihadis.

Sample Turkish jihadi profile with official Daesh video. The video has been shared from this post as far afield as Ghana and Denmark.

Our discovery process involves typing the native-language name of a State Department designated Foreign Terrorist Organization (FTO) into the Facebook search bar, going to a resulting fan page or profile, and investigating those who have commented on, reacted to, or shared content. "Account discovered" is achieved when we find a profile, active within the past week, whose photos show the individual with a weapon and which has shared content originating from the official website or Telegram channel of an FTO. "Network discovered" is achieved when an account is discovered and five of the top nine friends shown on the profile's page also meet the criteria of "account discovered." We exclusively look at publicly available content.
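The two criteria above amount to simple predicates over a profile's public attributes. A minimal sketch, assuming a hypothetical `Profile` record (the field names are illustrative, not any actual Facebook API; discovery itself was manual):

```python
# Hypothetical formalization of the "account discovered" and
# "network discovered" criteria. The Profile structure and its field
# names are assumptions for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Profile:
    last_active: datetime               # timestamp of most recent activity
    photo_with_weapon: bool             # photos show the individual armed
    shared_official_fto_content: bool   # shared content from an FTO's site or Telegram
    top_nine_friends: list = field(default_factory=list)  # friends shown on the page

def account_discovered(p: Profile, now: datetime) -> bool:
    """Active within the past week, armed photo, and official FTO content."""
    return (now - p.last_active <= timedelta(days=7)
            and p.photo_with_weapon
            and p.shared_official_fto_content)

def network_discovered(p: Profile, now: datetime) -> bool:
    """The account is discovered and at least five of its top nine
    publicly shown friends also meet the account criteria."""
    return (account_discovered(p, now)
            and sum(account_discovered(f, now)
                    for f in p.top_nine_friends[:9]) >= 5)
```

Formalized this way, the metrics make the manual workflow repeatable: two researchers applying the same predicates to the same public profile should reach the same verdict.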

Sample public friend list of a Bosnian member of a jihadi network. Pictured are profiles from Bosnia, Turkey, Georgia, Russia, and Iraq. Not pictured are accounts from 17 more countries including the United States.

Below is a comparison of these metrics in July 2017 and January 2019:

Ultimately, our metrics suggest that Facebook's efforts over the past eighteen months have not noticeably increased the difficulty jihadis face in using the product to find the like-minded.

We developed these metrics to improve our workflows, and our own optimizations may account for some of the change, or lack thereof. But starting shortly after July 2017, we began shifting our focus away from jihadis to malicious state actors and their proxy groups. With resources orders of magnitude greater than what any jihadi group could muster, and every incentive to remain clandestine, state actors pose a challenge that dwarfs the one posed by jihadis.

Last week, we wrote about the dangers of AI becoming a solution looking for a problem. For whatever AI can accomplish, a simple search in Arabic for "Jihadi on the path of Allah" reveals that phrase listed as a Work Position by 1,500 users, easily discovered by a three-line query but seemingly missed by AI.

Facebook must develop metrics that provide insight into the actual goal: stopping misuse. The share of removed content flagged by AI is a metric of money saved by automating away manual review, not a metric of product improvement. Publicizing the removal of five accounts in 2018 from networks that already numbered in the thousands by 2015 serves PR goals, not product ones. Multiple studies, and much of the success of the News Feed, outline methods for measuring content's spread and its effectiveness at changing behavior, and Facebook must develop and publicize changes in those measurements as applied to disinformation. Some progress is being made, with Facebook recently removing a far greater number of Russian bots and trolls. But that progress is largely opaque. The challenge of stopping state actor misuse is many times that of stopping jihadi misuse, and the consequences of failure many, many times greater still.