Is Facebook Violating U.S. Counterterrorism Laws?

by Ruthie Blum
May 30, 2018 at 4:30 am

During his Congressional hearings in April, Facebook CEO Mark Zuckerberg was interrogated by members of the House and Senate. Because it was evident from their questions that many of the lawmakers were actually clueless about how Facebook works, much time was wasted on Zuckerberg explaining the basics of the platform's tools and business model. A select few — among them Senator Ted Cruz — challenged Zuckerberg about the political slant of his platform, which has led to discrimination against conservative groups and individuals.

Zuckerberg acknowledged that:

“…Facebook and the tech industry are located in Silicon Valley, which is an extremely left-leaning place. And this is actually a concern that I have and that I try to root out in the company is making sure that we don’t have any bias in the work that we do, and I think it is a fair concern that people would at least wonder about.”

When asked what Facebook is doing to prevent terrorists from using the platform to recruit and coordinate, Zuckerberg said that 200 of Facebook's 25,000 employees monitor such content and activity in 30 languages.

The question that Zuckerberg should have been asked is why organizations and individuals designated by the State Department as terrorists are able to open pages on his platform in the first place, let alone continue to maintain those pages, or have their content blocked only temporarily before it is allowed to be re-posted. It would have been a particularly relevant query, given the launch in July 2017 of the Global Internet Forum to Counter Terrorism, announced by Facebook, Microsoft, Twitter and YouTube. The stated goal of the Forum was to:

“help us continue to make our hosted consumer services hostile to terrorists and violent extremists.
“The spread of terrorism and violent extremism is a pressing global problem and a critical challenge for us all. We take these issues very seriously, and each of our companies have developed policies and removal practices that enable us to take a hard line against terrorist or violent extremist content on our hosted consumer services. We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online.”

The timing of the above joint social-media endeavor is noteworthy. Four months earlier, in mid-March of 2017, major companies began withdrawing or reducing advertising from Google Inc., the owner of YouTube, for allowing their brand names to pop up alongside videos promoting jihad. According to a Middle East Media Research Institute (MEMRI) report released in June 2017 — one month ahead of the launch of the Global Internet Forum — AT&T, Verizon, Johnson & Johnson, Enterprise Holdings and GSK had pulled their ads from Google for its failure to remove jihadi content that MEMRI volunteered to assist in flagging.

Nearly a year has passed since the establishment of the Forum, but groups such as Hezbollah and Hamas — among the 64 organizations currently designated by the State Department as Foreign Terrorist Organizations (FTOs) — still have Facebook pages, Twitter accounts and YouTube videos. Furthermore, to this day, some of the jihadist content flagged by MEMRI in 2015 remains online. One example is a clip entitled: “Shuhada (Martyrs) Of Islam, Look They Are Smiling In Death,” originally posted in 2009.

The State Department’s Bureau of Counterterrorism defines FTOs as “foreign organizations that are designated by the Secretary of State in accordance with section 219 of the Immigration and Nationality Act (INA), as amended,” asserting that “FTO designations play a critical role in our fight against terrorism and are an effective means of curtailing support for terrorist activities and pressuring groups to get out of the terrorism business.” In addition, “Under Executive Order 13224 a wider range of entities, including terrorist groups, individuals acting as part of a terrorist organization, and other entities such as financiers and front companies, can be designated as Specially Designated Global Terrorists (SDGTs)…”

According to the Counterterrorism Bureau:

“It is unlawful for a person in the United States or subject to the jurisdiction of the United States to knowingly provide ‘material support or resources’ to a designated FTO. (The term ‘material support or resources’ is defined… as ‘any property, tangible or intangible, or service, including currency or monetary instruments or financial securities, financial services, lodging, training, expert advice or assistance, safehouses, false documentation or identification, communications equipment, facilities, weapons, lethal substances, explosives, personnel… and transportation, except medicine or religious materials.’…”

The more important question, then, is whether Facebook, Twitter and other social media platforms are providing “material support or resources” — in the form of a “tangible or intangible” property or service — to FTOs and SDGTs. The Counter Extremism Project (CEP) appears to think that the answer is yes. In a recent interview with the Telegraph, the authors of a CEP report scheduled to be released at the end of May pointed to Facebook’s “suggested friend” feature as a main culprit. “Facebook,” one explained, “in their desire to connect as many people as possible have inadvertently created a system which helps connect extremists and terrorists.”

In the immediate aftermath of the Facebook hearings, CEP Executive Director David Ibsen responded to Zuckerberg’s claims that 99% of terrorist content is removed from the site by stating that even if this is true, “given the volume of content uploaded to Facebook by the platform’s estimated 2.2 billion active users on a daily basis, the one percent of terrorist content that is not removed is a significant amount that needs to be addressed. CEP finds extremist content on Facebook on a regular basis, which shows that Facebook and the entire tech sector have much more work to do to stop its proliferation on their platforms.”

Such comments fall short of accusing Facebook and the other social media platforms of engaging in the crime of violating U.S. counterterrorism laws. But the revival of a failed civil suit against Facebook on behalf of victims of Palestinian terrorism does not.

The original $1 billion lawsuit — filed in July 2016 by civil rights attorney Robert J. Tolchin and Nitsana Darshan-Leitner, the founder and head of the Israel Law Center-Shurat HaDin (along with a separate suit filed in 2015) — alleged that Hamas used Facebook to “promote and carry out…terrorist activities,” including the fatal stabbing of U.S. Army veteran Taylor Force and others. The new suit, filed on the heels of Zuckerberg’s Congressional hearings, states: “Zuckerberg’s testimony makes it clear that Facebook employs its own subjective assessment when deciding whether to censor content. It appears that Facebook’s actual policy is to sometimes censor terrorist content and sometimes not, for reasons now known only to Facebook.”

In a statement to Courthouse News, Darshan-Leitner wrote:

“It’s now obvious that the social media giant has long been deeply involved in editing and manipulating the content on its platform, and has had the technology to block the incitement to terrorism as the plaintiffs in our cases contend. A really moral company would be doing more than mouthing empty ‘my bads’ and, instead, be reaching out to compensate these families.”

In his ruling in favor of Facebook’s motion to dismiss the previous suit, United States District Judge Nicholas G. Garaufis cited the Communications Decency Act of 1996, whose section on “Protection for private blocking and screening of offensive material” states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In other words, Facebook is not considered a “publisher,” and therefore is not liable for the content it hosts — no matter how treacherous. The law clearly needs to be amended to hold social-media platforms just as accountable as book or newspaper publishers.

Ironically, it is the same section of the Communications Decency Act that specifies the following two aspects of U.S. policy: “to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material” and “to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.” Why should “trafficking” in terrorism not apply here, as well?

Still, it is the Antiterrorism Act that Tolchin is invoking in the renewed suit against Facebook. If this time the court rules in favor of the plaintiffs, Facebook will be forced to pay compensatory damages — an important step toward making the social media giant fear future financial repercussions over its lack of vigilance against terrorists. Whether criminal charges will ever be brought for providing “material support or resources” to groups acting, and spurring others to act, on their calls for mass murder remains to be seen.

Ruthie Blum is the author of “To Hell in a Handbasket: Carter, Obama, and the ‘Arab Spring.’”

© 2018 Gatestone Institute. All rights reserved. The articles printed here do not necessarily reflect the views of the Editors or of Gatestone Institute. No part of the Gatestone website or any of its contents may be reproduced, copied or modified, without the prior written consent of Gatestone Institute.