Extremists may have infiltrated social media companies like Facebook and YouTube, taking moderation roles in order to advance white nationalist extremist agendas.

Facebook, the beloved company behind the Oculus Rift, is now under global suspicion for possible connections to extremist groups. That suspicion stems from its slow takedown of the live-streamed terror attack at the Christchurch mosques, combined with the fact that, prior to this event, Facebook had allowed white nationalist extremist content on its platform.

Photo from the Calgary Sun after the New Zealand terror attacks

The Christchurch mosque terror attack was live-streamed on Facebook. Fifty people were killed in the New Zealand terror attack, and the video managed to persist on Facebook and YouTube for more than twenty-four hours.

CBS News reported:
“As New Zealand reels from a terrorist attack against two mosques in Christchurch, Facebook announced it deleted 1.5 million videos of the shootings in the first 24 hours following the massacre. The tech company said in a tweet late Saturday that it prevented 1.2 million videos from being uploaded to its platform, which has more than 2.2 billion global users.

“However, it implies 300,000 versions of the video were available to watch for at least short periods of time before Facebook nixed them. It also reveals how quickly such provocative and graphic content circulate online and the challenges facing social media companies such as Facebook have as they try to stamp them out.”

On March 30, 2019, an article in The Sun said, “Executives at social media giants, such as Facebook, could be jailed if firms fail to quickly take down violent and terror-related content.”

Shortly thereafter, on April 2, 2019, a HuffPost article showed that white nationalist extremists were still on Facebook, making the same kinds of arguments as those that appeared in the manifesto written by the terrorist in the Christchurch mosque attack.

HuffPost reported: “‘They don’t need the new policy to kick off Faith Goldy ― she’s been posting this content for years, and it broke the old policy, too,’ said Evan Balgord, executive director of the Canadian Anti-Hate Network.”

Only days after the terrorist attacks were live-streamed on Facebook and YouTube did Facebook change its policy to prohibit white nationalist extremism on its platform.

The Facebook managers who played a role in allowing white nationalist extremist content to remain on the platform until April 2019 are under global suspicion for potential ties to extremist groups. Who are these moderators, their managers, and potentially even the executives who may have deliberately allowed white nationalist extremists onto the Facebook platform in the first place?

According to The Hill, “Facebook is facing escalating scrutiny over white extremism on its platform after a shooter in New Zealand livestreamed himself shooting worshippers at a mosque last month.”

Extremists and their supporters belong in prison; they should not hold jobs at social media companies in positions that allow them to support other extremists.