Big Tech’s Extremism Problem 2.0

Rioters breaching the US Capitol Building on January 6th.

On January 6th, the world looked on in horror as a group of extremists stormed the US Capitol Building, breaching it for the first time in over two centuries.

For weeks beforehand, there were online signs of discontent over Trump losing the election (or, as many of his supporters believe, having it stolen from him) and of open planning for this event to "Stop the Steal."

In the face of this obvious threat, Big Tech companies and the US government did nothing, despite Floridi's point that an information society brings a "substantial erosion of the right to ignore" and of the ability to "claim ignorance when confronted by easily predictable events."

As Floridi further mentions, soon the “distinction between online and offline will become blurred and then disappear.” This wasn’t the first instance of online extremism spilling over into real life, and it won’t be the last.

The large social media companies have willingly hosted increasingly extremist talk. Ideas like those of QAnon were born and have since flourished online. The ability to espouse such views on popular sites under the guise of "free speech" normalizes and legitimizes them in the eyes of believers. Today, thousands of Americans believe at least some of the tenets of QAnon, and many of the Capitol rioters could be seen waving 'Q' flags.

A QAnon flag at the US Capitol riot.

At some point, tech companies are going to have to take more drastic action than just removing tweets, pages, or accounts spewing hate and violence.

Regulating rhetoric is a tricky topic in a country founded on the principle of free speech, and the First Amendment bars the government from getting involved in most cases. Tech companies, however, face no such restriction and need to strengthen their terms of service to unequivocally prohibit potentially dangerous political misinformation. They also need to act quickly to remove offending posts from their sites, not merely label them as misinformation.

Tech companies also need to fundamentally rework the way their algorithms serve content to prevent the explosive spread of extremist views. Current algorithms are designed to hold the user's attention by provoking strong emotions, so someone who interacts with a post expressing an extreme view will be served ever more posts on the same topic. Similar ad-targeting algorithms mean that someone in a "Stop the Steal" Facebook group filled with election misinformation may be shown an ad for combat gear (1).
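To make that feedback loop concrete, here is a deliberately simplified sketch in Python. It is a toy model, not Facebook's or any platform's actual system, and every name and number in it (the `Post` and `User` classes, the `score` function, the affinity values) is an illustrative assumption: a ranker scores posts by predicted engagement, and each interaction nudges the user's affinity for that topic upward, so the next feed contains even more of it.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str
    emotional_intensity: float  # 0..1, how strongly the post provokes a reaction

@dataclass
class User:
    topic_affinity: dict = field(default_factory=dict)  # topic -> learned interest

def score(user: User, post: Post) -> float:
    """Predicted engagement: prior interest in the topic times emotional pull."""
    return user.topic_affinity.get(post.topic, 0.1) * post.emotional_intensity

def rank_feed(user: User, candidates: list[Post], k: int = 3) -> list[Post]:
    """Serve the k posts the model predicts will hold this user's attention."""
    return sorted(candidates, key=lambda p: score(user, p), reverse=True)[:k]

def record_interaction(user: User, post: Post) -> None:
    """Each click, like, or share raises the user's affinity for that topic."""
    user.topic_affinity[post.topic] = user.topic_affinity.get(post.topic, 0.1) + 0.5

if __name__ == "__main__":
    user = User()
    candidates = [
        Post("election fraud claims", 0.9),
        Post("election fraud claims", 0.8),
        Post("local news", 0.4),
        Post("gardening", 0.2),
    ]
    # One click on an inflammatory post...
    record_interaction(user, candidates[0])
    # ...and the top of the next feed is saturated with the same topic.
    print([p.topic for p in rank_feed(user, candidates, k=2)])
    # -> ['election fraud claims', 'election fraud claims']
```

Even in this toy version, nothing in the objective asks whether the content is true or safe; the only signal being optimized is engagement, which is precisely the design choice the paragraph above criticizes.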

Big Tech needs to stop tiptoeing around the fact that extremist views are not merely harbored on these platforms but inflamed by them. It's time for these companies to fundamentally reexamine their purpose and place in society.

1. Mac, Ryan. “Facebook Is Showing Military Gear Ads Next To Insurrection Posts.” BuzzFeed News, 20 Jan. 2021, www.buzzfeednews.com/article/ryanmac/facebook-profits-military-gear-ads-capitol-riot.
