Big Tech Is the New Face of American Imperialism
The reins are completely off, and the American legislature seems unwilling to step in.
Major outlets have built up a consistent body of reporting on online radicalization after a string of terror attacks were shown to be motivated by the doctrine of the American alt-right. What’s lacking from these reports, however, is the sheer global scale at which these ideas are broadcast. The platforms housing these radicals? They’re American, and any hope of holding them accountable for damage done worldwide is growing progressively fainter.
The alt-right especially felt right at home on YouTube, and by all measures of engagement, their channels still best even the most popular of BreadTube fixtures. Political YouTube is in a constant tug of war over which side retains dominance, and as much as it is a battle of ideas, it is also a battle for the soul of YouTube — a willingness to prove good behavior can rise above the most primitive impulses of hatred and spite.
One such participant in this fight is YouTuber Quetzal. He’s on the side of BreadTube, and he’s taken it upon himself to help stop the spread of nefarious extremist ideology in large parts of South America and his home country, Spain. He started out covering video games, but soon enough, he realized he could put forth more of his political side after a video covering Hannah Arendt’s banality of evil in the context of gratuitous violence in video games performed well above his most generous expectations. He then went on to carve out a dedicated audience of the politically conscious who — like him — were disillusioned with the way social media surfaced its most problematic voices while leaving progressive political currents to drown amid their rapid spread.
As the story familiarly goes, Quetzal was not happy about the ideological spot the right wing was carving out on YouTube. “I know many of my subscribers who had to deal with the hate these people create,” he told me, speaking about Agustín Laje — a Ben Shapiro-type political pundit of sorts — and Nicolás Márquez, who have cemented a brand of popularity in a South America whose record on LGBT+ rights and socially progressive public policy is spotty at best.
“Cultural Marxism” was a favorite of theirs — a coded dog whistle for an antiquated Nazi conspiracy theory that singles out the “collective decadence of moral values” within a society as pretext for the erosion of national identity. Quetzal, by virtue of being one of the very few to oppose them ideologically, took it upon himself to meticulously break down the term’s problematic roots. The video made use of Nazi imagery for illustrative purposes, and after what only seems to fit the blueprint of a mass-flagging campaign, YouTube took down his video and slapped a strike on his channel for hate speech. Quetzal tried to dispute the strike, but YouTube wouldn’t budge even after manually reviewing the claim. It wasn’t until his concerns were escalated to traditional media that his calls for a modicum of justice were finally answered.
In talking to Quetzal about his experience with the platform, his suggestions for improving it aren’t all that dissimilar from what advocates for ethical tech are already campaigning for. “If they hired more humans they could curate better,” he told me. He’s also not surprised this falls in line with YouTube’s bottom line, saying that “algorithms taking most of the work means less employment and worse service.” And he’s not mistaken. Time and time again, the algorithms designed to keep the internet safe instead favor extremist content to keep users engaged, all the while creators on the opposite side of the political spectrum spend their days agonizing over whether any mention of an extremist ideology — outside the context of radicalization — could take down their videos, with no sign of reprieve on the horizon.
The problem of online radical speech doesn’t limit itself to well-authored propaganda, however. Elsewhere, the issue is more systemic: Facebook — a platform that has long prided itself on a tradition of connecting long-lost friends — has become a vehicle that accelerates the spread of harmful stereotypes about entire groups of people, and the pace at which democracies are toppled.
The Rohingya crisis is a perfect example of a story starved of oxygen by American mainstream media’s focus on the ever-so-elusive Russia probe, but its consequences are no less indicative of Big Tech’s grave incompetence. Since both cause and effect remain largely foreign, Facebook feels no true remorse for the beastly machine of bigotry it allowed to grow. The rot of corruption rose well through the ranks of the Myanmar Army, with the New York Times’ tech columnist Paul Mozur analogizing it to Russian influence in the 2016 American presidential election:
The campaign in Myanmar looked similar to online influence campaigns from Russia, said Myat Thu, a researcher who studies false news and propaganda on Facebook. One technique involved fake accounts with few followers spewing venomous comments beneath posts and sharing misinformation posted by more popular accounts to help them spread rapidly.
This online-led campaign was rife with radical rhetoric, and it caused the Muslim Rohingya community in Myanmar grievous harm. The army was practically using the platform as a hotbed for recruiting radicals to its cause in one of the very few successful state-sponsored online radicalization campaigns. It was so bad, in fact, that the United Nations released a report in August 2018 condemning Myanmar Army generals and suggesting they be prosecuted for genocide against the Rohingya. “The social media platform has been called ‘the de facto internet’ in Myanmar, skyrocketing in popularity in recent years, partly because the military eased up on censorship and because of the relative affordability of smartphones,” said Jen Kirby, foreign and national security reporter for Vox. Just months prior to this scathing report, Mark Zuckerberg told Ezra Klein of Vox that Facebook was hard at work remedying these issues, but as precedent has it, the measures have been largely ineffectual.
A year on, hatred still followed some of the Rohingya well into their refuge. Thenmozhi Soundararajan, founder of Equality Labs — an organization documenting racially motivated violence in South Asia — told the New York Times’ Vindu Goel that “[she thinks] Facebook keeps thinking they can solve this within the bunker of their offices and not with the collaboration of the communities who are affected.” She further noted that despite warning Facebook about signs of anti-Rohingya hate speech brewing in India, the company did little to stem its growth — an observation which only cements a recurring pattern in Facebook’s failure to atone for its gravest sins.
And as if the landscape weren’t already battered enough by Facebook’s and Google’s overbearing ineptitude, Twitter has morphed into its own behemoth of a hate-spreading machine, where users are expected to contend with ludicrous suspensions over faux-slurs like “TERF” while watching their abusers get away with far worse.
The platform’s hate speech policy has been largely shaped by Jack Dorsey’s very thinly veiled pandering to the alt-right. Twitter’s CEO has posited that the movement is nothing more than an ardent political faction that deserves to be heard as much as its leftward counterweight. That false equivalence extends only so far, however — Dorsey reportedly overruled staff on decisions to kick far-right conspiracy theorist Alex Jones and famed white supremacist Richard Spencer off the platform, a move that has since put a dent in public confidence in his ability to lead.
The Twitter CEO’s personal positions on the regulation of speech on Twitter are not without consequence. Accounts posting misinformation — which often goes viral — are abundant, and some of them, while seemingly small, hold great sway over public opinion, acting as a testing ground for extremist ideologues to see just how far the platform is willing to let them go. On Twitter, the relative leeway afforded to problematic creators on YouTube is extended further still, with hate speech taking center stage. It was so bad, in fact, that potential investors cooled on the company, with Salesforce bailing on a much-anticipated bid to buy it in late 2016. Hayley Tsukayama, consumer technology reporter for the Washington Post, chalked it up to the platform’s spotty record of addressing hate speech and harassment, saying that “arguably Twitter’s biggest issue has been its ongoing struggle with harassment, abuse and just general bad behavior on its network.” It’s hard to disagree when the tenor of conversations on the platform so often takes an incendiary turn.
That, as on Facebook and YouTube, has seen its warts burst open on a global scale — especially where Twitter is most popular. Whereas Facebook and YouTube hold broad appeal largely agnostic of culture and social status, Twitter tends to lure in entire legions of youngsters. A generation more attuned to the sensibilities of a changing social media landscape has flocked to consume its usual diet of memetic delights — and a not-so-uncommon stream of undue attacks on the underprivileged. Accounts like the francophone “AuBonTouiteFrançais” constantly spread messages with the explicit intent of undermining France’s Maghrebi minority, as well as its Black constituency. Since much of the African continent is French-speaking owing to France’s brutal colonial past, these messages’ deeply disturbing implications rarely go unnoticed.
To underscore just how much of America’s anti-immigrant rhetoric is repackaged for other contexts with similar racial tensions, the account deceitfully misquoted Trump’s call for four congresswomen — Alexandria Ocasio-Cortez, Ilhan Omar, Rashida Tlaib, and Ayanna Pressley — to “go back” whence they came. The account doesn’t limit itself to occasional commentary on the state of American politics. It also indulges in blatant attacks on minorities, citing — among other things — a common dog whistle in white supremacist circles that immigrants are out to pervert the nation’s demographic makeup by raping minors. A low-hanging fruit even for fans of the white genocide conspiracy theory.
Populist politicians have also found ways to use the platform to further their political message. Marine Le Pen — leader of the National Rally party, who came surprisingly close to beating Macron on an anti-immigrant, anti-EU platform — recently tweeted a video message claiming that Algerians celebrating the feats of their national football team were committing “anti-French acts” and “marking their territory,” citing a “failure to integrate” French values as the reason acts of civil disobedience were carried out across the country. Nicolas Dupont-Aignan went a step further, dropping a bombshell of a comment: “if you prefer Algeria, if Algeria is better than France, return to Algeria.” This is not the first time a staunchly nationalistic stance has been taken against immigrants by the RN’s leader, or by Dupont-Aignan for that matter. And yet, even as Twitter claims to have enacted a policy whereby public officials’ tweets are hidden or outright removed if they present a threat to public safety, the content remains readily viewable, despite the real potential for racially motivated altercations to occur as a result.
But the French far right isn’t alone in exploiting Twitter’s lax moderation to push radical rhetoric. Public officials from Brazil’s Jair Bolsonaro to India’s Narendra Modi, Iran’s Ali Khamenei, and Israel’s Benjamin Netanyahu, accounts of political parties such as the far-right Alternative for Germany, sockpuppet accounts for terrorist organizations like ISIS, and military bodies like the IDF constantly post what is essentially far-right propaganda on the platform with next to no repercussions. The majority of these accounts are verified and allowed to operate with little interference, and the replies below their tweets are a hotbed for recruiting extremists using language laden with racist undertones — something Twitter’s hate speech policy, for all its purported exactitude, does not acknowledge.
This is but a small sliver of what happens outside the boundaries of Americentric mainstream media, but it’s sufficient evidence that the internet age hasn’t only laid siege to minorities’ rights where coverage of online radicalization’s harmful effects is most prevalent — it has also done so where voices decrying it are heard the least. There’s real harm being done out there, and with the Trump administration seemingly uninterested in pursuing any useful measures against the spread of hate speech in America, there’s no telling whether hate speech directives in countries where the First Amendment is but a historical reference point will be respected.
Whether it’s ideologues spreading propaganda on YouTube, cultural blind spots of American platforms being exploited to advance deeply troubling messaging about minorities, or politicians using Twitter’s instant delivery to broadcast harmful rhetoric, there seems to be no escaping online spaces where constant exposure to negative messaging against minorities can radicalize. More troubling still, these trends span the entire globe — outside of America, in the West, and beyond — and the only body of government in any position to rectify them is seemingly unwilling to do much of anything to shift course.
Trump’s White House recently held a social media summit whose attendee list included no voice of academic authority on social media, instead spanning the wider gamut of right-wing trolls and the usual suspects crying foul about conservative censorship on social media. The list included an outstanding number of conspiracy theorists and right-wing ideologues whose accomplishments — or lack thereof — range from covertly aiming to delegitimize the child predation allegations against Roy Moore, to concocting another wave of birther allegations against presidential hopeful Kamala Harris, to making blatantly anti-Semitic cartoons.
Since the current White House seems to prize the contributions of those whom many experts deem to be at the very core of the problem social media is experiencing, it’s unlikely that any reform of online speech regulation will be introduced, even as it continues to wreak havoc with an indifference to national sovereignty paralleled only by American foreign military might. It is a different type of imperialism — one no single party is responsible for, but one many actors learn to exploit against the downtrodden wherever leadership is oblivious to their needs. The conversation around online radicalization has to reckon with the weight of the decisions American legislators and Big Tech companies have made over the last decade to deprioritize and undermine the fight against home-grown terror spread using tools made by a country whose commander-in-chief’s moral compass is deeply questionable. And unless the pain and suffering of everyone — regardless of the tongue they speak, their religious affiliation, or the cultural affect they exert — are accounted for, the solutions to an ever-worsening situation will only benefit a global few.