Russia Deserves Thanks For All They’ve Done For America
Russian state-sponsored interference in U.S. social policy has done more good than harm

In the lead-up to the 2016 presidential election, Russia used troll farms, bots, and imposter accounts on social media to divide Americans. We now know, per a joint statement from the Department of Homeland Security and the Director of National Intelligence (DNI), that “only Russia’s senior-most officials could have authorized” the interference tactics meant to disrupt the 2016 presidential election.
According to the declassified version of the Intelligence Community Assessment (ICA) from the DNI, Russian President Vladimir Putin ordered an influence campaign in 2016 with several goals, among them “to undermine public faith in the U.S. democratic process.”
The orchestrated strategy, executed primarily by the Russian state-sponsored Internet Research Agency, ran misinformation campaigns, weaponized social media, stoked the flames of racism, and infringed on the privacy of Americans.
Yet Russia’s interference in the election has done more good than harm. It got us off our complacent rear ends to demand more transparency and accountability from big tech, jump-started the identification of inauthentic behavior online, and pulled back the curtain on the darkest corners of our social spheres.
Thanks, Russia!
Misinformation unchecked
Baseless lies were allowed to run rampant under the guise of “equal time” on networks, and misinformation online went largely unchecked in 2016. Few knew that they were interacting with or being influenced by bots, automated accounts posting content meant to simulate humans. Researchers estimated that roughly 3.8 million tweets during the 2016 election came from bots alone.
For years, social media platforms had refrained from rigorous content moderation, citing no desire to curtail user content. They merely provide the platform, they said. But after 2016, social media could no longer remain silent or complicit in the matter. “There’s no right to free speech on Twitter,” said Harvard Law School’s Noah Feldman in a Wired piece that questioned whether the platform should be regulated under First Amendment protections.
After an about-face, the clampdowns have been far-reaching, now extending even to domestic threats. The platforms work to identify “inauthentic behavior” and purge trolls, bots, and misinformation campaigns. During the 2020 presidential election, Facebook took down ten networks, comprising some 200 accounts, 55 pages, and 76 Instagram accounts, that were attempting to influence the outcome. Twitter set up “speed bumps,” a form of warning label, to alert users about disputed tweets before they clicked through to the posts.
These new policies, meant to keep deliberate misinformation from spreading virally, apply to everyone, no matter how prominent the figure. Dr. Scott Atlas, an advisor on the White House coronavirus task force, had a tweet removed by Twitter in October for violating the platform’s rules on Covid-19 misinformation after he claimed that masks don’t work to stop the virus’s spread.
Even President Donald Trump, the leader of the free world, has had tweets fact-checked and labeled with disclaimers. Newsweek reports that twenty-five percent of the President’s tweets have been flagged. As a result, many of his messages carry the warning, “This claim about election fraud is disputed,” or similar disclaimers.

Dr. Atlas resigned from the President’s task force (or his contract was not renewed, however you want to spin it) on December 1st. On January 20th, Trump becomes a private citizen; Twitter has confirmed that he will lose the @POTUS account, and his content will be subject to the same scrutiny and monitoring as any other user’s, to ensure truthfulness.
Change online hasn’t been limited to Twitter. Community moderation allowed Reddit to squash QAnon almost by accident in 2018, due mostly to the platform’s rules against online harassment. In October, Facebook banned QAnon as well. Facebook now also bans Holocaust denial pages and anti-vaccination advertisements.
Thanks, Russia!
Big tech responsibility
When the covers were pulled back on the Cambridge Analytica scandal and it was revealed that Facebook users’ data had been harvested for political targeting during the 2016 presidential campaign, data privacy came to the forefront. Cambridge Analytica used seemingly benign Facebook surveys to surreptitiously collect the personal information of nearly 87 million users and their extended social connections, most without their knowledge. Suddenly, the handling of user privacy by big tech and social platforms was under scrutiny.
Calls for stringent digital privacy protections grew after 2016. By May 2018, all organizations handling Europeans’ data were required to comply with the GDPR, the “toughest privacy and security law in the world.” Under the GDPR, any company that targets or collects data related to people in the E.U. must comply with its data privacy regulations and security standards, regardless of whether the organization is based in the E.U. Organizations that run afoul of the law face penalties of up to €20 million or 4 percent of global annual revenue, whichever is higher.
The California Consumer Privacy Act (CCPA) is a close U.S. equivalent of the GDPR, and other states are likely to follow, because there is now heightened awareness of how data is collected and used.
In 2018, Facebook founder Mark Zuckerberg faced a grilling before Congress about how extensively Russia had meddled using his platform and whether the company censored voices. Around the same time, Facebook announced changes: advertisers’ access to data from third-party brokers was revoked, and, in a move meant to support transparency, users could now download all the data Facebook had collected on them. At the time, privacy experts argued that even more regulation was needed.
Flash forward to 2020. The largest tech companies, Apple, Facebook, Google, and Amazon, were called to testify in oversight hearings meant to determine whether they have grown too big and powerful. In a statement, Representative David Cicilline (D-RI) said that an investigation had been exploring the “dominance of a small number of digital platforms and the adequacy of existing antitrust laws and enforcement.”
The subcommittee’s report could yield recommendations for regulatory or legislative action. For now, we primarily have big tech self-regulation and the E.U.’s Cookie Law. The popups across websites are annoying, and we read those “accept all cookies” notices with the same rigor as an Apple iOS update agreement, but they’re there nonetheless. And it’s a start: a bit of transparency about data collection practices and usage that did not exist in 2016.
Thanks, Russia!
Racism, calls for violence, and the fringe unveiled
“The point of a disinformation operation is not to create new rifts in society — it’s to drive the rifts that are there further apart,” says Graham Brookie, deputy director of the Digital Forensic Research Lab at the Atlantic Council, a Washington-based international relations think tank.
As such, troll factories did not invent racism, but they brought it into the open with campaigns meant to stir controversy through divisive posts. Did they create the fringe? No. They merely made it “acceptable” with every like, retweet, and threaded comment, with a great assist from prominent figures and anonymous forums.
Rather than sending bigots scurrying like cockroaches into the darkest crevices, trolls made people more comfortable with their hostility, stoking the embers of social mob mentality and granting each threaded reply permission for even more incendiary commentary. All of this is enabled by a blanket of digital anonymity and the ease of rendering opinion by keystroke.
After years of challenges, some platforms began to self-govern. Trump was temporarily suspended from the streaming video site Twitch in June for violating its rules against “hateful conduct.” And, after years of complaints, Reddit took down a group on its platform that glorified violence and misogyny.
No longer in the shadows, our neighbors, colleagues, and family members can now be seen openly sharing and commenting on racially insensitive or inflammatory messages. It has become so mainstream that the communications once reserved for anonymous Twitter threads or closed Facebook groups have spread to LinkedIn. Vitriol that might otherwise have stayed hidden now plays out in living color on professional networks, through likes, follows, comments, and retweets.
This, too, is a good thing. I need to know if a colleague dons a tinfoil hat at the end of the workday and believes that Bill Gates developed the coronavirus. It’s important for me to know if he is abusive or combative toward other working professionals. I must be aware of my neighbors’ wishes for violent harm on those with opposing views. It’s essential that I know if grandma still believes that President Obama doesn’t hold a U.S. birth certificate.
Thanks, Russia.
So this holiday, kudos to Russia. If not for their organized social media attacks, we might never have pulled back the curtain on these previously ignored issues. We might not have required tech companies to scrutinize the user-generated content on their platforms more rigorously. We might still be allowing conjecture to pass for fact. We might not have known that grandma is racist.
Because we cannot understand, talk about, or, when necessary, avoid these people unless we know about the beliefs they harbor. It’s not perfect, but it’s a start.
In one final bit of irony, at least one of the primary objectives of Russia’s orchestrated attack to sow American discord has seriously backfired. Consider that this year, nearly 160 million Americans voted in the presidential election, more than in any other in U.S. history. Despite being Russia’s “clear preference,” per the DNI, Donald Trump was voted out of office, and overwhelmingly so, by a margin of over 7 million ballots (which was predicted here, by the way).
Thanks, Russia.