For anyone concerned about the spread of mis- and disinformation, the U.S. midterm elections on November 6, 2018, were seen as a test of whether, in the past two years, we’ve learned anything about how to deal with them.
Good news: we kind of have! Unlike in 2016, this election cycle did not have a huge spike in misinformation.
That’s not to say that there wasn’t any misinformation floating around. Among other things, there were claims by President Trump that illegal immigrants would vote in droves, there was a post from the North Dakota Democratic Party suggesting that state residents could lose their hunting licenses in other states if they voted in North Dakota, and there were ads from a mysterious Facebook group that used images and, in some cases, fabricated quotes from Green Party candidates to convince progressives to turn away from the Democratic Party.
But it could have been a lot worse. The night before the election, Facebook shut down 115 accounts for suspected “coordinated inauthentic behavior” linked to foreign groups trying to interfere with the midterms. The month before, it removed 82 pages, accounts and groups aimed at stirring up social strife in the United States.
Facebook and Twitter were much more vigilant about stopping the spread of misinformation this election cycle, but the nature of misinformation itself has fundamentally changed in the past two years. As CNN’s Brian Stelter wrote a few days before the election:
“Are midterm voters being fooled by made-up stories? I’ve been talking with experts and scouring social media websites for answers. My impression is that the specific “fake news” problem is less pronounced this election season. But the threats have morphed and multiplied.
Here’s what I mean: I’m not seeing simplistic “Candidate X said Y” lies showing up in the newsfeed. Facebook has staffers and machines and fact-checking partnerships in place to reduce that pollution. Twitter has been taking action too.
But the online environment is still polluted.”
About a week before the election, we decided to take a look at one aspect of that pollution: the most shared midterm-related news stories on social media.
Last month, my organization, the First Amendment Center at the Freedom Forum Institute, in partnership with media start-up Our.News, released an online tool called Newstrition to help consumers fact-check the news themselves.
Our development of Newstrition was strongly influenced by research that demonstrated that the best fact-checkers read laterally — meaning they evaluate a story’s credibility by getting context and perspective from other sites, instead of just staying within the original website in question. Our goal was to make that context and perspective easily accessible to someone browsing the news online.
Newstrition gives users quick, verified information about who published a particular article. It also displays the sources the article links to, presents other sources that support, debunk or provide context, and reports whether other Newstrition users think the story is news, opinion, clickbait or satire. One thing it doesn’t do? It won’t tell you whether the article, or the media outlet behind it, is good, bad, or trustworthy. One reason is that we believe making those judgments doesn’t persuade people we’re right so much as it causes them to double down on whatever they initially believed. The other reason is that making those judgments is really, really hard.
More on that in a bit.
Last week we decided to use Newstrition’s tools ourselves to take a look at the most shared articles about the 2018 midterm elections, based on metrics from BuzzSumo and Social Animal. Who were the publishers behind them? What kind of content was being shared? News? Opinion pieces? Full-blown hoaxes? We aggregated general news about the midterms as well as news specifically pertaining to five close Senate races — Arizona, Florida, Nevada, North Dakota, and Texas.
This wasn’t an in-depth analysis of the entire online information ecosystem. We didn’t look at political ads or viral posts and tweets, but limited ourselves to articles that had the most engagements on social media on any given day. What follows are simply my own observations about the types of stories that made the rounds on social media about the midterms in the week leading up to them.
Content from lesser-known publishers can rack up a lot of engagement, even when users have no idea what kind of publisher it is
Unsurprisingly, a lot of the most shared content was from well-known national media outlets like CNN, Fox News, The Washington Post, and Breitbart (or in the case of specific Senate elections, well-established local media outlets). But social media still affords plenty of opportunities for articles from lesser-known media outlets to go viral. That’s not necessarily a bad thing, but it does mean that users aren’t always fully aware of what they’re sharing.
Take this story from the Babylon Bee:
In case you haven’t heard of the Babylon Bee, here’s their Newstrition profile:
According to the metrics site Social Animal, the story was engaged with 28,000 times on Twitter and 18,000 times on Facebook. Were all of these people aware that this wasn’t a real story and just in the mood to pass along some lighthearted evangelical satire?
Based on this Facebook post, probably not all of them.
Just to be clear: the Babylon Bee isn’t a fake news or hoax site, and it has plenty of fans who immediately knew that this wasn’t a real headline. If you look at its website, it doesn’t really hide the ball about being satirical. (Other headlines include, “Study: 100% Of Elections Are ‘The Most Important Election Of Your Lifetime’” and “Rookie Move: Non-Giver Makes Eye Contact With Church Usher.”) But my guess is that it’s not well known enough for everyone to instantly get that it’s a sort of Christian version of The Onion.
Outright hoaxes go viral far less often now, but misleading headlines are still going strong
One thing I came across quite a bit was stories whose headlines are a lot more salacious than the actual articles. (Within Newstrition, we categorize this as clickbait.)
Alas, the actual article’s a lot less exciting than the headline. A quick excerpt:
Biden attacked Senate candidate Rep. Kevin Cramer for comments in which he labeled concerns by farmers over President Trump’s trade policies “hysteria,” adding they “don’t have a high enough pain threshold.”
In quoting Cramer, the former Vice President turned to a thinly disguised threat.
“Your guy calls farmers’ concerns ‘hysteria,’” he said of Heitkamp’s opponent. “Because, he says, ‘they don’t have a very high threshold for pain.’”
“Well, I’ll get the president of the trade union up here. He’ll show him a threshold of pain,” Biden said.
Three days. That’s how long it took for Biden to morph from Uncle Joe, the man who just wants to bring dignity and civility back to politics, to UFC Joe.
The publisher behind this article, the Political Insider, has occasionally been called out by fact-checkers like Snopes and PolitiFact for publishing false information. So, it’s not the most reliable site. But in this case, they’re not misquoting Biden (this is one reason why it’s tricky to make a definitive judgment on whether a publisher is trustworthy or not). Is it a stretch for them to characterize Biden’s words as a real threat? Sure, but I don’t think the author’s actually trying to establish that Biden’s a homicidal maniac. The point he’s trying to make, in his own words, is this:
“The entire [Democratic] party consists of a bunch of wannabes desperate to pretend they’re just as bada** as Trump, while simultaneously pretending they are the party of civility. They are neither civil nor convincing as tough guys.”
Whether you agree or disagree with that premise, the story’s more political commentary than it is fake news. Which is fine, but…the headline promised me an ARMY of GOONS!
I’d say that a more accurate headline would be, “Joe Biden’s comments about showing Rep. Cramer ‘a threshold of pain’ at odds with his rhetoric about civility.” But then again, nothing I’ve written has ever been shared 2,000 times.
Another example of a misleading headline, this one aimed at a Republican candidate, and with less in the way of all caps:
As the full news article explains:
As governor, Rick Scott has been very critical of the “brutal and oppressive” Nicolás Maduro regime in Venezuela, including calling on state investment fund managers to sever ties with firms that do business with that country.
“Any organization that does business with the Maduro regime cannot do business with the state of Florida,” Scott said in Miami last summer.
But Scott and his wife Ann held substantial investments as recently as last year in three firms that have done business in Venezuela: Goldman Sachs, Invesco and BlackRock.
This sounds like hypocrisy of the highest order! But the article goes on:
“The governor…put his assets in a blind trust after he took office in 2011, a decision that was supported by the state Commission on Ethics but which is being challenged in a Florida court.
“The governor had no role in selecting those investments,” said a spokeswoman for Scott’s campaign, Lauren Schenone. “The blind trust is managed by an independent financial professional who decides what assets are bought, sold or changed. The rules of the blind trust prevent any specific assets or the value of those assets within the trust from being disclosed to the governor, and those requirements have always been followed.”
The headline is technically true, but it’s misleading. What it really should say is, “Rick Scott told Florida not to invest in companies linked to Venezuela, but he did, through a blind trust.” Maybe that’s still a problem, but it’s a different one than what’s implied by the current headline. This isn’t an example of bad journalism — the full article provides this nuance and the author appears to have gone through Scott’s 125-page financial disclosure to get it, a fate I would not wish on anyone. The only problem is that nobody actually reads full articles anymore; research shows that 59 percent of the links shared on social media have never actually been clicked.
That leads to another related phenomenon.
Hyperpartisan news may be the toughest problem for platforms, and for all of us
Hyperpartisan news is an interesting thing. It’s not fake news, per se — the events aren’t fabricated, although they’re often sensationalized and viewed through a very specific lens. Take this article from Information Liberation:
Information Liberation is a website that specializes in conservative takes on the news and critiques of the mainstream media. But you probably could have guessed that. The article excerpts and comments on a piece published by the Miami Herald that interviewed green card holders and undocumented immigrants volunteering to help get out the vote: “These immigrants can’t vote — so they’re working hard to influence those who can.”
You may take issue with Information Liberation’s characterization of manning a phone bank as election meddling, or of the Miami Herald as a stand-in for a monolithic media. You can argue with those points, but you can’t really debunk something that’s essentially just opinion. As Claire Wardle, the head of First Draft, says, “[C]urrently there is little the platforms can do with this type of content. It can not be fact-checked in a formal sense and some would argue that this type of content is ‘politics as normal’. What we don’t know is how to measure the drip, drip, drip of these divisive hyper-partisan memes on society.”
My guess would be that the impact of these divisive hyper-partisan memes on our society isn’t great. And it looks like Russia agrees with me! According to a former NSA official (now a cybersecurity threat analyst), “Russian accounts have been amplifying stories and internet ‘memes’ that initially came from the U.S. far left or far right. Such postings seem more authentic, are harder to identify as foreign, and are easier to produce than made-up stories.”
For the record, I don’t think the solution is for Facebook to start banning hyperpartisan content from its platform. (Where would it draw the line between “hyperpartisan” and “acceptably partisan” content? How would it figure out whether something is divisive versus providing a valid minority viewpoint?) I’m of the opinion that the way to fight misinformation isn’t to censor it so much as to give consumers the tools they need to assess its reliability for themselves. Normally I’d say that the solution would be to let consumers know that a publisher has a certain political bias, just so they can keep that in mind. But in cases like this, where the publisher wears its bias on its sleeve, are consumers actually looking for reliability?
The types of articles I’m talking about don’t have misleading headlines. Their headlines are perfectly in sync with what’s in the full article. You can probably predict exactly what one of these stories has to say without even clicking on it.
And that might be the point. This sort of story isn’t really meant to be read. It’s meant to be shared on social media, as a kind of badge of who you are and a signal to others about where you stand.
A few other observations:

- A lot of the most shared content was horse race journalism, meaning extensive coverage of polls and reports of one candidate or another being up a point or down a point. I understand why news outlets produce it (as I said, it gets shared a lot!), but is this useful to anyone? (This is a real question. Do candidates use this information in their campaign strategies? Does it impact voting behavior?)

- Looking at news related to Florida’s Senate race, this article about Jimmy Buffett hosting a free concert to support Bill Nelson and Andrew Gillum got 1,100 engagements on Twitter, 7,800 on Facebook and 22,000 on Reddit. I don’t have a larger point to make. That just seems like a lot.