The fake news story that fooled even Maggie Haberman

A screenshot of a (cached) deleted tweet by Maggie Haberman, who first tweeted a story from a fake news website and then deleted it after her followers pointed out that Snopes.com had already flagged it as fake.

We all believe we’re too savvy to fall for fake news stories, but it keeps happening to the best of us, every day. You can be Maggie Haberman, the NY Times political reporter, and fall for it. In fact, the busier you are, the easier it is, because platforms such as Twitter, Facebook, and Google don’t help us quickly evaluate the content we are being served. To Ms. Haberman, a notorious multi-tasker, it happened after Snopes.com had already debunked the story she tweeted. This is because Twitter, unlike post-2016-election Facebook, has yet to start collaborating with fact-checkers to surface such useful information to its overwhelmed users.

But relying on third-party fact-checking may already come too late. In this case, the story was published on June 30, but the Snopes rebuttal came on July 2, two days after the story had gone viral on Facebook and Twitter. After two days, most people have already moved on to other stories. What might work instead, as demonstrated by an experiment by Georgetown professor Leticia Bode, is immediately following the story with a link to a trusted source that contradicts the false information. This approach, however, works best when the false claim has been circulating for a long time, for example: no, vaccines don’t cause autism; no, climate change is not a hoax.

What can one do when a fact-check for a story doesn’t exist yet? I think the solution is simple: the platforms must surface signals that make it easy for us to verify the credibility of a web source before we start spreading its content. I wrote about the idea of “nutrition fact labels” in a previous post, and the rest of this article explores more signals to display to users at the moment they see a story.

The anatomy of a fake news website

In research dating back to 2010, my colleague Takis Metaxas and I documented the first Twitter bot-enabled political misinformation attack, orchestrated from a website, coakleysaidit.com, that was registered on the same day (Jan 15, 2010) as the Twitter bots spreading the misinformation. We showed that an easy way to check an unknown website is to verify its domain registration credentials and look at the timing of the registration and the entities involved. If someone had done this (using the Whois service) before the fake story about the mass grave of tortured black men went viral, they would have discovered that the website, JacksonTelegraph.com, was registered on June 23, 2017, only one week before the KKK story was published, under the name of a “Bob Smith” living in Singapore (see annotated screenshot below). Even the human checkers of Snopes.com missed this fact. Instead, they wrote that JacksonTelegraph.com appears to be a fake news website because its “contact” page provides no physical address, no telephone numbers, and no listing of editorial or business staff (or other personnel).

Screenshot from whois.icann.org, a web service that lets anyone check the registration credentials of any website. It provides information about the person or company that registered the name of the website, on what date, and for how long. A one-year registration (like the one for JacksonTelegraph) is the minimum registration term, and can be a signal of a short-term operation, often found in scam websites.
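The Whois check described above is easy to automate. Here is a minimal sketch, assuming the public WHOIS protocol (a plain-text query over TCP port 43) and the Verisign server that answers for .com domains; the field name “Creation Date” is what that server’s responses use, though other registries may format theirs differently:

```python
import socket

def whois_lookup(domain, server="whois.verisign-grs.com"):
    """Query a WHOIS server directly over TCP port 43 (the WHOIS protocol)."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    return response.decode(errors="replace")

def creation_date(whois_text):
    """Pull the 'Creation Date' field out of a raw WHOIS response, if present."""
    for line in whois_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "creation date":
            return value.strip()
    return None
```

A call like `creation_date(whois_lookup("jacksontelegraph.com"))` would have surfaced the June 23, 2017 registration date in well under a second.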

Manually checking the origin of a website’s domain takes time, and most web users may not even know that it’s possible or what it means. But for the algorithms powering Twitter and Facebook, this check costs a fraction of a millisecond. It should be their responsibility to display information that warns users about the credibility of a web source. Even an old-fashioned, one-person-operated website such as Scam Adviser can tell you to be cautious about a site like JacksonTelegraph, based only on its structural features and without using any complicated AI to fact-check the content of its stories. Why would this be technically difficult for Twitter and Facebook? Technology can do these checks for us. It is the platforms that have to decide whether they want to support their users or burden them with the task of manually verifying the credibility of web sources.
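To make concrete how cheap such a structural check is, here is a sketch of the kind of timing-based heuristic a platform could run. The thresholds are illustrative assumptions of mine, not values used by any real platform, which would tune them and combine many more signals:

```python
from datetime import datetime, timezone

# Illustrative thresholds (assumptions, not any platform's actual policy).
MAX_AGE_DAYS = 30    # a domain younger than a month is a warning sign
MIN_TERM_DAYS = 400  # a bare one-year registration suggests a short-term operation

def credibility_flags(created, expires, now=None):
    """Return warning flags based purely on a domain's registration timing."""
    now = now or datetime.now(timezone.utc)
    flags = []
    if (now - created).days < MAX_AGE_DAYS:
        flags.append("recently-registered")
    if (expires - created).days < MIN_TERM_DAYS:
        flags.append("minimum-length-registration")
    return flags
```

Fed the JacksonTelegraph dates (registered June 23, 2017, for one year) on the day the story went viral, this returns both flags: the domain was seven days old and held the shortest possible registration.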