A Strange Kind of Truth
Or, Hoax Spreading on Social Networks and What To Do About It
--
My fellow digital identity professionals will frown at this, but I have to admit it: I spend far too much time on Facebook.
What can I say? I love it! I love keeping in touch with my buddies back in Italy, reading about the friends I’ve been making around the world, sharing my silly panoramic pictures, and so on.
That’s not to say that I don’t have a few gripes with it, though.
Most are pretty mild: friends still inviting me to events in Pistoia, Tuscany (I have lived in Washington state, USA, since 2005); the various “93% will not repost this on their wall” update types; and the inevitable invites to games & other apps I totally don’t care about. A bit annoying, but harmless.
Some others, on the other hand, drive me up the wall. The worst offender is, by far, the blind reposting of patently false information: hoaxes, urban legends, you name it. It enrages me the way it’s/its and you’re/your substitutions enrage the grammar-conscious.
Urban Legends on Social Networks
Classic examples of the most damaging hoax types:
- a kid had a terrible accident and needs blood; if you are type X, contact hospital Y now!
- a government official/institution/public figure earns more than they deserve, or holds on to outrageous privileges while others suffer injustice
- someone has discovered a miraculous cure for disease X / devised a perpetual motion machine / achieved cold fusion in a pressure cooker / etc., but the government/corporations/Illuminati/etc. do not want anybody to know
Urban legends (ULs from now on) have been with us for as long as humankind has had memes (in the Dawkins sense). If you are interested in the mechanics of what makes a good urban legend, I strongly recommend reading Made to Stick by the awesome Heath brothers.
ULs have been travelling from mind to mind through all available communication means: orally, via letters, faxes, emails, even SMS.
The rise of social networks supplied what is likely to be ULs’ perfect medium. They connect people who are likely to live in the same filter bubbles (political orientations, distrust of institutions, similar education levels, common background), sharing the same confirmation biases and hence extremely likely to like and endorse the same content. Their near real-time communication capabilities, reinforcing behaviors (“look how many likes this got!”), mind-boggling scale/reach and rich multimedia features are icing on the cake.
If you are into math, check out Networks, Crowds, and Markets for a formal introduction to how information diffusion works.
Unfortunately, as the spreading efficiency rises, so does the damage caused.
You think I’m exaggerating, that it’s all in good fun and whatnot? I disagree. Apart from the fact that perpetuating & diffusing falsehood is bad in itself, some ULs have direct, immediate consequences. The blood donor request story can overwhelm a hospital’s phone lines for YEARS after the rumor started, harming those actually in need; news of miraculous cures can give false hope to people already suffering, and even prevent them from pursuing the more mundane but effective traditional treatments, with devastating consequences; food scares can ruin entire categories of producers, perhaps already struggling; and so on.
Could Social Networks Do Something About This?
I am not sure I understand why perfectly decent people choose not to invest 20 seconds on a search engine to rule out the possibility that what they’re about to share is total bull. I guess that for many it is not really a choice: they might simply not be familiar enough with the Internet to realize that this is even a possibility. I know that’s the case for some non-computer-savvy friends. Others might find the opportunity to indulge in their confirmation biases (check out Thinking, Fast and Slow; you’ll love it) simply too appealing to resist.
This is why I believe that social networks themselves would make a great contribution to the common good if they provided some experience that helps both sharers and readers get a better sense of the trustworthiness of a story.
Now, make no mistake: I am perfectly aware that I am just being an armchair ranter here. The problem of assessing (and asserting!) the truth level of anything is a conundrum from both the technical and the ethical perspectives, and there’s a sizable portion of stories for which it is inherently not possible. I don’t expect anything I am writing here to be an immediately viable solution.
And yet, and yet… intuition suggests that there are blatant, factual inaccuracies that can be highlighted. If a story contains the alleged salaries of officers in charity organizations, and the numbers are known to be wrong (and perhaps have been wrong for the last decade, if the hoax is a long-lived one) there must be a way of making sharers and readers aware without necessarily solving the truth assessment problem for every possible story.
Here are some random ideas:
- If the URL (or a sizable chunk of text) from a story has an entry in a well-known hoax chaser repository (snopes.com, hoax-slayer.com, versions for specific locales like the excellent http://www.attivissimo.net/, etc.) decorate the story with a warning and a link to the entry, both at creation and display time.
The presence of an entry in one of those web sites does not constitute an automatic certification from the social network about the veracity of a story: it just informs users of the presence of such an entry. It is up to the reader to assess if they trust the hoax chaser entry more than the story itself.
Although the system would not be completely immune to abuse, my intuition is that this would work well in most cases.
- Allow people to “claim” a story if they prove they have direct involvement, and to decorate it with whatever descriptive text they want to add. Example: days (weeks?) ago I saw on Facebook a story about a revolutionary cure for cancer, which got a staggering number of shares. Drowned in an ocean of thousands of comments, there was one entry from a man claiming to be one of the authors of the study the story referenced; his comment attempted to contain the story’s hyperbolic claims, provided a more realistic framing and links to deeper, less sensationalist material, and begged the various sharers to stop diffusing the story, as it was creating false expectations. If that comment had been available at the top level, together with the story’s body, I believe that a lot of pain could have been spared to a lot of people.
There is a catch, of course. I am not sure how feasible it is today to assess beyond doubt that somebody has the right to “claim” a story. While we wait for the Singularity to blossom, this might still require the judgement of a qualified human. That does not scale too well, and would not work every single time. On the other hand, I suspect this would require less work (and stomach) than what must already be happening to weed out gore & porn content.
- Decision markets. The Graph’s excellent expressive power would be put to good use by providing a more sophisticated assessment knob than the plain like, so that readers could contribute their opinion about whether a story is credible or not.
Besides the brute-force “voting” mechanisms, which would be based on sheer numbers and could be swayed by partisan interests rather than genuine veracity considerations, one could also add typed links (e.g. something expressing the relationship [new link] -is-a-rebuttal-of-> [original story]).
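To make the first idea above a little more concrete, here is a minimal sketch of the hoax-repository lookup: a story’s URL (or a telltale text snippet) is checked against a known-hoax index, and a warning decoration is attached if there is a match. Everything here is invented for illustration — the index, the URLs, and the matching rules; a real system would query services like snopes.com rather than a local dictionary.

```python
# Hypothetical sketch: flag stories whose URL or text matches a known hoax entry.
from urllib.parse import urlparse

# Toy index: canonical hoax URLs and lowercase telltale snippets -> debunking link.
# (All entries are made up for this example.)
HOAX_INDEX = {
    "url": {
        "miracle-cure.example.com/cancer": "https://snopes.example/entry/123",
    },
    "text": {
        "needs blood type ab negative, call the hospital now": "https://snopes.example/entry/456",
    },
}

def canonical(url: str) -> str:
    """Normalize a URL so trivial variations (trailing slash, case) still match."""
    p = urlparse(url)
    return (p.netloc + p.path).lower().rstrip("/")

def check_story(url: str, body: str):
    """Return a warning decoration for a story being shared, or None if no match."""
    entry = HOAX_INDEX["url"].get(canonical(url))
    if entry is None:
        # Fall back to matching sizable text snippets from known hoaxes.
        lowered = body.lower()
        entry = next((link for snippet, link in HOAX_INDEX["text"].items()
                      if snippet in lowered), None)
    if entry:
        return f"This story matches a hoax-chaser entry: {entry}"
    return None

print(check_story("http://miracle-cure.example.com/cancer/", "Doctors hate this!"))
```

As suggested above, the warning would be shown both at creation time (to the sharer) and at display time (to the reader), without the network itself taking a position on the story’s truth.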
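The typed-link idea can also be sketched in a few lines: besides plain likes, readers attach a typed edge such as is-a-rebuttal-of between two stories, and the network can then surface all rebuttals of a story at the top level. The class and story names below are invented for illustration.

```python
# Hypothetical sketch of typed links between stories on the social graph.
from collections import defaultdict

class StoryGraph:
    def __init__(self):
        # edges[relation][source] is the set of stories that source points at.
        self.edges = defaultdict(lambda: defaultdict(set))

    def link(self, source: str, relation: str, target: str):
        """Record a typed edge, e.g. link(a, "is-a-rebuttal-of", b)."""
        self.edges[relation][source].add(target)

    def rebuttals_of(self, story: str):
        """All stories posted as rebuttals of the given story."""
        return {src for src, targets in self.edges["is-a-rebuttal-of"].items()
                if story in targets}

g = StoryGraph()
g.link("debunk-post", "is-a-rebuttal-of", "viral-cancer-cure")
g.link("second-debunk", "is-a-rebuttal-of", "viral-cancer-cure")
print(g.rebuttals_of("viral-cancer-cure"))
```

Unlike raw vote counts, a typed edge carries its own semantics, so partisan “liking” of a story does not drown out the links that explicitly dispute it.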
…and even crazier ones, that unfortunately this margin is too narrow to contain.
I will close my rant with a meta consideration. I understand that for a lot of people, sharing outrageous stories which confirm their world view is actually part of the fun of using a social network, that this helps strengthen the ties in their communities/clusters, and so on. For those people, measures like the above would be a major party pooper, and should come with a big, easy-to-find opt-out button.
Social networks are businesses, hence they must cater to the needs of their users and keep them happy. However, I also believe that we are already past the threshold at which those services are res publica, a legitimate new way in which we relate to each other as individuals. Such a formidable position presents great opportunities, like augmenting people’s ability to navigate the information they are presented with. The suggestions above are not an attempt to patronize users: everything would still be user-generated content, and all communications would still be user to user. Rather, they would provide us with a precious chance to occasionally venture out of our filter bubbles. But above all, to restate my far less noble personal interest in the matter, I would finally stop quarrelling with my friends whenever they post outrageous @#$% :-)
