Las Vegas strip shooting (IMAGE: Alex Canton)

Hold the front page!

Enrique Dans

--

The terrible events in Las Vegas on Monday evening, which have so far left 59 people dead and more than 500 wounded, once again revealed the difficulty of obtaining reliable news in an era of so-called information overload.

In the first hours after the tragedy, Google and Facebook were awash with false news of all kinds: hypotheses and unquestioned assumptions about the identity of the killer, unfounded rumors, witch hunts, and forums filled with dubious opinions. Google found that an algorithmic error had classified 4chan as a reliable source and was distributing its content through Google News and Google Now. Facebook highlighted a forum, created just after the story broke by somebody with zero credibility, simply because large numbers of people were participating in it. The apologies of both companies, strikingly similar, were unsatisfactory and effectively “blamed the machine”. It seems harder than ever to trust non-traditional news sources in this hyper-connected age.

Less haste and more care should be the order of the day when a fast-moving story is breaking. But when significant advertising revenue enters the equation, some media and social networks are prepared to adopt irresponsible strategies. Following the use of Facebook by Russian-funded Trump supporters during the US election campaign, one solution could be greater human supervision, at least while the algorithms are getting up to speed. But it is not clear that breaking news spreads in repetitive patterns that would allow these algorithms to respond. Each event involves different interests in spreading false news, drawing on sources of all kinds, some of which may be impossible to verify (a bystander with a smartphone), and for the moment at least, algorithms cannot distinguish fake news from real.

We are constantly checking our smartphones for Facebook updates or for news on Google Now. We should remember, however, that none of these services employs journalists to write, edit or evaluate the news they distribute, which is gathered from third parties using algorithms. In theory, this approach should result in a more varied information supply; in practice we find ourselves confronted by a maze of false and sensational content that feeds an endless, unsupervised loop, until somebody finally intervenes and calls a halt to the madness.

The chaos on the social networks in the aftermath of the killings in Las Vegas was no isolated event: this has happened time and again in recent years. Just about anybody can photoshop a shark swimming along a highway, or post a picture of a victim of police, domestic or other violence that was actually taken several years earlier, and watch it spread across the social networks. The faster we can access news, the less prepared we seem to use our critical faculties.

Can algorithms be trained to develop critical judgment, at least to eliminate stories that no self-respecting journalist would have approved? Or is the only solution for Facebook and Google to start hiring reporters and editors?

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)