Algorithms and sensationalism

Enrique Dans
3 min read · Mar 9, 2015


When Larry Page and Sergey Brin came up with the brilliant idea of adapting the academic world’s citation index to create a search engine whose results were based on social relevance, in other words on how many other people were linking to a page, Google solved the problem that all search engines had faced until then: algorithms based on content alone could be manipulated by website owners. But they failed to address a related and important problem: while many pages garner links because they are objectively good, many others do so simply because they are sensationalist.
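To make the idea concrete, here is a minimal sketch of that kind of link-based ranking, in the spirit of the original PageRank idea; it is illustrative only, not Google’s actual implementation, and the three-page link graph at the end is invented for the example.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    # Start every page with an equal share of the total score.
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, targets in links.items():
            # Each page passes its score, in equal parts, to the pages it links to.
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Three pages: A and B both link to C, so C ends up with the highest score.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```

Note that nothing in this sketch asks why a page was linked: an outraged link counts exactly as much as an admiring one, which is precisely the weakness the rest of this piece is about.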

Sensationalism is one of the evils of our times. The problem is that, sadly, gaining attention through scandal-mongering, at any cost, works.

We seem more interested in scandal and gossip than in truth and objectivity. An algorithm originally designed to rank sites for a word or a couple of words has gradually been revised, first to include the social networks as indicators, and has then been perverted by click-baiters and attention pornographers, whose links are circulated by millions of users on the basis of scandal and intrigue, or lead simply to pages filled with lies and conspiracy theories that get linked to regardless of how negative those links are.

On the basis of traditional algorithms, Oscar Wilde’s celebrated comment takes on a new meaning: “There is only one thing worse than being talked about, and that is not being talked about.”

Finally, Google seems to have decided to build an algorithm on different criteria. For more than 16 years, nobody had come up with a way of measuring relevance other than by other people’s criteria, but technology now offers an alternative: semantic analysis of links and their context can differentiate positive links from negative or neutral ones, and signals such as clickthrough can tell us whether somebody who finds a piece of content actually uses it or moves on immediately, as happens when they realize the page is simply clickbait.
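As a rough sketch of what such semantic weighting of links might look like, consider the toy classifier below; the word lists and weights are invented for illustration, and a real system would rely on trained language models rather than keyword matching.

```python
# Purely illustrative word lists; a real system would use a trained model.
POSITIVE = {"excellent", "thorough", "reliable", "insightful"}
NEGATIVE = {"misleading", "debunked", "hoax", "clickbait"}

def link_weight(context):
    """Weight a link by the tone of the text surrounding it."""
    words = set(context.lower().split())
    if words & NEGATIVE:
        return -1.0   # the page is talked about, but disapprovingly
    if words & POSITIVE:
        return 1.0    # an endorsement, not just a mention
    return 0.25       # neutral mention: some signal, but weaker

links_to_page = [
    "an excellent and thorough analysis of the data",
    "this hoax has been widely debunked",
    "see also this page",
]
score = sum(link_weight(ctx) for ctx in links_to_page)
print(score)  # 0.25: the endorsement and the debunking nearly cancel out
```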

This is a change that could affect the development of the entire internet, and it also raises the question of the implications of a single company, in this case Google, deciding what is “true” and what is not. If Google got this far on the basis of a simple algorithm, the possible impact of this change, which introduces new metrics among the first-order criteria (those with the biggest influence over indexation), could be enormous. We would no longer be talking about links, retweets, or sharing at any cost, but about putting quality first: links with context, and metrics that encourage us to look for quality. Based on early observations, Google’s algorithm has already begun to show signs of change: we have left behind the 2013 version, which was very much based on sharing via the social networks, for the 2014 version, which introduced metrics such as clickthrough rates among the top-weighted criteria.
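Purely as a hypothetical illustration of how engagement signals such as clickthrough could be blended with link-based scores, consider something like the following; the weights and numbers are made up for the example and in no way represent Google’s actual formula.

```python
def quality_score(link_score, clickthrough_rate, bounce_rate):
    # Visitors who click but leave immediately suggest clickbait, so
    # engagement only rewards clicks that turn into actual reading.
    engagement = clickthrough_rate * (1.0 - bounce_rate)
    # Hypothetical blend: links still matter, but engagement matters more.
    return 0.3 * link_score + 0.7 * engagement

# A heavily linked clickbait page: lots of clicks, visitors bounce at once.
print(quality_score(link_score=0.9, clickthrough_rate=0.8, bounce_rate=0.95))  # ~0.30
# A modestly linked page that visitors actually stay and read.
print(quality_score(link_score=0.5, clickthrough_rate=0.4, bounce_rate=0.2))   # ~0.37
```

Under a blend like this, a heavily linked clickbait page whose visitors bounce straight back can score below a modestly linked page that people actually read.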

Needless to say, none of this is simple, and for the moment we’re not talking about a solution so much as the search for one. In the meantime, we need to be thinking about how all this will affect our companies, and how we will detect its effects once it is applied. This is, without doubt, a topic worth taking a little time to think over.

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)