Does Google now have the monopoly on truth?

Enrique Dans
3 min read · Apr 26, 2015


From its origins, Google has set itself apart from the competition. As is well known, its search engine's origins lie in academia, in an algorithm that assigned relevance on the basis of the number of references made to a particular page. The secret of its success was its ability to analyze human-generated links on the principle that pages linked to from many important pages are themselves important.
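That link-based principle can be sketched in a few lines of code. This is a toy illustration only: the graph is invented, and the real algorithm combines this signal with hundreds of others.

```python
# A minimal sketch of link-based relevance on a hypothetical
# three-page web: each page's score is shared out among the
# pages it links to, then redistributed until scores stabilize.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85  # the classic damping factor from the original paper

for _ in range(50):  # power iteration until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# "c" is linked to by both "a" and "b", so it ends up with
# the highest score.
best = max(rank, key=rank.get)
```

The key property the article describes falls out directly: a page's importance depends not just on how many links it receives, but on how important the linking pages themselves are.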

This concept, around which Google has built a complex algorithm with innumerable variables that it modifies up to six hundred times a year, worked spectacularly well in differentiating Google from other search engines: it gave users the feeling that the search engine genuinely reflected their interests, unlike its competitors.

That said, relevance determined by social variables has a fundamental flaw that we might call sensationalism. As content creators learned how to manipulate Google's algorithm, the web became awash with clickbait, bizarre headlines, listicles, and all kinds of sensationalist trickery aimed at capturing likes, prompting retweets, and triggering incoming links. On top of that, certain sites publishing polemical content attracted all sorts of negative links and ended up topping Google's results pages. Clearly, not all links are worth the same.

Google's efforts to avoid this progressive corruption of its algorithm seem to be headed toward balancing social relevance with quality, which in turn has produced Knowledge-Based Trust, or KBT, defined in an academic paper. This allows a page's trustworthiness to be established on the basis of the facts it states about its main subject, incorporating traditional external metrics as well as the content of the page itself, assessed on an objective basis. A couple of useful articles for better understanding what is going on can be found on SEO Skeptic and in this post on Aaron Bradley's Google+, whose work on the topic is outstanding.
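The core intuition behind trust based on factual content can be sketched with a toy example. Everything here is hypothetical (the source names, the facts, the simple majority-vote scoring); the actual KBT model is a far more sophisticated probabilistic system, but the feedback loop is similar in spirit: infer which facts are likely true, then score sources by how often they state them.

```python
# A toy sketch of knowledge-based trust: score each (hypothetical)
# source by the fraction of its stated facts that agree with the
# trust-weighted consensus, and iterate.
from collections import defaultdict

# source -> {subject: asserted value}
claims = {
    "site_a": {"capital_fr": "Paris", "capital_es": "Madrid"},
    "site_b": {"capital_fr": "Paris", "capital_es": "Barcelona"},
    "site_c": {"capital_fr": "Paris", "capital_es": "Madrid"},
}

trust = {s: 0.5 for s in claims}  # start every source at neutral trust

for _ in range(10):  # alternate: infer facts, then re-score sources
    # trust-weighted vote for each subject's value
    votes = defaultdict(lambda: defaultdict(float))
    for source, facts in claims.items():
        for subject, value in facts.items():
            votes[subject][value] += trust[source]
    consensus = {s: max(v, key=v.get) for s, v in votes.items()}

    # a source's trust is the share of its facts matching consensus
    for source, facts in claims.items():
        correct = sum(1 for s, v in facts.items() if consensus[s] == v)
        trust[source] = correct / len(facts)

# site_b disagrees with the consensus on one of its two facts,
# so its trust settles at 0.5 while the others reach 1.0
```

Note the circularity the article's closing question hints at: the "truth" is whatever trusted sources agree on, and trust is earned by agreeing with the "truth". Whoever controls the model controls both.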

The idea of establishing the relevance of a page on the basis of the quality of the information it contains is a very interesting one, and potentially beneficial: sensationalist pages will drop in the search results, and links to more reliable sources will rise. Obviously, this will require very sophisticated probabilistic models able to assess how correct and how relevant to the search a piece of information is, but with technologies such as machine learning, this is clearly within our technological capabilities.

An advance of this nature is a major game changer. It will affect how we find things out and what incentives exist to produce information in one way or another. Properly used, it could mean a major step forward for humanity. Imagine a world in which people who lie or peddle false information suddenly have no voice. Think about the enormous challenge this poses for everyone whose job is to create information, from the media to individuals, and imagine the effect it could have on relevance or personal branding: quite simply awesome.

So what's the downside? As far as I can see, the question we have to ask ourselves is: who will watch the watchers? A system of this kind will never be reliable if it is under the control of a single company able to manipulate it for its own purposes. Its power is such that it needs to be supervised by some kind of neutral organization. Let's face it, we know what Google is capable of: the company has used its position to give its own services greater visibility, and then there are the accusations of scraping, or stealing information from competitors' websites using its position as a hegemonic search engine, in order to improve its own offerings.

In short, few people will want to put the oracle in the hands of an outfit whose ethical standards leave much to be desired.

Nevertheless, Google has the talent to develop this kind of technology, and it also has the resources to set it up and develop it commercially. Which prompts the question of the company's monopoly. What will happen if Google manages to develop what we might call a truth machine, able to search millions upon millions of results and pick out only those that are relevant and correct? What kind of controls would be required to oversee such a system, and who would apply them?

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)