PageRank was an integral part of Google from the start. Larry Page’s idea (PageRank: geddit?) was a modification of the citation index he had come across while preparing for his doctorate’s major field exam, an index that identifies the most influential authors: a paper’s relevance is established by the number of times it is cited in other academics’ bibliographies, weighted in turn by the relative importance of those academics themselves. PageRank was pretty much an exact copy of this academic index, applied to the web.
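The recursive idea — a page matters if pages that matter link to it — can be sketched in a few lines of Python. This is a minimal power-iteration version for illustration only; the graph, damping factor, and iteration count are made-up assumptions, not Google's actual implementation.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # Every page keeps a small baseline, modelling a random jump.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                # A page splits its rank evenly among the pages it links to.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical four-page web: C collects links from A, B, and D.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
```

Here C ends up with the highest rank, not because of anything on the page itself, but purely because of who links to it — which is exactly the property that put relevance beyond webmasters' direct reach.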
In essence, PageRank was a simple way to keep webmasters’ hands off relevance metrics, preventing them from being manipulated. As long as those metrics were based on the criteria within each page, search engines quickly became generators of spam: pages stuffed with key search words, ridiculous attempts to hide terms on pages so that search engines would serve them up to people, and any number of other dirty tricks.
In short, PageRank made all this much more difficult, creating the multi-million-dollar SEO industry in the process; and by including links from social networks among its criteria, it filled websites with invitations to share content on other networks, encouraging sensationalism and clickbait along the way.
It was obvious what was going to happen: if you make the number of times others have shared or linked to what you have just said the be-all and end-all of metrics, you can hardly be surprised if the outcome is sensationalism.
In later versions of its algorithm, Google has downplayed the importance of social networks while working on a new metric, the Knowledge-Based Trust score, or KBT. This criterion evaluates pages on how trustworthy their content is, not on the number of incoming links they generate: Google amasses a network of trustworthy pages by elaborating a series of criteria it considers objective, true, and dependable, then compares each page against those accepted criteria, with the aim of eliminating sensationalist pages full of nonsensical factoids, lies, garbage, or claims that are just plain wrong.
The idea of a KBT score is certainly interesting from a scientific standpoint, but dangerous in the hands of a company that not only has its own criteria about what is trustworthy or true, but also a history of accusations that it has manipulated algorithms to its own benefit.
What would happen if, in the same way that Google manipulated its own algorithms to downgrade certain competitors to the detriment of its own users, the company ended up in a position in which it was able to influence the way we assess certain issues?
The idea of a world in which pseudo-sciences like homeopathy, conspiracy theories about chemtrails, or the anti-vaccine brigade are consigned to the web equivalent of the garbage can is certainly attractive, but what happens if political views that Google might find bothersome are also put out with the trash?
Are we prepared to blindly trust Google, or whichever search engine, to only use utterly objective algorithms and to not manipulate them? And would Google, or whichever search engine, be prepared to act with the transparency needed to banish any suggestion of manipulation?
Since 1999, Google’s measure of relevance on the internet has been based essentially on social factors: who links to and shares what. In the meantime, we have seen other search engines play the same game, albeit less successfully, or simply go out of business, while a new type of sensationalism has invaded the web. Using other criteria seems like a good idea if we don’t want to spend the rest of our lives bombarded by dumb lists, misleading headlines, paid links, and social spam.
If Google doesn’t change its algorithm, we will never be sure we can believe what we find on the internet. But trying to change it, and to establish science and truth as its criteria, will be no easy task, and we can expect a few sharp corners along the way…