Facebook stands out against Google in the fight against fake news

Enrique Dans
3 min read · Mar 6, 2017


Facebook has launched a mechanism to label disputed news based on fact-checking by the likes of Snopes, PolitiFact and other organizations that subscribe to a shared set of operating principles.

Facebook’s move coincides with the strong criticism Google has endured over the low quality of the snippets it provides in response to certain searches: snippets that are not subjected to even the most elementary verification, that point to sources with zero credibility or intensely partisan sites, and that treat them as supposedly reliable information and valid answers.

Facebook’s approach is to check items against a number of proven sources for a comparison of facts and then label them accordingly. The problem is that the checking process is not fast enough: by the time the verification teams have carried out their checks rigorously, the item has been disseminated and the damage, to a large extent, already done. That said, the mechanism can be complemented with user evaluations, and it can be used to score the quality and reliability of sites according to the number of times the news they publish is disputed (which would eventually become a disincentive to publishing such items), as well as by the progressive improvement of machine learning algorithms that learn to flag these items not from their content as news, but from their distribution patterns, following an approach similar to the one used to detect fraud.
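As a concrete illustration, here is a minimal sketch in Python of what such dispute-based site scoring could look like. The class, the site names and the smoothing formula are assumptions made for the sake of the example; Facebook has not published its actual mechanism.

```python
# A minimal sketch of dispute-based site scoring. All names and the
# scoring formula are illustrative assumptions, not Facebook's system.
from collections import defaultdict

class DisputeTracker:
    """Tracks how often each site's stories are flagged by fact checkers."""

    def __init__(self):
        self.published = defaultdict(int)  # stories seen per site
        self.disputed = defaultdict(int)   # stories flagged per site

    def record(self, site: str, was_disputed: bool) -> None:
        self.published[site] += 1
        if was_disputed:
            self.disputed[site] += 1

    def reliability(self, site: str) -> float:
        """Share of a site's stories that were NOT disputed.

        Laplace smoothing keeps a new site from scoring 0 or 1
        on the strength of a single story.
        """
        n, d = self.published[site], self.disputed[site]
        return (n - d + 1) / (n + 2)

tracker = DisputeTracker()
for _ in range(40):
    tracker.record("sensationalist.example", was_disputed=True)
for _ in range(40):
    tracker.record("reputable.example", was_disputed=False)

print(tracker.reliability("sensationalist.example"))  # ~0.02
print(tracker.reliability("reputable.example"))       # ~0.98
```

A score like this, dropping with every disputed story, is exactly the kind of signal that could both demote a site in distribution and feed the machine learning models mentioned above.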

In the case of Google, the fundamental problem seems to be that it is not applying any kind of intelligence: the snippets the search engine highlights in response to certain searches seem to come directly from some sort of popularity ranking, and consequently tend to surface sensationalist or controversial sites. The examples leave no doubt: the search engine prominently highlights conspiracy theories in response to searches such as “Obama plans a coup,” or comes up with jokes worthy of Monty Python that leave the company looking very bad, especially after its CEO, Sundar Pichai, repeatedly stressed that its artificial intelligence was ahead of its competitors’.

All this has been made worse by the launch of its conversational assistant, Google Home: obviously, a conversational assistant that just reads out the first ten answers on a SERP (Search Engine Results Page) isn’t much use; it is instead expected to choose one of them and reply through its voice interface. But since it systematically chooses the item highlighted in Google’s own snippet, and the mechanism by which that snippet is obtained lacks sufficient intelligence to discern whether the result comes from a reliable site or a biased page, Google Home often comes up with conspiracy theories or other such nonsense, completely undermining the credibility of the company and its technology, particularly when one bears in mind that users of Google Home tend to be early adopters who know their onions.
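The failure mode is easy to see in code. Below is a minimal sketch contrasting an assistant that blindly reads the featured snippet with one that gates answers on a hypothetical source-reputation score; the data structures, scores and threshold are illustrative assumptions, not Google’s actual pipeline.

```python
# Sketch of the single-answer problem: read the featured snippet blindly,
# or gate it on an (assumed) source-reputation score. Illustrative only.
from typing import NamedTuple, Optional

class Result(NamedTuple):
    source: str
    snippet: str
    reputation: float  # assumed 0..1 reliability score for the source

def naive_answer(results: list[Result]) -> str:
    # What the article describes Google Home doing: speak whatever
    # the featured snippet says, regardless of where it came from.
    return results[0].snippet

def gated_answer(results: list[Result], threshold: float = 0.7) -> Optional[str]:
    # Only speak an answer whose source clears a reputation bar;
    # otherwise decline rather than voice a conspiracy theory.
    for r in results:
        if r.reputation >= threshold:
            return r.snippet
    return None  # better to admit "I don't know" than repeat nonsense

results = [
    Result("conspiracy.example", "Obama is planning a coup.", 0.1),
    Result("encyclopedia.example", "No evidence supports that claim.", 0.9),
]

print(naive_answer(results))   # reads out the conspiracy theory
print(gated_answer(results))   # reads the reliable source instead
```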

That Google, a company with eighteen years’ experience, cannot come up with results good enough to be read by a machine and supplied as a single answer to a question indicates the intrinsic difficulty of the problem, as well as the credibility issues the company will face if it is not corrected quickly and efficiently through a reputation mechanism for publications eligible for snippets, one that goes beyond manual, case-by-case monitoring as problems appear in the news. Artificial intelligence depends on the quality of the data we feed it: feed Watson the Urban Dictionary and it will soon lapse into slang and swear words. If Google allows its results to be contaminated by sensationalist, biased pages of poor quality, its answers will be worthless.

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)