How are we going to stop machine learning assistants from spreading fake news?

Enrique Dans
3 min read · Mar 4, 2023


IMAGE: Graffiti on a white wall with the word FAKE in purple (photo: Markus Spiske, Unsplash)

A race is underway to incorporate machine learning into search engines so they can answer queries with a paragraph as well as a list of links: the pioneer, the still relatively unknown You.com, has been joined by Bing thanks to Microsoft's agreement with OpenAI, while Google is experimenting with Bard. And on Friday, Brave Search announced an AI summarization feature that isn't based on ChatGPT.

Meanwhile, ChatGPT has overcome some of its initial problems and is now more easily accessible from countries such as Spain, which, together with its integration into more and more search engines, will likely mean even greater use around the world.

Noting this possible change in the usage model, The Atlantic raises an interesting question: what happens to the results that search engines offer about a person, results that may be false, misleading, malicious, defamatory or based on conspiracy theories, when those results are presented in a well-written paragraph?

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)