Using algorithms for scholarly research is more than an academic question

Enrique Dans
Dec 22, 2022


IMAGE: A photo of an academic paper with a marker and a pair of glasses on top (Ravi Teja, Pixabay)

If there is one thing I’m sure of, it’s that we are going to see the widespread use of machine learning assistants, or more specifically, Large Language Models (LLMs), not only to write texts on any topic or create illustrations, but for all kinds of tasks.

That said, Meta has just pulled its Galactica machine learning assistant, which it designed “to store, combine and reason about scientific knowledge”. As has happened before when an algorithm is made available to the public, large numbers of people misused it, requesting, for example, articles justifying white supremacy or arguing the benefits of eating crushed glass, as well as instructions for making napalm in the bathtub.

Again, proof that tools are only as good or bad as the intentions of those who use them. Testing an assistant to show that it lacks the minimum rigor needed for unsupervised use is not without value, but before dismissing the experiment we should reflect on it: was the idea really to develop an assistant capable of generating text that can be copied and pasted into a Wikipedia article? Or was the original idea, and therefore the expected use, something else?

The first step in any research project is a review of the current literature, a process that not only brings…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)