Why it’s time to start learning how to use large language models

Enrique Dans
4 min read · Mar 26, 2023


IMAGE: A computer monitor with a human-like figure and, behind it, a much larger screen of interconnected data, symbolizing an algorithm (Gerd Altmann — Pixabay)

As large language models (LLMs) improve and gain features such as the ability to analyze images, to use “eyes” and “ears”, and to carry out new kinds of tasks, the age-old fear that accompanies new technologies raises its head: job substitution.

Little wonder that people fear for their jobs when algorithms such as ChatGPT or Bard can answer questions and carry out tasks such as converting ideas into executable code, summarizing a conversation, or developing a presentation.

The reality is that LLMs are a narrow subdivision of the much broader field of artificial intelligence and, as such, are limited to analyzing documents, establishing relationships between them, and extracting information from them in a conversational format. As impressive as this may seem, and as much as it may lead some to think that these algorithms can think, they are still far from representing anything even minimally comparable to the complexity and capabilities of the human mind. That said, a well-trained LLM with access to a company’s knowledge base could answer customer queries convincingly, and when the selection of materials for its training is properly curated, the likelihood of the algorithm providing bizarre answers is considerably reduced.
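To make the idea concrete, here is a minimal sketch of that pattern: an assistant restricted to answering customer queries from a curated set of company documents. Everything in it is hypothetical — the document store, the word-overlap scoring, and the fallback message are simplifications; real systems pair retrieval like this with an LLM that rephrases the retrieved passage conversationally.

```python
# Hypothetical curated knowledge base: topic -> approved passage.
COMPANY_DOCS = {
    "returns": "Products can be returned within 30 days with the original receipt.",
    "shipping": "Standard shipping takes 3-5 business days within the country.",
    "warranty": "All devices carry a two-year limited manufacturer warranty.",
}

def answer_query(query: str) -> str:
    """Return the curated passage whose words best overlap the query."""
    query_words = set(query.lower().split())
    best_passage, best_score = None, 0
    for topic, passage in COMPANY_DOCS.items():
        # Score by overlap with the passage's words plus its topic label.
        candidate_words = set(passage.lower().split()) | {topic}
        score = len(query_words & candidate_words)
        if score > best_score:
            best_passage, best_score = passage, score
    # Restricting answers to the curated store is what keeps the
    # assistant from inventing "bizarre" responses off-topic.
    return best_passage or "Sorry, I don't have information on that."

print(answer_query("how long does shipping take"))
print(answer_query("can products be returned"))
```

The point of the sketch is the constraint, not the scoring: because every answer must come from the approved document set, curating those materials directly bounds what the assistant can say.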

However, as noted, this ability maps onto certain types of work, but not all. Programming, for example…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)