Machines are there to be told what to do, not asked

Enrique Dans
Nov 28, 2023 · 2 min read

IMAGE: A drawing of a woman lying down in front of a laptop and thinking of a cute robot (Alexandra Koch — Pixabay)

Carmen López, a journalist at the Spanish news site El Diario, contacted me to contribute to an article in Spanish (pdf) about the anthropomorphization of artificial intelligence algorithms: why so many people treat ChatGPT and the like as if they were people, saying please and thank you, and whether this is really what the companies that run them want us to do.

The chat format used by OpenAI and others is obviously intended to encourage conversation, because what the Microsoft-backed company wanted when it launched its generative AI service a year ago was to attract as many people as possible to use it, so as to continue training it. Since then, millions of people have begun treating it like a therapist, marriage guidance counselor or MD. If the format had been different, I’m sure it would not have had the global impact it has enjoyed. Since a chat format suggests conversation, anthropomorphization is the result, and saying good morning or thank you is something we do out of habit or politeness to those around us (some people also do so because they believe machines will take over the world and will remember the humans who were nice to them :-)

However, there is a problem: treating an algorithm as if it were a person can prevent us from asking it for things in the way we should if we’re going to get the right…

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)