Reinterpreting the machine

Enrique Dans
3 min read · Oct 28, 2021


IMAGE: A desktop computer exchanging data with a human brain

More and more consumer electronics devices (computers, smartphones and the like) now come with additional microprocessors, or dedicated areas within their main microprocessor, that use machine learning or, as many people like to call it, artificial intelligence.

I don’t like the term artificial intelligence. Perhaps that’s because, for years, talking about machines developing capabilities even remotely reminiscent of human intelligence, whether playing chess, Go or poker, automatically brought the Terminator robot to mind. In short, I believe that intelligence, as such, is something else, so I tend to prefer the term machine learning, which better describes what the machine is actually doing: applying statistical procedures to deduce rules from a set of labeled data.
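That definition, statistical procedures deducing rules from labeled data, can be illustrated with a toy sketch. Everything below is my own illustrative assumption, not anything from the article: a crude classifier that derives a threshold rule from labeled examples instead of having the rule hand-coded.

```python
# A minimal sketch (illustrative, not from the article): "learning" a rule
# from labeled data. Given values labeled "short" or "long", we deduce a
# threshold rule statistically instead of programming it by hand.

def learn_threshold(examples):
    """Deduce a threshold rule from labeled (value, label) pairs.

    The 'rule' is the midpoint between the mean of each class: a crude
    statistical procedure, but enough to illustrate the idea.
    """
    short = [v for v, label in examples if label == "short"]
    long_ = [v for v, label in examples if label == "long"]
    midpoint = (sum(short) / len(short) + sum(long_) / len(long_)) / 2
    # The learned rule is returned as a function: data in, label out.
    return lambda v: "short" if v < midpoint else "long"

# Labeled training data: (message length, label)
data = [(12, "short"), (18, "short"), (25, "short"),
        (140, "long"), (200, "long"), (180, "long")]

classify = learn_threshold(data)
print(classify(30))   # prints "short": near the "short" cluster
print(classify(150))  # prints "long": near the "long" cluster
```

Nobody wrote the threshold into the program; it was deduced from the labels, which is all "learning" means here.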

In practice, that’s basically what we’re talking about. For years, we regarded computers as boxes full of programs or rules, generated by third parties or by ourselves (if we knew how to program), which we supplied with data in order to obtain results. Programs, as such, were stable: your word processor, your spreadsheet or your presentation program did not change the way it did things once you installed it. It simply received your data and processed it methodically. If you always did things a certain way, it didn’t particularly adapt to your way of doing things…
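The contrast between a fixed program and one that adapts to its user can be sketched in a few lines. This is my own hypothetical example (an autocomplete feature), not anything described in the post:

```python
# Illustrative contrast (my sketch, not the author's): a classic program
# applies fixed rules forever, while a learning one updates its behavior
# from the data the user supplies.

# Fixed rule table: expands the same abbreviations no matter who types.
FIXED_EXPANSIONS = {"btw": "by the way", "imo": "in my opinion"}

def fixed_autocomplete(word):
    """A 'stable' program: its rules never change after installation."""
    return FIXED_EXPANSIONS.get(word, word)

class AdaptiveAutocomplete:
    """Learns new expansions from how this particular user corrects text."""
    def __init__(self):
        self.learned = {}

    def observe(self, typed, corrected):
        # Each observed correction becomes a new personal rule.
        self.learned[typed] = corrected

    def complete(self, word):
        # Personal rules take precedence over the fixed ones.
        return self.learned.get(word, fixed_autocomplete(word))

ac = AdaptiveAutocomplete()
ac.observe("thx", "thanks")          # the program adapts to this user
print(fixed_autocomplete("thx"))     # prints "thx": static rules never change
print(ac.complete("thx"))            # prints "thanks": learned from usage
```

The first function is the word processor of the paragraph above: install it and it behaves the same forever. The second rewrites its own rules from your data, which is the shift the article is describing.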


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)