Let’s be clear about what Geoffrey Hinton is saying about deep learning

Enrique Dans · May 5, 2023 · 5 min read

IMAGE: Yasmin Dwiputri & Data Hazards Project — Better Images of AI (CC BY)

Geoffrey Hinton, considered by some the godfather of AI for his work in the field of deep learning, is leaving Google after ten years in order to warn about the dangers of the technology. His departure, like the open letter published by a group of researchers about a month ago, has strengthened the campaign to demonize anything based on machine learning algorithms, a campaign rooted in the belief that such systems pose a threat to humankind.

I repeat what I said a month ago: technologies cannot be uninvented, because if there is a benefit to using them, they will be used. Trying to prevent this is futile. We can aspire to regulate a given technology, but to do that we first need to understand it holistically, with all its possibilities and risks.

Hinton himself, after seeing how The New York Times covered his decision, used Twitter to clarify that he had not left Google in order to criticize the company and that, in fact, it had acted very responsibly with respect to this technology. As I said at the time, Google refused to launch products based on LaMDA.


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)