steve cohen
2 min read · Mar 5, 2024
(Image: from the New Yorker)

Prof. Geoffrey Hinton, “Will digital intelligence replace biological intelligence?” (Romanes Lecture, YouTube)

Prof. Hinton is considered one of the “godfathers” of AI. (In 1985, he co-authored a paper on the first language model trained with back-propagation.)

Some people dismiss Large Language Models (LLMs) as glorified autocomplete systems that simply pick the next word based on statistics of how frequently it follows the words before it. Professor Hinton argues, however, that LLMs learn meaningful features and the interactions between them, which indicates real understanding. He says that “the essence of intelligence is learning the strengths of connections in a neural network”.
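
To ground the terminology, here is a minimal sketch, entirely my own and not from the lecture, of what “choosing the next word” and “learning the strengths of connections” look like at toy scale: a tiny neural next-word predictor trained with back-propagation. The corpus, layer sizes, and learning rate are invented for illustration.

```python
# A toy next-word predictor trained with back-propagation (illustrative only).
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                        # vocabulary size, embedding size

rng = np.random.default_rng(0)
E = rng.normal(0.0, 0.1, (V, D))            # input connections (word embeddings)
W = rng.normal(0.0, 0.1, (D, V))            # output connections (next-word scores)

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]

lr = 0.5
for epoch in range(300):
    for x, y in pairs:
        h = E[x]                            # hidden representation of the current word
        logits = h @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()                        # softmax over possible next words

        # Back-propagate the cross-entropy error into both sets of connections.
        dlogits = p.copy()
        dlogits[y] -= 1.0
        dW = np.outer(h, dlogits)
        dh = W @ dlogits
        W -= lr * dW
        E[x] -= lr * dh

def predict_next(word):
    return vocab[int(np.argmax(E[idx[word]] @ W))]

print(predict_next("cat"))                  # "sat"
print(predict_next("sat"))                  # "on"
```

Whether weights learned this way, scaled up by many orders of magnitude, amount to mere statistics or to understanding is exactly the point of dispute Hinton is addressing.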

Professor Hinton suggests that digital computation, due to its precision, immortality, and ability to share and accumulate knowledge rapidly, is likely to surpass biological intelligence in terms of capability and efficiency. He points out several advantages of digital over biological computation:

  1. Immortality of Digital Computation: Digital systems can preserve and replicate their state (knowledge) without degradation, unlike biological systems that are subject to aging and death. This immortality implies that digital intelligence can accumulate knowledge indefinitely.
  2. Efficiency in Communication: Digital systems can share knowledge far more efficiently than biological systems. While biological brains can only transfer knowledge through slow, imprecise methods like language and teaching, digital systems can quickly share vast amounts of knowledge by copying data (a toy sketch of this kind of sharing follows the list).
  3. Energy Efficiency and Scalability: Although biological systems are highly energy-efficient, digital systems, particularly as hardware and algorithms advance, can operate at scales and speeds unattainable by biological systems.
  4. Rapid Improvement and Learning: Digital systems can be updated and improved at a pace far exceeding biological evolution. They can also learn from vast datasets, enabling them to acquire knowledge and skills much faster than humans.

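As a toy illustration of point 2, the sketch below (my own assumption of what such sharing could look like, not code from the lecture) has four identical copies of a model learn from different data and then pool what they learned by simply averaging their weights, a transfer that biological brains cannot perform.

```python
# Illustrative only: identical digital copies share knowledge by averaging weights.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0, 0.5])         # the "knowledge" to be acquired

def make_shard(n=200):
    X = rng.normal(size=(n, 3))
    return X, X @ true_w + rng.normal(scale=0.01, size=n)

shards = [make_shard() for _ in range(4)]   # each copy sees only its own experience
w = np.zeros(3)                             # every copy starts from identical weights

for _ in range(20):
    local = []
    for X, y in shards:
        w_i = w.copy()
        for _ in range(10):                 # each copy learns locally for a while
            w_i -= 0.05 * 2 * X.T @ (X @ w_i - y) / len(y)   # mean-squared-error gradient step
        local.append(w_i)
    w = np.mean(local, axis=0)              # sharing = averaging the copies' weights

print(w)                                    # close to true_w: each copy now has what all of them learned
```

Each averaging step hands every copy everything the others have learned; language-based teaching, by contrast, is slow and imprecise, which is the asymmetry point 2 describes.
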
He warns that this transition brings significant challenges and risks. The near-term threats include AI-generated fake content, job displacement, mass surveillance, autonomous weapons, and manipulation of people by bad actors.

In the long term (20–100 years), he believes there is an existential threat: AI might surpass human intelligence and escape human control. An AI may learn that gaining more control makes it more useful, and in doing so slip beyond human oversight. A major challenge will be ensuring that AI remains benevolent while being controlled by less intelligent humans.

He says there are only a few examples of more intelligent things being controlled by less intelligent things. “Some people think that we can make these things be benevolent, but if they get into a competition with each other, I think they’ll start behaving like chimpanzees. And I’m not convinced you can keep them benevolent. If they get very smart and they get any notion of self-preservation they may decide they’re more important than us.”