AI doesn’t work like our brains. That’s the problem.

Benjamin Keep, Ph.D.
Published in Age of Awareness
5 min read · Jun 23, 2022


Image by Gerd Altmann from Pixabay.

It’s odd to try to create artificial intelligence without taking inspiration from the only existing natural intelligences we have: humans and animals. Yet that is the current direction of AI research.

Although “deep learning” is often described in the media as similar to “the way human brains work,” nothing could be further from the truth. Deep learning models require massive amounts of training data to be any good. We do not.

That’s why Waymo’s claim that its AI has driven millions of miles on public roads rings so hollow. The program needs that much data just to approach human-level performance in realistic driving scenarios. And even then, it’s fragile in trivial ways. It can’t drive reliably without extraordinarily detailed maps built in advance. It can’t handle inclement weather. It’s thrown off by very small changes in the environment.

Many machine learning models are simply complicated prediction models applied to massive data sets — closer to “y=mx + b” than they are to human intelligence.
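To make the “y=mx + b” comparison concrete, here is a minimal sketch in Python (using NumPy and made-up toy data, not any particular production system) of what “learning” amounts to in a simple prediction model: finding the parameters that minimize prediction error over a data set. Scale up the number of parameters and the amount of data by many orders of magnitude and you have the skeleton of much of modern machine learning.

```python
import numpy as np

# Toy data: noisy observations of an underlying line y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=200)

# "Training" here is just least-squares curve fitting:
# find the m and b that minimize squared prediction error.
m, b = np.polyfit(x, y, deg=1)
print(f"learned model: y = {m:.2f}x + {b:.2f}")

# "Prediction" is plugging new inputs into the fitted formula.
print("prediction at x = 5:", m * 5 + b)
```

The fitted line never understands anything about where the data came from; it just encodes a statistical regularity. That is the sense in which many models are prediction machines rather than intelligences.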

When GPT-3, Flamingo, and similar attempts at “artificial general intelligence” (AGI) are criticized, their proponents point out that specific errors get fixed in the next iteration. But the problem isn’t the specific errors; it’s the way these models get things wrong. Look…

Benjamin Keep, Ph.D.

Researcher and writer interested in science, learning, and technology. www.benjaminkeep.com