
Knowledge Isn’t Rational: Reflections on Artificial Intelligence and Riding a Bicycle
Say you’re riding a bicycle and you want to turn right. Which way do you turn the handlebars first?
The answer is left. You counter-steer: a brief push of the bars to the left leans the bike to the right, and the lean is what carries you through the turn.
Ask most people this question and they’ll get it wrong. That’s interesting, because we all know how to ride bikes; somehow, our minds don’t.
If we tried to teach someone else to ride a bike, we’d tell them exactly the wrong thing to do. And if we let our rational minds steer, we’d crash into a wall.
One of the classic criticisms of machine learning is that we don’t really understand what the machines are doing. The trained models are fundamentally opaque: we can see that they work, but not why. Worse, we choose architectures on intuition. Neural networks, for example, mimic the human nervous system in a massively over-simplified way; that felt like a good place to start. And it worked.
Machine learning, the criticism goes, is unscientific by definition. The Bayesians are running amok, and they can’t even explain how any of it works.
That criticism loses weight when you realize that humans learn the same way. We take an enormous stream of sensory input, plus lots of outcome information (e.g., falling or not falling off your bicycle), and throw it at our human machines. Learning somehow emerges.
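To make the analogy concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the made-up “sensor” features, the hidden fall-or-no-fall rule, and the plain logistic-regression learner standing in for whatever our brains actually do. The point is only that the learner fits the outcome signal without ever being handed a rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "sensory input": lean angle and handlebar angle at 1,000 moments.
# Hypothetical hidden rule generating the outcome: you fall when the two get
# too far out of balance. The learner is never told this rule.
X = rng.normal(size=(1000, 2))                              # columns: lean, steer
fell = (2.0 * X[:, 0] - 1.5 * X[:, 1] > 1.0).astype(float)  # outcome signal

# Plain logistic regression trained by gradient descent: sensory input plus
# outcome information go in, behavior comes out. No explicit rule anywhere.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of falling
    w -= 0.5 * X.T @ (p - fell) / len(X)     # gradient step on the log-loss
    b -= 0.5 * np.mean(p - fell)

print("accuracy:", np.mean((p > 0.5) == fell))
print("learned weights:", w)   # the closest thing to an 'explanation' we get
```

Nothing in the learned weights says “push left to go right”; if we want an explanation, we have to dig it out of the numbers afterward.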
Learning appears to stand outside our rational processes. We learn from raw data, and only later do we try to understand what we have learned. The rational part comes second.
I wonder what other things I know how to do, but my mind does not.
