What is interesting to me is that machine learning relies on a small but important error rate in its modeling. If a trained model reaches 100% accuracy on its training data, learning theory tells us to suspect "overfitting": we are assured that such a model will not work in the wild. So we know that every system has some "graininess". Even our most accurate models, such as quantum mechanics, insist on an uncertainty. We cannot predict beyond that point. We know there are things we cannot know.
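The overfitting intuition above can be sketched concretely. The example below is a minimal illustration, not any particular learning-theory result: a degree-9 polynomial fit to 10 noisy points achieves near-perfect training accuracy, yet its error on held-out points is far larger. The functions, degrees, and noise level are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function, sin(2*pi*x)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_val = np.linspace(0.05, 0.95, 10)
y_val = np.sin(2 * np.pi * x_val) + rng.normal(0, 0.2, x_val.size)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial fit evaluated at x."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Degree-9 polynomial through 10 points: it can fit the noise exactly,
# so the training error collapses toward zero ("100% accuracy")
overfit = np.polyfit(x_train, y_train, deg=9)

print("train error:", mse(overfit, x_train, y_train))  # near zero
print("held-out error:", mse(overfit, x_val, y_val))   # much larger
```

The near-zero training error is precisely the warning sign: the model has memorized the noise, and the gap between training and held-out error is what tells us it will not generalize.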
Humans have always used prediction as the verification of knowledge. What is even more interesting is that some of the greatest advances in knowledge came from those who did not ignore deviations from the prediction. By investigating them, they advanced our understanding of nature, as proven by more accurate predictions. So yes, machine learning works, but when do we ignore the small errors, and when do we investigate?