Not sure I am helping you avoid the echo chamber, as it seems we are actually pretty much on the same page. :)
The big problem that I see is that what we should be doing, even for our own long-term good, and what we are incentivized to do are diverging. Algorithms, statistics, and models (really all components of the same thing) are NOT very good at long-run prediction: too many moving parts, too complex a reality, and so on. They still provide big pictures that are a useful framework for thinking, if we want to step back and think a bit deeper, but the incentives for that are often lacking.
In the short term, for well-defined, reasonably simple problems, algorithms, statistics, and models are pretty good in terms of predictive power. If we are drawn to wherever we get good predictions, we are encouraged to think even more short term, because there we can define clearer (even if misleading) metrics and justify ourselves. This, to me, is the essence of the problem with "algorithmic" thinking: we let the metrics define the problems, rather than the other way around. So we latch onto what is (easily) quantifiable and reject the complex problems. And if we get paid, figuratively or literally, for giving people answers in clear, quantifiable terms, why should we even pretend that complex problems exist, knowing that they are not easily reducible to simple answers? But, by the same token, what means do I have to convince people that I have solutions to their complex problems? (Not "I" literally, in this context, of course.)
In a sense, this points to a deeper problem: do we have a "right" to simple answers? Many of us think we do, in a peculiar perversion of Popper. Parsimony is the hallmark of a good theory, except that, for Popper (and in good statistics), it is conditional parsimony: simplicity conditional on explanatory power, not absolute parsimony. If we want to convince people that they need to think deeper, they first need to be convinced that they need better explanatory power, even at the cost of a more complex model. This seems to be the big challenge, since sometimes people really do need better explanatory power, even if they don't know it. And sometimes they really don't, and they are right to be skeptical of an overly complex model being sold to them.
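To make the "conditional parsimony" point concrete: this is essentially what information criteria like AIC formalize. A more complex model is accepted only if the extra parameters buy enough additional explanatory power to outweigh the complexity penalty. A minimal sketch (my own illustration, not anyone's specific method), comparing a linear and a quadratic fit on data with a genuine quadratic component:

```python
import numpy as np

def aic(rss, n, k):
    # Gaussian-error AIC up to an additive constant: 2k + n * ln(RSS / n).
    # The 2k term penalizes complexity; the log-RSS term rewards fit.
    return 2 * k + n * np.log(rss / n)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
# The "reality" here is genuinely quadratic, plus noise.
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0.0, 1.0, x.size)

results = {}
for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    results[degree] = aic(rss, x.size, degree + 1)  # k = number of coefficients

# The quadratic model pays a penalty for its extra parameter, but its
# improvement in fit is large enough that its AIC is still lower:
# simplicity conditional on explanatory power, not absolute simplicity.
best = min(results, key=results.get)
print(best)  # -> 2
```

The point of the exercise is the direction of the trade-off: if the data had no quadratic component, the same criterion would favor the simpler line, which is exactly the justified skepticism of overly complex models mentioned above.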