I am very much in agreement with your broad argument, but there are some potential challenges, if I were to play devil's advocate.
The demand for new data is, like it or not, (semi-inadvertently) defined by the models that we have. This is true for practical applications of algorithms, and it is true of how individuals think about the universe. We like data that produce reliable "predictions" when put through our models. If the results are too noisy, we are only too happy to declare the data problematic and the error-filled results "wrong" and a "waste." This is, at the level of the individual, how we refuse to truck with people who don't agree with our way of thinking (our models, if you will), and, at the level of organizations, how we insist on sticking to the old algorithms and old data.
But, in terms of cost-benefit analysis, this attitude is somewhat justified: revising the model is risky. The moving parts of the model need to be rethought at considerable cost. The expected returns are not obvious (both high variance and small mean). It may call for the collection of new, difficult-to-obtain data. New approaches to analysis need to be conceptualized, with some inevitable "failures" along the way where the expected results do not materialize (assuming that the metric of relevance is "expected results," but that is what "results-oriented" thinking adds up to). All this points to a somewhat disturbing implication: if you are focused only on (short- to medium-term) results, introspection without a clearly defined goal or purpose (and a search through the noise has to be exactly that, since you don't know what exactly you expect to find) is a "waste."
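The small-mean, high-variance point can be made concrete with a toy simulation (all numbers here are hypothetical, chosen only for illustration): even when revising the model has the higher long-run expected payoff, its high variance means it very often fails to *look* better than the status quo over a short evaluation horizon, so a results-only evaluator will frequently call it a waste.

```python
import random

random.seed(0)

# Hypothetical payoffs: the old model returns a steady 1.0 per period;
# the revised model returns a noisy payoff with a slightly higher mean
# (1.1) but high variance (standard deviation 3.0).
STATUS_QUO = 1.0
REVISION_MEAN, REVISION_SD = 1.1, 3.0

def revision_looks_worse(horizon, trials=20_000):
    """Estimate the probability that the (long-run better) revision
    underperforms the status quo over a short horizon of `horizon` periods."""
    worse = 0
    for _ in range(trials):
        total = sum(random.gauss(REVISION_MEAN, REVISION_SD)
                    for _ in range(horizon))
        if total < STATUS_QUO * horizon:
            worse += 1
    return worse / trials

# Over one quarter the better model looks worse almost half the time;
# the advantage only becomes visible over horizons few evaluators wait for.
for horizon in (1, 4, 100):
    print(horizon, round(revision_looks_worse(horizon), 3))
```

The design choice being illustrated: the sign of the comparison over a single period is dominated by the noise term, not the mean difference, which is exactly why a point-measure, this-quarter evaluation discards the revision.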
Morally, philosophically, this is not true. But practically, given the constraints of life, it is. Everyone has to abide by short-term priorities: this quarter's profits, competitors' PR blitz, deadlines, journal reviewers' insistent demands, kids' braces, car payments, and other "immediate necessities of life." Rather than asking how to conceptualize the model differently and taking the time to think things through, we are often forced to hurry things along with shoddy models, even when we would like to do otherwise. Algorithmic thinking, with its focus on objectives that can be reduced to a point measure (a step beyond mere "quantifiability": we can quantify variability, and the insurance industry has a long tradition of methodology built on quantifying entire distributions, among other things), provides incentives for abandoning nuanced thinking. (The joke is that, to an electrical engineer under a budget constraint, the answer is always 1 or 0: it takes more than one observation to get an answer other than 1 or 0, and you can't afford more than one observation.) These add up to a lot of potential problems, bigger than just mindset (even though they are, really, rooted in a particular mindset).