With regard to sample size, small sample sizes in studies that are taken at face value are an issue.
Rococo Modem Basilisk

I don’t think we’re disagreeing. Indeed, you even mention sample size as an issue! The “systematic distortion of science” idea proposes that the root of bad science is the perverse incentive structure — rewarding publication counts and the ability to secure research funding.

This incentive structure likely pressurises people, whether they are aware of it or not, into doing bad science: omitting negative results, not sufficiently checking positive results, and not doing a rigorous statistical analysis of their data (including, e.g., using enough samples). So the argument goes that the “replication crisis” is born of a system with the wrong incentives.
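The sample-size point can be made concrete with a quick simulation (my own hypothetical sketch, not from the original discussion): if only a minority of tested hypotheses are true and studies are underpowered, then a large fraction of the “significant” results that get published are false positives.

```python
# Hypothetical sketch: Monte Carlo illustration of how small samples
# distort the literature. We simulate many two-group experiments where
# only 10% of hypotheses are true, and measure what fraction of
# "significant" results are actually false positives.
import random
import statistics

def experiment_significant(n, effect, rng):
    """One two-group experiment, n per group. Returns True if the
    observed mean difference passes a crude z-test cutoff (~p < 0.05)."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(effect, 1.0) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = (statistics.pstdev(a + b) ** 2 * 2 / n) ** 0.5
    return abs(diff) > 1.96 * se

def false_discovery_rate(n, true_effect=0.5, prior=0.1,
                         trials=5000, seed=1):
    """Fraction of significant results that are false positives, when
    `prior` of hypotheses carry a real effect of size `true_effect`."""
    rng = random.Random(seed)
    true_pos = false_pos = 0
    for _ in range(trials):
        real = rng.random() < prior
        effect = true_effect if real else 0.0
        if experiment_significant(n, effect, rng):
            if real:
                true_pos += 1
            else:
                false_pos += 1
    return false_pos / max(1, true_pos + false_pos)

print(false_discovery_rate(n=10))   # small samples: most "findings" are false
print(false_discovery_rate(n=100))  # bigger samples: far fewer are false
```

With n = 10 per group the test has low power, so true effects are rarely detected while the 5% false-positive rate keeps churning out spurious hits; the false-discovery rate drops sharply at n = 100. The effect size, prior, and cutoff here are illustrative assumptions, not estimates for any real field.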

I think there is a lot of merit in this argument. See, e.g., the modelling work of Smaldino & McElreath.

But my point is that science has always had, and will always have, many wrong studies. It’s inevitable. Take a random biology journal issue from 1953. I’d wager that pretty much every paper in that issue is either wrong, or correct but trivial. Thus the current “crisis” is not rooted in the sudden discovery that papers cannot be replicated.

Rather, my argument is really that we are undergoing a shift in what science will tolerate. High-profile wrong papers are consuming huge amounts of time and resources, e.g. in wasted clinical trials based on faulty animal-model studies. (And it is the perverse incentives of science that are perhaps leading researchers into these expensive blind alleys more than ever before, especially when so much money is specifically targeted at “translational” studies.) The crisis seems to be a collective consensus that we cannot tolerate this any more. And rightly so.

Some evidence of misleading studies:


