Trash science isn’t stopping.
We can want it to stop, but it won’t stop. We can come up with all the automated ways to tell if data is fraudulent, non-replicable, non-reproducible, poorly designed, or just stupid; but as with all things fraudulent or scammy, the scammers are always one step ahead. You can’t detect a problem that you don’t know exists yet. So while we can build tools that extract data from graphs and run tests to see whether it’s statistically plausible, these tools will only pick up on known deviations. AI is great at learning within known parameters. Face recognition is horrible at ordering pizza, right up until someone realizes that people want their phones to recognize, from the look on their face, when they want pizza.
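To make “statistically plausible” concrete, here is a minimal sketch of one such known-deviation check: a GRIM-style test (Brown & Heathers, 2016), which asks whether a reported mean of integer-valued responses is even arithmetically possible. The function name and the numbers below are hypothetical illustrations, not tools from this article, and the check catches exactly one known deviation and nothing else, which is the limitation described above.

```python
# A GRIM-style consistency check (a simplified sketch, small-n only).
# Given a mean reported to `decimals` places from n integer-valued scores,
# test whether any whole-number total of scores could round to that mean.

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if reported_mean could arise from n integer scores."""
    approx_total = reported_mean * n
    # The true sum of integer scores must itself be an integer; try the
    # integers nearest the implied total and see whether either rounds
    # back to the reported mean. (For large n, widen this candidate range.)
    for total in (int(approx_total), int(approx_total) + 1):
        if round(total / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# Hypothetical example: a mean of 5.19 from 28 integer responses is
# impossible, because no sum of 28 whole numbers rounds to 5.19.
print(grim_consistent(5.19, 28))  # False -> flag for scrutiny
print(grim_consistent(5.18, 28))  # True  -> arithmetically possible
```

A result of False doesn’t prove fraud, only inconsistency; and a fabricator who knows about this test can trivially fabricate numbers that pass it, which is the whole point.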
And while I encourage the people building these tools to keep building them, trash science is not going to stop just because the tools exist.
And even if, by some improbable miracle, we could stop every piece of poor science that can be detected by automated means, we still can’t stop poor questions. A perfectly executed study on a poor question makes great raw material for papier-mâché.
Peer review, in its “ideal” form, is a great system. But we do not exist in an “ideal” publishing world. We do not exist in a world where peer reviewers are created equal. We do not exist in a world where peer review is infallible. Peer review, for any journal, is only as good as the weakest reviewer in its reviewer pool.
“Fix the system!” we cry. And yes, wouldn’t it be great if the pressure to publish, the incentives to falsify data, the rewards for Just. Publishing. Something. Anything, didn’t exist? And maybe someday we will get there. Maybe someday we will get to a place and a system where data is data.
Today is not that day. Next year is not that year. This lifetime might not be that lifetime.
So you can keep wishing for someone to solve this problem, or you can learn to tell the difference for yourself by asking better questions and expecting more of your science.
Learn more at http://criticalmass.ninja