Most research I publish will be wrong. And I’m OK with that
Mark Humphries

Well, in computational science, and especially for neural networks, there is hard competition between concepts through benchmarks such as CIFAR-100, Kaggle, or similar competitions. Through these mechanisms, practical concepts are selected. In other fields this is different, and of course researchers should always consider that a paper may be wrong. But if you look at how research is done today in contrast to the past, it is no surprise that so many papers exist either without real content or with plainly wrong data. You have to publish, you have to publish many papers, and there are expectations like “when doing a PhD you should publish at least 3 papers”, and of course the impact factor of the journal has to be high, etc. Young researchers find themselves in an environment where publishing many papers is rewarded over publishing a single paper with concise, solid results.

The citation index is an ill-conceived reward system. It rewards you if you do half the statistics but publish twice the number of papers. Furthermore, I have seen computer-simulation papers that were obviously wrong but cited often by people who published contrasting results. As a consequence, the paper was cited a lot, mainly because it consumed the time of other scientists showing that it was wrong, and in the end it wasted the time of many researchers. Of course, citation counts are not completely meaningless, since useful results are also cited a lot, so quality can be recognized when something keeps being cited over the years. Still, even the H-index, which is designed to address these issues, has the same problems. People like Max Planck, Galois, or Einstein have really low H-indices, not because their work was of lesser importance, but because they did not publish unfinished results and boiled the important things down to a few papers.
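To make the H-index point concrete: the H-index is the largest h such that a researcher has h papers with at least h citations each, so it can never exceed the number of papers published, no matter how influential those papers are. Below is a minimal sketch of that computation; the citation counts in the examples are purely hypothetical and not real data for any researcher.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:  # the i-th most-cited paper still has at least i citations
            h = i
        else:
            break
    return h

# Hypothetical illustration: three landmark papers cap the H-index at 3,
# while ten moderately cited papers reach an H-index of 10.
print(h_index([5000, 3000, 1000]))                         # -> 3
print(h_index([15, 14, 13, 12, 12, 11, 11, 10, 10, 10]))   # -> 10
```

This is why a researcher who concentrates their results into a few highly cited papers can end up with a lower H-index than one who spreads thinner results across many papers.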
