A small revolution is taking place in science: the rise of registered reports

Jop de Vrieze
15 min read · Oct 30, 2018


After a number of scandals, social scientists decided that their field had to change. One of their key tools is a concept called ‘registered reports’. But not everybody is convinced of its value.

Meet cognitive neuroscientist Chris Chambers.

When he started his academic career in 1999 at Monash University in Australia, he imagined science as a pursuit of objective truth. Results of experiments would matter, but whether they were exciting or not would not determine whether one was a good scientist. The criteria would be: are your ideas good, do your theories make sense, are your studies well designed, rigorous and informative no matter what comes out of them?

Quite soon after he started his PhD, he learned that reality is different. His first manuscript was immediately rejected. The study was well designed and executed, but the results, he had to admit, were only moderately interesting. In other words: boring. His supervisor had known all along, but let him find out for himself: results do matter.

Over the course of the following months, he learned from his supervisor what science is really about: storytelling using data. And he became quite good at it. He published high-impact papers, won large grants, and saw his reputation and Hirsch index grow.

As he said to me in an interview a couple of months ago: ‘You would get a result, explore the data a little bit, take a result, and build the whole paper around it. The way you got to it was obscured, not transparent to the reader. It was not fraud, it was the grey area. Everybody was doing it like that.’

He remembers a blog post he read a few years ago that discussed a question: are scientists more like detectives or lawyers? Is your ultimate goal to find the truth, or to win an argument? He learned that as a scientist you are a little bit of both. The lawyering bit is very important for career success.

A field in crisis
But then, in 2011, things changed in the field of psychology. First there was the famous Daryl Bem, who published a paper in which he seemed to have proven that people can predict the future. He thereby inadvertently showed that something was terribly wrong with the standard methods of psychological science.

Using similar methods, methodologist Uri Simonsohn and colleagues published a tongue-in-cheek paper in Psychological Science ‘showing’ that listening to the Beatles song When I’m Sixty-Four can actually reduce a listener’s age by 1.5 years, exposing the flaws in those methods.

A couple of months later things got even worse, when Dutch social psychologist Diederik Stapel was exposed as a fraudster, who had made up experiments and results for years, without anyone finding out.

And there were other widely covered incidents. Science, or at least psychological science, was broken, many people said. At the very least, something was seriously wrong.

And this was not just happening in psychology. Back in 2005, the now famous epidemiologist John Ioannidis of Stanford University in the US had published a paper in PLoS Medicine titled ‘Why Most Published Research Findings Are False’. The main causes, according to Ioannidis, are poor research designs, unreliable techniques and lousy statistics, not to mention bias, fraud and corruption. Since then, Ioannidis has been traveling the world in his battle against ‘sloppy science’.

In 2010, a colleague of Ioannidis, Daniele Fanelli, showed that across fields, positive findings are more likely to be reported, resulting in a persistent problem called ‘publication bias’.

Other publications showed that many of these positive findings would later prove to be false positives, for several reasons. One is p-hacking: collecting a lot of data and reporting only the analyses that yield a positive result, which increases the chance that the ‘finding’ is a coincidence rather than a real relationship. Another is HARKing, Hypothesizing After the Results are Known: adjusting your research questions to your findings, thereby presenting exploratory research as if it were hypothesis-driven.

By way of comparison: the chance that you happen to meet some random acquaintance on holiday is much greater than the chance of coming across a specific person whose name you have noted in advance. HARKing is something like writing down the name of that particular person after meeting them and pretending you did so beforehand. P-hacking is something like drawing up a very long list of names of people you might come across, removing all the other names after you bump into one of them, and then claiming that it ‘really is not a coincidence’.
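To make that mechanism concrete, here is a minimal simulation sketch (the sample sizes, the choice of ten outcomes and the function names are my own illustrative assumptions, not taken from any of the papers mentioned): when there is no real effect but a researcher tests many outcomes and reports whichever one comes out ‘significant’, far more than the nominal 5% of studies yield a publishable finding.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def phacked_study(n_per_group=30, n_outcomes=10, alpha=0.05):
    """Simulate one study in which the null is true (no real effect),
    but the researcher measures ten outcomes and reports any one
    that reaches p < alpha."""
    for _ in range(n_outcomes):
        group_a = rng.normal(0, 1, n_per_group)  # both groups drawn
        group_b = rng.normal(0, 1, n_per_group)  # from the same distribution
        _, p = stats.ttest_ind(group_a, group_b)
        if p < alpha:
            return True  # a 'positive' finding gets written up
    return False

n_studies = 2000
hits = sum(phacked_study() for _ in range(n_studies))
print(f"False-positive rate with 10 tested outcomes: {hits / n_studies:.0%}")
# With ten independent looks, roughly 1 - 0.95**10, about 40%, of null
# studies produce a publishable 'effect', instead of the nominal 5%.
```

Pre-registration removes exactly this degree of freedom: the outcome to be tested is fixed before the data come in.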

The concept of registered reports
So Chris Chambers got back to where he had started as a graduate student: putting effort into making science as objective and reliable as possible. He started thinking about which parts of his work he actually believed were true and which parts were engineered. A rather sobering experience. And he started thinking about what he could do to improve his field.

While he was in the middle of that process, in May 2011, he came across a blog post published by Neuroskeptic. It was a rather short one, titled ‘How to fix science’. In it, Neuroskeptic proposed a radical concept that would change the way science is organized and, the blogger predicted, would not please everyone in the community. Neuroskeptic would be proven right. Here’s what the mysterious blogger wrote:

“Scientific papers should be submitted to journals for publication before the research has started. The Introduction and the Methods section, detailing what you plan to do and why, would then get peer reviewed. The rest of the paper would obviously be a blank at this stage. Anonymous experts would have a chance to critique the methods and rationale.

If the paper’s accepted, you then do the research, get the results, and write the Results and Discussion section of the paper. The journal is then required to publish the final paper, assuming that you kept to the original plan. The Introduction and primary Methods would be fixed — you can’t change them once the data come in.

You can do additional stuff and run additional analyses all you like, but they’ll be marked as secondary, which of course is what they are. Publication would therefore be based on the scientific merits of the experiment, the importance of the question and the quality of the methods, not the “interestingness” of the results. If you want a paper in Nature, it needs to be a great idea, not a lucky shot.”

This was the concept of registered reports.

A radical concept which, as Chambers would later find out, had been around since the fifties. But despite being common sense, it was far from common practice. The idea stuck with Chambers.

In the meantime, Chambers was making friends among colleagues who had also been questioning current research practice, particularly in psychology. One of them is Daniël Lakens, an experimental psychologist and methodologist at Eindhoven University of Technology in the Netherlands. Lakens is known to be outspoken and critical, and has even been dubbed a crusader of open science, alongside Chris Chambers. During his PhD project, it struck him that scientists did not choose their methods using a sophisticated rationale, but simply because they were told to do so. For instance, one of the things he had never learned was how to determine systematically, on the basis of calculations, how large a sample should be. “Why did no one tell me that? We usually just did something.”

Another colleague Chambers got in contact with was Jelte Wicherts, who was about to move from the University of Amsterdam to Tilburg University, where he established his own group studying statistical and methodological problems in the social sciences.

Among the PhD students working in Wicherts’ lab was Chris Hartgerink, who had been drawn into science by none other than Diederik Stapel and had embarked on a PhD project developing methods to hunt down fraudsters like his former role model. But in the meantime, he had become disillusioned by all the problems that were not caused by outright fraud.

The solution Hartgerink proposes is radical transparency. According to Hartgerink, there is no place for trust within the scientific community: every little detail should be available for scrutiny. Increasingly, scientific journals and institutes are demanding that scientists publish their raw data, materials, code and procedures online when they publish their results, so that others have the opportunity to check them and use them for further, exploratory research. ‘It is so logical,’ he told me. ‘Science must be verifiable. I understand that sometimes things cannot be shared, but there must be a good reason for that.’

Another talent in this research group is Michèle Nuijten. Among other things, she developed Statcheck, a method for filtering statistical errors out of scientific articles, and she studies ways to make review studies more reliable. ‘It is important that we continue to conduct research into how science can improve,’ says Nuijten. ‘At one point there was a lot of argument, but not a crumb of evidence to underpin it.’
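To give a sense of the core idea behind a tool like Statcheck, here is a minimal sketch; the real tool is an R package with far more careful parsing, and the regular expression, tolerance and function name below are my own illustrative assumptions. It extracts an APA-style test report from a manuscript, recomputes the p-value from the reported statistic and degrees of freedom, and flags mismatches.

```python
import re
from scipy import stats

def check_t_tests(text, tolerance=0.005):
    """Find reports like 't(28) = 2.20, p = .036' and check whether the
    reported p-value matches the one implied by the t-statistic and df."""
    pattern = r"t\((\d+)\)\s*=\s*([\d.]+),\s*p\s*=\s*(\.\d+)"
    for df, t_value, p_reported in re.findall(pattern, text):
        # Two-sided p-value implied by the reported t and df
        p_computed = 2 * stats.t.sf(float(t_value), int(df))
        ok = abs(p_computed - float(p_reported)) < tolerance
        print(f"t({df}) = {t_value}: reported p = {p_reported}, "
              f"recomputed p = {p_computed:.3f} -> "
              f"{'OK' if ok else 'INCONSISTENT'}")

check_t_tests("The effect was significant, t(28) = 2.20, p = .036.")
check_t_tests("We found a clear effect, t(28) = 2.20, p = .012.")  # flagged
```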

A pioneering country
The Netherlands is a pioneering country, Chris Chambers realized. And it is not only radical youngsters in psychology who are making a difference. In 2013, four professors started an initiative aimed at improving science: Science in Transition. The driving force behind Science in Transition is Frank Miedema, former HIV researcher and now vice dean at the University Medical Center Utrecht.

One focus of Science in Transition is the perverse incentives in scientific publishing. Modesty and realism are not the qualities scientists are rewarded for, at least according to Joeri Tijdink, a psychiatrist and meta-researcher at the Vrije Universiteit in Amsterdam. In a recent article in PLoS ONE he listed personality characteristics of scientists. According to him, many researchers suffer from what he calls, tongue in cheek, Publiphilia Impactfactorius: an obsession with scoring top publications.

Journals are rewarding bad practice by favoring the publication of results that are considered to be positive, novel, clear and eye-catching. In many life sciences, negative results, complicated results, or attempts to replicate previous studies never make it into the scientific record. Instead they occupy a vast unpublished file drawer.

This influences how results are written up. The text should give a factual representation of the findings, but research that Tijdink carried out with colleagues in Utrecht shows that articles have come to read more and more like advertising brochures. They discovered that over the past forty years, words such as ‘novel’ and ‘outstanding’ have become up to four times more common in article abstracts than they were in the seventies. ‘That rhetoric seeps through to the public, and along the way it is often exaggerated and simplified, while nuance is needed,’ says Tijdink.

So how should the incentives be changed? Many scholars are trying to find solutions. One example is the Centre for Science and Technology Studies (CWTS) at Leiden University, where an entire research group works on improving the indicators of quality and impact that are monitored and on which scientists are held accountable.

One important example is the Hirsch index, or H-index, which was introduced in 2005 by the physicist Jorge Hirsch and measures the so-called citation impact of a researcher. The H-index is the largest number N such that N of an author’s articles have each been cited at least N times. Thus, if a researcher has seven publications cited respectively 40, 27, 13, 12, 9, 6 and 5 times, his or her H-index is 6. If a researcher has 45 publications cited at least 45 times, his or her H-index is 45.
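In code, the computation is straightforward; this small sketch (the function name is my own) reproduces the two examples above:

```python
def h_index(citations):
    """The Hirsch index: the largest N such that N of the author's
    papers have each been cited at least N times."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([40, 27, 13, 12, 9, 6, 5]))  # -> 6
print(h_index([45] * 45))                  # -> 45
```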

University directors and managers are fond of such indices. They count in university rankings and are useful for selecting researchers for their labs. But there is also a lot of criticism of these indices and rankings. This ‘indicator logic’ is often bad for science, says Sarah de Rijcke, a professor at the CWTS who researches the influence of these indicators.

That is why in 2015 De Rijcke and her colleagues presented ten principles for using the indicators in a way that rewards quality and encourages good behavior: the Leiden Manifesto. Among other things, they argued for using indicators as an aid in policy and selection, not as a decisive factor. They also put emphasis on adapting indicators to the mission of the institute or research group: someone who does fundamental, academic research has different goals than someone who tries to solve social problems.

They also pleaded for making a distinction between disciplines when using the H-index. Researchers working in fields where less is published and cited have, on average, a lower H-index. And last but not least: researchers, administrators and policy makers need to realize that indicators not only measure behavior, but also direct it. They can lead to undesirable practices. Therefore they should be re-evaluated every now and then.

Start of a revolution

In the meantime, Chambers had started his own effort to improve science. He discussed the idea of registered reports with his colleagues, but did not really know where to start. In 2012, the editor in chief of the journal Cortex asked him to join his editorial board. Soon after, Chambers proposed adding an option to split the review process into two stages: one to assess the study idea and methods, and a second to assess whether the authors did what they said they would do. Once a study was accepted in stage one, the results would not be weighed in stage two as a criterion for publication. He argued this would eliminate a lot of problems, mainly publication bias, confirmation bias and reviewer bias.

He did not expect to be applauded from the start. Actually, he even expected to be sacked. He knew it was a risk to introduce registered reports, because although he believed in the concept, he wasn’t sure whether he was overlooking details or perverse side effects.

But the editor in chief decided to go for it, despite resistance among his own editors. In May 2013, the option to preregister studies for publication in Cortex was launched. Chambers tried to convince editors of other journals to follow. Two other journals launched the concept alongside Cortex, but things moved too slowly. Some of his colleagues witnessed quiet resistance to pre-registration from other journals. These outlets, among other things, feared that agreeing to publish papers before seeing the data could lock them into publishing negative results or other findings conventionally regarded as ‘boring’.

Together with Marcus Munafò of the University of Bristol, Chambers decided they had to go big on this. They wrote an open letter to the Guardian, signed by over 80 editors who supported the concept of pre-registration. The Guardian’s editors wondered: is this really important? It seemed quite niche to them. But they were proven wrong. In the summer of 2013, the letter was published. ‘In retrospect, that really drove it,’ says Chambers.

Not that everybody was happy. Actually, there was quite a lot of pushback as well, both online and at conferences where Chambers shared his experience. Chambers still remembers a conference in France where an eminent professor walked to the microphone and shouted: ‘You are killing science!’ Nobody responded.

Many experienced researchers are afraid that pre-registering their work will stifle research. ‘Scientists like to think of worst cases: am I going to have to preregister everything? Will I need to preregister this sandwich before I eat it? So much dark and no light.’

Chambers disagrees: exploratory analyses will still be possible, but will be labeled as such.

What happened then was the academic equivalent of cold calling. Occasionally he would visit journal editors; more often he would have intensive email contact with them. Some of them feared that once a study was accepted, the authors would no longer put any effort into doing a good job. Chambers explained that there would be quality checks built into the protocol. Anyone failing to meet these standards would run the risk of not getting accepted in stage two of the review process.

Chambers got his message into many departments by not announcing himself as a speaker who would introduce his colleagues to the concept of registered reports, but as one presenting his scientific work. Halfway through his presentation, he would switch: ‘And here is something I would like to share with you.’

And boy, did the comments come thick and fast. After a while, he adjusted his presentation: when he gave a talk in London in 2015, 75 percent of it was a Q&A with himself. By the time he got to the end, few questions remained.

Once you take away the misconceptions, people can see what it is really about, Chambers figured out. But still, things were moving too slowly, he found. He decided to write a book, published in 2017: The Seven Deadly Sins of Psychology, in which he sums up what is wrong and how it can be reformed. The book is a must-read, and not just for scholars in psychology: many of the practices it describes are not so much about psychological science as about the psychology of science. The book includes the Q&A list about registered reports. I will not reproduce it all here, but recommend reading the book.

Many of the solutions proposed in the book are already being implemented. Indeed, so it seems, scientists are succeeding in improving their field by making it more rigorous and transparent.

Young people especially are really open to this, says Anne Scheel, a German psychologist who moved to the research group of pioneer Daniël Lakens at Eindhoven University of Technology because there was more room for open science than at her German faculty. According to Scheel, improving her field is the only morally correct thing to do. ‘We can no longer justify working in the old, questionable way.’

And speaking for myself as a science journalist: my colleagues and I have become more critical of research methodology, and both the authors of ‘breakthrough papers’ and their colleagues seem to mention more often that replication will have to show how robust a new finding is.

Is there a reproducibility crisis?
It is an exciting time, says Dan Quintana. Two years ago he started a podcast with his colleague James Heathers: Everything Hertz. Initially their topic was physiology, but they soon shifted to research methods and open science. It’s fun to see how it has evolved, says Quintana. ‘The great things were getting off the ground around that time.’

In 70 episodes so far, they’ve discussed scientific journals, transparency, clinical trials and preprints. Of course, Chris Chambers features in an episode as well.

But in many other fields, things are moving more slowly. And there is resistance as well. Some of it is expected: the winners of the current system have no interest in changing it. Others question the whole concept of a reproducibility crisis. One of them, Daniele Fanelli, used to publish about what was wrong with science, but changed his mind and expressed his new position in PNAS. As he told me in an interview: ‘If science were really in such a deplorable state, I would write it down, but it isn’t, and so I am trying to tell the truth about the evidence we have.’

According to Fanelli, the current shift to open science is not necessary because science is broken; it is simply an adjustment to the age of the internet. He adds that the negative image of science resulting from all the attention paid to the ‘reproducibility crisis’ is unnecessary and can even be damaging.

Counterfeit narrative
What narrative of science should we be presenting, wonders Kathleen Hall Jamieson, professor of communication at the University of Pennsylvania, in the same edition of PNAS. She argues we need less talk of counterfeits and ‘science is broken’, and more about the quest for discovery: science delivering answers to important questions. But isn’t that what the powerful PR machine of science is delivering day in, day out? This is a dilemma for both scientists and science journalists.

But to a certain extent, they seem to have a point. The pessimistic narrative can be abused: recently, the National Association of Scholars, a conservative think tank, published a report using the crisis in psychology to claim that climate change is a hoax. And the EPA, now led by a Trump-appointed, pro-industry director, uses its ‘transparency rule’ to benefit industry.

Another problem is that while all these developments seem promising, they are still marginal. The fact that psychologists and some life scientists are showing the way means that other fields are lagging behind. But despite the pushback, things really do seem to be changing, and yes, we seem to be entering a new era.

Uri Simonsohn recently published a review article entitled ‘Psychology’s Renaissance’. Dan Quintana of the Everything Hertz podcast sees it as well. Yes, we’re in a bubble of open-science adepts, but look at the popularity of the podcast: an average of 2,000 listeners per episode, and still growing. At this very moment, the whole community is in Grand Rapids for a meeting of the Society for the Improvement of Psychological Science. Why not set up such a society for every discipline?

And what about registered reports? He is optimistic there too. The community is getting behind it more and more. At first, registered reports were a niche concept; people would sometimes come up to him asking: do you know of this concept of registered reports? Things are moving in the right direction. He can see it in the number of journals offering the format (over 120) and the number of papers registered.

Prominent critics such as John Ioannidis warn that concepts such as registered reports should be as evidence-based as the science they are trying to improve, so they should be studied and optimized before being introduced widely. This is already happening, showing, among other things, that pre-registering leads to an increase in ‘null findings’, which suggests a reduction of publication bias.

Everyone, Chambers included, is well aware that registered reports should not become universal, because the concept is not suitable for all types of research. ‘There will always be a place for exploratory research,’ says Chambers.

So it’s too early to sit back and relax, says Chambers: ‘I’m just trying to push this as far as I can now that it’s moving. If you don’t, it will die of attrition. It is like pushing a boat up a really fast river that is flowing the other way. If you don’t keep pushing, it is gone, so you’ve got to maintain momentum. The nice thing is that over the last few years we’re seeing this growing community of open science people, and we’re all pushing together. Eventually the river will change direction.’


Jop de Vrieze

Freelance science writer based in Amsterdam, writing for Science Magazine, New Scientist and Dutch media. Using science to critically reflect on society and vice versa.