Science’s shameful secret

At the frontiers of scientific discovery, there is a growing problem. Can we still trust our scientists?

Vic Parsons
7 min read · Apr 28, 2014

Every day, checking; every day, hoping. When the acceptance email from the journal finally arrived, a feeling of elation.

“Getting your first paper published in a major journal is a career milestone, and seeing all those months of work in the lab finally in print is so satisfying. But there’s a pressure, too — I had to get this paper published, because my work needs funding.”

It is this pressure that means she won’t let me use her name, for fear of repercussions. As a young evolutionary biologist, her research is paid for through grants from a research council. There are seven in the UK, which together fund around three billion pounds of scientific research every year with taxpayer money.

Science is everywhere. From the medicines in our cupboards to the laptop on which this is being written, it is scientific research that drives society forwards. We pay for it with our taxes and it is the gateway for our tomorrows, or at least so we are told. Yet we also rely on science in another way, on a much more fundamental level: we trust scientists to check.

To rigorously check their own work and that of others, so that the scientific research that is published is as close to truth as science can ever get. That means researchers must be honest about how they got their results, their experiments must be replicable, and the uncertainties inherent in science must be laid out clearly. It’s telling that the people we trust the most are doctors, yet even this most trusted group is held to account by the General Medical Council (GMC). But who holds scientists to account? They describe their field as self-governing and self-correcting, but is that autonomy something they are simply protecting?

In January, two ‘ground-breaking’ papers from Japanese scientists detailing a new method of making stem cells were published in Nature. Less than 40 days later, there were calls for the papers to be retracted. Corresponding author Teruhiko Wakayama told NHK News, “I have lost faith in the paper. Overall there are now just too many uncertainties about it.”

This is the crisis science is facing, and it is not the first time this has happened.

In 2004, a veterinarian from South Korea called Hwang Woo-Suk published the first of two papers in Science. Claiming a world first, Hwang and his team said they had cloned a human embryo and extracted stem cells from it. A year later, the second paper detailed how they had created human embryonic stem cells which genetically matched specific patients, raising hopes for treatments of degenerative diseases.

But Hwang’s results were fake. In 2006, Science retracted both the papers. Hwang was indicted by the South Korean government for fraud, misuse of state funds and violating the laws of bioethics.

Though a published paper must pass peer review, there are no ‘science police’ on the lookout for fraudulent research. Peer reviewers are other scientists, experts in their field, who use the methodology and the data to judge whether the results are feasible. In the case of Hwang’s papers, the data themselves were fabricated.

In 2009, Hwang was convicted of embezzling research funds and illegally buying human eggs for his research. He made no comment as he left the courthouse.

“Science is just like every other human endeavour — no one likes to be questioned, no one likes to have their dirty laundry aired in public.”

So Ivan Oransky said last month on the BBC Radio 4 programme Inside Science. Oransky and Adam Marcus, managing editor of Anaesthesiology News, run a blog called Retraction Watch which catalogues the retractions of scientific papers and examines claims of irregular research.

“There was and is a value in looking at how science polices itself — which sometimes is not very well at all,” says Marcus. “The number of retractions is increasing independent of the number of publishers.”

Retraction can relate to a variety of problems with research, from scientific misconduct and plagiarism to the most serious of them all, fraud.

“We need to stop fetishising journal articles as the ultimate marker of a scientist’s value,” Marcus says, adding that the entire catalogue of a scientist’s work is important, as are reproducibility and transparency.

While it is irrefutable that fraudulent research will not yield scientific truth, not all published science is true or even right — but this does not make it wrong. The very nature of science is a progression, a testing of the boundaries from one idea to the next with the ‘truth’ ever elusive. In this sense, results can never be truly ‘right’. But, as Retraction Watch perhaps highlights, some can be more wrong than others.

There is something tangible that scientists can use to show their research is credible. If someone else can use their methods and repeat the experiment to yield the same results, the research becomes stronger.

In 2005, PLoS Medicine published a paper by Dr John Ioannidis called ‘Why most published research findings are false’. The problem, Ioannidis found, was that it is simply too difficult to know what the truth is in any research question. Peer review doesn’t help with this, but replicating the results can.

“The frontier of science is noisy and almost always wrong. There is no culture of replication and no incentives to replicate the work of others, and by not rewarding the process of checking work a massive problem is being created,” says Dr Chris Chambers, senior research fellow in cognitive neuroscience at Cardiff University.

The problem is particularly apparent in the life sciences.

“The current system values novelty in findings,” says Chambers. “While particle physics has a culture of replication, this is still a huge problem in life sciences. The pressure to publish, often and in high impact journals, has an exacerbating effect. This doesn’t have to be the way science progresses.”

The Japanese stem cell paper described a simple method — using an acid bath to turn mature mammalian cells back into an embryonic state — of making stem cells without using embryos or human eggs. Following its publication in Nature, several scientists tried to reproduce the results, without success. In this way, science has self-corrected to some degree, with the paper now under question following the concerns raised.

In this way, post-publication peer review — which benefits from many more sets of eyes than pre-publication peer review — can be invaluable. Peter Aldhous, a freelance journalist, says, “It would be a forlorn hope to expect peer review to catch sloppy or fraudulent work. Peer review is needed, but it’s not the gold standard for research.”

Previous retractions have not happened so quickly. Famously, it took The Lancet 12 years to retract Andrew Wakefield’s paper, which created a false crisis of confidence in the MMR vaccine by suggesting a link to autism. The regulatory bodies here lacked teeth, and it took a serious eight-year investigative effort by journalist Brian Deer before the fraud was exposed. The paper was finally retracted after the GMC found Wakefield guilty of breaches of the ethics code and dishonesty.

If a topic is exciting, other scientists will try to replicate the work. “There are relatively few cases where journalists have led the way, though the Brian Deer investigation is one,” Aldhous says. “The broader question is not about peer review, which is inherently flawed, but about whether the scientific paper will be the way we communicate science in the future. Will we end up with a publication based on the original code and data used instead?”

In a strange twist, Nature journalist Richard Van Noorden revealed in February that 120 computer-generated papers, made using the random paper generator SCIgen, had made it into published conference proceedings from major publishers including Springer. Ruth Francis, UK head of communications at Springer, later confirmed that the papers had been peer reviewed before publication.

SCIgen was created by Massachusetts Institute of Technology researchers in 2005 to prove that conferences would accept ‘gibberish’ papers.

Remarkably, this follows a ‘sting’ last year by Science journalist John Bohannon, who submitted 304 versions of a made-up paper to open-access journals across the world. More than half of the papers were accepted, despite having been peer reviewed. Bohannon said that though the paper appeared scientifically credible, it contained grave errors that should have easily identified it as “flawed and unpublishable”.

The publication of these fake, spoof papers should speak for itself. There are problems at every step of the process by which science verifies itself.

Yet the most recent survey of public attitudes to science in the UK (the Ipsos MORI poll) found that 90 per cent of respondents said they trusted university scientists, and 71 per cent thought scientists were honest.

This puts scientists in a much better light than science journalists, who were believed to check the reliability of research they write about a mere 28 per cent of the time. The problem, Chambers says, is the “noisy interface between science and the media.”

“There’s a pressure on researchers to publish frequently, there’s a pressure on press officers to make that research sound as exciting as possible. Then, sometimes, something goes wrong,” he says.

Despite the questions surrounding the way research is verified, there is a distinct lack of public alarm.
