The Fallacy of Scientific Consensus
Why we must listen to the science, not the scientists.
I have been involved in a number of arguments about scientific consensus, and the most recent debate has convinced me to write about the topic in depth. The idea of scientific consensus has been popular since reports that 97% of all papers offering a position on climate change assert that climate change is happening. I am not going to address the validity of theories on climate change, but it is important to point out a number of issues with relying on consensus among scientists. First, peer-reviewed publishing is dominated by a handful of authors. Consider the following statement from the abstract of “Estimates of the Continuously Publishing Core in the Scientific Workforce”:
Using the entire Scopus database, we estimated that there are 15,153,100 publishing scientists (distinct author identifiers) in the period 1996–2011. However, only 150,608 (<1%) of them have published something in each and every year in this 16-year period (uninterrupted, continuous presence [UCP] in the literature). This small core of scientists with UCP are far more cited than others, and they account for 41.7% of all papers in the same period and 87.1% of all papers with >1000 citations in the same period.
Basically, the work of roughly 1% of all publishing scientists accounts for 41.7% of papers published between 1996 and 2011. From this information alone, we know that any analysis of published articles is going to be skewed heavily toward the biases of that 1% of the publishing community.
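To put the skew in perspective, here is a back-of-the-envelope calculation, using only the figures quoted in the abstract above, of how much more of the literature the continuously publishing core accounts for per author than everyone else:

```python
# Figures quoted from "Estimates of the Continuously Publishing Core
# in the Scientific Workforce" (Scopus database, 1996-2011).
core_authors = 150_608          # scientists publishing in every year (UCP)
total_authors = 15_153_100      # all distinct publishing scientists
core_paper_share = 0.417        # fraction of all papers by the core
rest_paper_share = 1 - core_paper_share

# Average per-author share of the literature in each group.
core_per_author = core_paper_share / core_authors
rest_per_author = rest_paper_share / (total_authors - core_authors)

print(f"core is {core_authors / total_authors:.2%} of authors")
print(f"per-author output ratio: {core_per_author / rest_per_author:.0f}x")
```

On these numbers, the average author in the core accounts for roughly seventy times as many papers as the average author outside it, which is why any survey of the published literature weights that small group so heavily.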
Aside from the “Academic 1%,” there are a number of other biases expressed within the academic community. According to “Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data,” the “publish or perish” phenomenon, in which academics must publish in order to keep their jobs, seems to produce a bias toward “positive results.” Studies that are inconclusive or inconsistent with the theory being tested are set aside, and attention goes to papers with “positive results.” This is doubly problematic, since the goal in science is really to try to falsify a theory, not to support it.
Neil deGrasse Tyson, in a tweet, stated that “anyone who thinks scientists like agreeing with one another has never attended a scientific conference.” This is fairly accurate. However, in at least some cases scientists also do not like expressing views that stray far from consensus. One of the most iconic examples of this situation was the feud between Newton and Hooke. While most people who have taken a science class are quite familiar with Sir Isaac Newton, fewer are aware of the once prominent Robert Hooke, who served as the Royal Society’s Curator of Experiments and later as its Secretary before Newton became its President. Hooke viewed light as a wave; Newton viewed it as a particle. While light is now viewed as both wave and particle, the debate was contentious at the time. Newton actually waited until after Hooke died before publishing some of his work on the topic (his Opticks appeared in 1704, the year after Hooke’s death). That is how strong the fear of “retribution” for bucking the trend was.
Now, none of this, taken together, falsifies the claim that scientific consensus can measure the robustness of a theory. But it is certainly enough to question why people have so much faith in consensus. If I wanted to show that consensus was not a valid measure, I would need to provide statistically significant data. If someone else wanted to show that it was a valid measure, they would need to provide evidence as well.
There is also a philosophical argument against the validity of consensus, even among experts. It has to do with why appeal to authority can be reasonable at all. Appeal to authority is often seen as a fallacy, but it is only a fallacy when the person cited is not reasonably considered an authority on the topic. For instance, if you argued that the Earth was flat because your parents told you it was, that would be an appeal to false authority. If you argued that the Earth was round because a NASA astronaut who had been to space said so, that would be a reasonable appeal to authority.
But what makes an appeal to authority valid at all? It has to do with expertise, or at least the assumption of expertise. When an appeal to authority is used validly, the authority is assumed to have full knowledge of the topic and to honestly admit any gaps in the available information. Because of this assumed completeness of knowledge, an appeal to a second authority would add no new information.
Even without this assumption, there are problems with relying on consensus. In response to my discussion, Jeremiah Traeger asked:
Under a Bayesian prediction, if nine out of ten dentists tell you that you have a cavity, are you more or less likely to have a cavity? If nine out of ten doctors tell you that you have cancer, do you seek treatment? If a survey shows that 97 out of 100 actively publishing climate scientists state that global warming is occurring, what do you take from it? — A Tippling Philosopher
These questions are all interesting, but there is no single answer. The interesting point is that the author did not seem to care; the question alone seemed to act as some kind of justification in his mind. But to use Bayesian inference, we need to make a number of assumptions. We need to know something about how knowledge is distributed among the individuals. Does each individual have knowledge that the others do not? How much? Even if there is a difference, it may be so small that after a few experts are put together, an additional expert adds almost no new knowledge. So merely referencing Bayesian inference, as if it somehow provides justification, is a non-starter. We need more information.
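To make this concrete, here is a minimal sketch, with made-up numbers for the prior and for each expert’s accuracy, of how the answer to the “nine out of ten dentists” question swings on exactly the assumption discussed above. If the dentists examined you independently, nine agreeing opinions are strong evidence; if they all relied on the same X-ray, the nine extra opinions add nothing beyond the first.

```python
def posterior(prior, accuracy, yes, no):
    """Bayesian update for `yes` experts saying "cavity" and `no`
    dissenting, assuming each expert judges independently and is
    right with probability `accuracy` either way."""
    like_cavity = accuracy ** yes * (1 - accuracy) ** no
    like_healthy = (1 - accuracy) ** yes * accuracy ** no
    joint = prior * like_cavity
    return joint / (joint + (1 - prior) * like_healthy)

prior = 0.10      # hypothetical base rate of a cavity
accuracy = 0.80   # hypothetical accuracy of each dentist

# Ten independent examinations, nine agreeing:
independent = posterior(prior, accuracy, yes=9, no=1)

# Ten dentists reading the same X-ray: one piece of evidence, not ten.
correlated = posterior(prior, accuracy, yes=1, no=0)

print(f"independent: {independent:.4f}")
print(f"correlated:  {correlated:.4f}")
```

With these numbers the independent case yields a posterior above 0.99, while the fully correlated case stays near 0.31. The same headline, “nine out of ten dentists,” supports very different conclusions depending on how knowledge is distributed among the experts, which is precisely the information a bare appeal to Bayesian inference leaves out.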
I find it disturbing how many people take scientific consensus at face value. The idea that we can measure the robustness of a theory by how many scientists support it is interesting, but it is untested. The position is itself a falsifiable statement and therefore, like all potential theories, should be tested before any claim on the topic is made. Until then, we have only one option: look at the data and the theory, and see how the theory matches up with actual observations. This can be done by reviewing meta-analyses.
When I first wrote this piece, I did not include an example of consensus without evidence. I have been researching the efficacy of B. pertussis vaccines for some time. While there is a great deal of consensus on the efficacy of these vaccines, the scientific data are not actually in line with that consensus. Studies that show efficacy conflate efficacy at preventing disease with efficacy at preventing infection. Multiple studies have found evidence against the view that B. pertussis vaccines actually help prevent the spread of infection. Yet these studies are largely ignored by the medical community, and there has been no attempt to confirm them. Indeed, it seems that pertussis could be nearly epidemic yet go undetected, because the majority of infections are asymptomatic. To see why, consider my analysis of a Chinese study, which could be replicated in the United States.
This discussion has been going back and forth for a while now, and I find the responses I get interesting. One of the most interesting is a classic shifting of the burden of proof, from someone who is supposedly well versed in philosophical discussion. The following is from a rebuttal of what I have said.
Tu Quoque, my friend
One of the biggest criticisms against SA is that his own criticisms can be levelled directly back at him. He states things like:
“I’m still waiting for empirical data consistent with the assertion that the percentage of scientists that agree with a theory is a valid measure of the robustness of a theory.”
The thing is, I can reverse this:
I’m still waiting for empirical data consistent with the assertion that the percentage of scientists that agree with a theory is NOT a valid measure of the robustness of a theory.
He keeps ramming home this notion that we need an empirically evidenced piece of research to show that consensus is a good indicator of “robustness”, but fails to see that for the negation of this, he also needs to offer evidence. Because what is really happening here is pro-consensus is asserting something, and “anti”-consensus is asserting in rebuttal.
This falsification charge can also be levelled at his claim, too. You cannot confirm either claim, only falsify them. Which one gets falsified?
Now, Tippling’s statement would be reasonable if I had actually said that scientific consensus was not a valid measure of the robustness of a theory. However, I never made such a claim. I responded to the claim that it was a reasonable measure and demanded that the burden of proof be satisfied. Tippling’s statement is thus just an attempt to make me defend my dismissal of an unevidenced claim.
Since initially writing this article, I have come across many other demands for relying on scientific consensus, and I have also come across another interesting problem: academia has largely become a system of doctrine. The cult-like nature of academia can be seen in the answers to a Stack Exchange question and the response to my own answer. The question posed was whether to publish a result that contradicts a mathematical result that has already been published. My answer is “yes.” The order in which academia received the results does not change the validity of either result; only the result matters. If no error is obvious after review, then both results should be regarded as equally sound. To say otherwise (1) introduces doctrine and (2) suggests that the probability of a result being correct is somehow determined by the order in which it was received. While such an idea is absurd, it seems to be prominent, and it sways consensus: regardless of its validity, any result inconsistent with consensus is placed under more scrutiny, and therefore requires more evidence, than it would if it had been published first and become the accepted “truth.”
Originally published at spiritualanthropologist.info on September 14, 2017.