P-value

“The matrix of Absurdity and Nonsense” we are in

IP Yadev
Clinical Epidemiology and Research
Dec 15, 2020

--

I was attending a Zoom session where someone was teaching P-values. I thought that this time I might hear a correct interpretation — that he would finally set right the nonsense promulgated among us by those without formal training in statistics. The alarming truth is that these researchers really believe the nonsense they teach. The misconceptions about the fundamentals are so ingrained that they can explain these absurdities with conviction, in simple language, with picturesque examples.

But I was wrong. It was just one of those innumerable occasions on which I hear such utter absurdities. There was a difference this time, however: it came from a statistician. He certainly knows the correct interpretation; he was just trying to explain the meaning to doctors in simple language.

This reminds me of the quote by Greenland.

“Statistics made easy is code for statistics done wrong.”

Nevertheless, there was a difference between him and the others. Unlike those without formal training, he knew the concepts clearly but failed in the process of simplifying them for us. Statistics taught by non-statisticians most often turns out to be simplified nonsense: beautiful and profound statistical concepts and interpretations downgraded to absurdities.

The question remains: should we be taught the correct concepts, in rather less simple language, by those with formal training in statistics, or the wrong and absurd interpretations, explained in simple language, by those without it?

Over the last twenty years I have heard a lot of nonsense related to the interpretation of P-values. The interesting aspect is that most of these blunders are explained in simple, convincing language interspersed with ample examples. The funny twist was a comment by a statistician friend. She said, in a lighter vein, that after constantly hearing the wrong interpretations, and the applause for them, she was beginning to doubt whether everything she had been taught in the past was wrong.

A few absurdities we often hear…

“If P = 0.05, it means there is a 5% probability that the difference we observed is due to chance”

“If the P value for the null hypothesis is 0.08, there is an 8% probability that chance alone produced the association”

“If a test of the null hypothesis gave P = 0.01, the null hypothesis has only a 1% chance of being true”

“There is 1 in 20 chance that it is due to chance”

“A significant test result (P less than 0.05) means that the test hypothesis is false or should be rejected”

“It is the probability of making a mistake”

“P-values are error rate”

“A significant p-value means that the null hypothesis is false.”

“P = .05 means that the null hypothesis is false with a probability of 0.95.”

“The p-value is the probability that the null hypothesis is true” (the prosecutor’s fallacy)

“P > .05 is evidence in favor of the null hypothesis”

“P-value is the probability of type I error”

“what is observed is due to chance”

“The p-value is the probability that the results I got occurred by chance”

“A nonsignificant difference (eg, P >.05) means there is no difference between groups”.

“P = .05 means that we have observed data that would occur only 5% of the time under the null hypothesis.”

“P = .05 means that if you reject the null hypothesis, the probability of a type I error is only 5%.”

“P = .05 means there is only a 5% chance that this result is a ‘false positive.’”

“P = .05 means there is a 95% chance that the effect is true.”

“If you have observed a significant finding, the probability that you have made a Type 1 error (a false positive) is 5%.”

“One minus the p-value is the probability that the effect will replicate when repeated.”

P > .05 indicates that the effect size is small.

All of these are utter nonsense.
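As a contrast to the list above, here is a minimal sketch of what a p-value actually is. The example is my own (a fair-coin null hypothesis, not anything from the talk): the p-value is the probability, computed assuming the null is true, of data at least as extreme as what was observed — and the 5% error rate belongs to the rejection procedure in the long run, not to any single result.

```python
import random
from math import comb

def binom_two_sided_p(k, n, p0=0.5):
    """Exact two-sided p-value for k successes in n trials under a
    binomial null: sum the probabilities of every outcome that is no
    more likely than the one observed ("minimum-likelihood" method)."""
    probs = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    return sum(pi for pi in probs if pi <= probs[k] + 1e-12)

# Observed: 60 heads in 100 flips of a supposedly fair coin.
# The p-value answers: IF the coin is fair, how often would we see a
# result at least this extreme? It says nothing about P(coin is fair).
p = binom_two_sided_p(60, 100)
print(round(p, 4))  # about 0.057

# Under a TRUE null, rejecting at p < .05 is wrong roughly 5% of the
# time in the long run (slightly less here, since the binomial is
# discrete) — a property of the procedure, not of one p-value.
random.seed(1)
trials = 2000
rejections = 0
for _ in range(trials):
    k = sum(random.random() < 0.5 for _ in range(100))  # fair coin
    if binom_two_sided_p(k, 100) < 0.05:
        rejections += 1
print(rejections / trials)
```

Note that nothing in the computation involves the probability that the null hypothesis itself is true; turning P(data | null) into P(null | data) would require a prior, which is exactly the prosecutor’s fallacy quoted above.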

To continue…
