P-value and Other Misinterpretations

Jennifer Williams
Human Systems Data
Mar 14, 2017 · 3 min read

Social psychology and other sciences have come under scrutiny in recent years as fraudulent research has come to light. Dutch professor Diederik Stapel fabricated data and was eventually turned in by his own students, but not before he had published in over 55 reputable journals. When research is published in a reputable outlet, there tends to be someone who wants to try to replicate the findings.

In research, publications are usually looking for significant studies with a p-value below 0.05. Data can be reported as statistically significant when researchers repeatedly perform comparisons on the data or exclude data from their reports. Gelman's (2016) article noted that noise typically includes measurement error.
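The problem with repeated comparisons can be made concrete with a small simulation. This is an illustrative sketch, not from the article: every variable below is pure noise with no real effect, yet running many comparisons still produces some "significant" p-values at the 0.05 threshold by chance alone.

```python
# Sketch: repeated comparisons on pure noise yield chance "significance".
import random
import math

random.seed(1)

def t_test_p(sample_a, sample_b):
    """Two-sample test p-value using a normal approximation (large n)."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    se = math.sqrt(var_a / n_a + var_b / n_b)
    z = (mean_a - mean_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 100 comparisons where the null hypothesis is true by construction
false_positives = 0
for _ in range(100):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if t_test_p(a, b) < 0.05:
        false_positives += 1

print(false_positives)  # roughly 5 of 100 comparisons, by chance alone
```

With a 0.05 threshold we expect about 5% false positives even when nothing real is going on, which is why cherry-picking the "significant" comparisons misleads readers.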

Some scientific journals, like Basic and Applied Social Psychology, have decided they will no longer publish p-values (Siegfried, 2015); effect sizes and descriptive statistics are published for studies instead. Most research journals will publish only experiments whose results are "statistically significant," which has contributed to the rise in retractions in some publications due to misreported data.

The hypothesis being evaluated, which may be called the test hypothesis, is assessed through the observed significance level, or p-value. The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the test hypothesis is true; it does not tell us the probability that the hypothesis itself is true. The smaller our p-value is, the stronger the evidence against our null hypothesis.
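The definition above can be made concrete with a worked example. This is a sketch of my own, not from the article: an exact binomial p-value, computed as the probability, assuming the null hypothesis of a fair coin is true, of seeing data at least as extreme as what was observed.

```python
# Sketch: exact two-sided binomial p-value for a coin-flip experiment.
from math import comb

def two_sided_binomial_p(heads, flips, p_null=0.5):
    """Probability of a result as or more extreme than `heads` under the null."""
    def pmf(k):
        return comb(flips, k) * p_null**k * (1 - p_null)**(flips - k)
    observed = pmf(heads)
    # Sum the probability of every outcome no more likely than the observed one
    return sum(pmf(k) for k in range(flips + 1) if pmf(k) <= observed)

# 60 heads in 100 flips of a supposedly fair coin
p = two_sided_binomial_p(60, 100)
print(round(p, 3))  # about 0.057: above 0.05, yet hardly proof the coin is fair
```

Note what the result does and does not say: it is the chance of data this extreme if the coin were fair, not the chance that the coin is fair.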

Sample size is crucial for interpreting a p-value. The larger the sample size, the more precise our estimates become, but precision alone does not make a hypothesis true or false. A 95% confidence interval means that if we repeated the sampling many times, about 95% of the intervals constructed this way would contain the true parameter, as shown in the graph below. As with p-values, we need to be careful to avoid misinterpreting confidence intervals.

Confidence Interval
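The repeated-sampling reading of a confidence interval can be checked by simulation. This is an illustrative sketch, not from the article, with a made-up true mean: across many repeated samples, roughly 95% of the intervals computed from them contain the true parameter.

```python
# Sketch: coverage of 95% confidence intervals under repeated sampling.
import random
import math

random.seed(42)
TRUE_MEAN = 10.0  # hypothetical true population mean for the simulation

covered = 0
trials = 1000
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(100)]
    mean = sum(sample) / len(sample)
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (len(sample) - 1))
    margin = 1.96 * sd / math.sqrt(len(sample))  # normal-approximation 95% CI
    if mean - margin <= TRUE_MEAN <= mean + margin:
        covered += 1

print(covered / trials)  # close to 0.95
```

The probability statement is about the procedure, not about any single interval: once an interval is computed, the true mean is either in it or not.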

It is important that research publications report all the findings of a study and use descriptive statistics to describe every analysis that was used. Per Gelman (2016), social media has been instrumental in pointing out errors. Susan Fiske, a professor of psychology at Princeton University, has said she is annoyed that critics on social media can publish criticism of research without going through peer review.

In conclusion, what matters is honesty and being willing to admit to mistakes or to the insignificance of a study. When we manipulate the findings of a study just to make them "statistically significant" for publishing purposes, we are hurting ourselves. The researcher is the one who must live with that reality!

References:

Confidence interval graph — http://www.stat.yale.edu/Courses/1997-98/101/confint.htm

Gelman, A. (2016). Statistical modeling, causal inference, and social science. andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed.

Siegfried, T. (2015). P value ban: small step for journal, giant leap for science. Science News. https://www.sciencenews.org/blog/context/p-value-ban-small-step-journal-giant-leap-science.
