The Ambiguity of the p-value

Pouria Salehi
Published in Human Systems Data
Mar 15, 2017

This week’s article, video, and two posts (1 and 2) were very interesting to me, because I had previously thought about the key question the article raises: what if testing the null hypothesis involves more than the size of the p-value? For instance, for some reason we might get a small p-value even though the null hypothesis is true, or we might run into a large p-value even though the null hypothesis is false.
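To make this concrete, here is a minimal simulation sketch (not from the original post or the readings) using NumPy and SciPy’s independent-samples t-test. The sample sizes, the effect size of 0.3, and the 0.05 threshold are illustrative assumptions on my part: even when the null hypothesis is true, about 5% of experiments still produce p < 0.05, and when a small real effect exists, many experiments still produce p > 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n = 10_000, 30

# Case 1: H0 is true -- both groups come from the same distribution.
# Roughly 5% of experiments still yield p < 0.05 (false positives).
p_h0_true = [
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
    for _ in range(n_sims)
]
print("P(p < 0.05 | H0 true)  ~", np.mean(np.array(p_h0_true) < 0.05))

# Case 2: H0 is false -- a small real effect (assumed mean shift of 0.3).
# Many experiments still yield p > 0.05 (misses).
p_h0_false = [
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.3, 1, n)).pvalue
    for _ in range(n_sims)
]
print("P(p > 0.05 | H0 false) ~", np.mean(np.array(p_h0_false) > 0.05))
```

So a single small or large p-value, on its own, does not settle whether the null hypothesis is true or false.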

The video clearly expounds the concept of the p-value: the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. The first article argues that traditional statistics can be counterproductive for research in the human sciences, for example because of noisy data sets, where in social science the noise typically comes from measurement error or variation among people. Finally, the second article tries to shed more light on the replication crisis in psychology by criticizing Fiske’s article about media and science. Fiske concludes that comments should be constructive, private, and as light as possible. However, Andrew, the author of the second blog, disagrees and responds that in order to curb the replication disaster, criticism should take the form of open discussion: polite but public, and extensive, with full explanations and references.

Personally, I admire his notion of open discussion in the media, but it also scares me. When we talk about open discussion and social media, we spontaneously bring voting, public opinion, quantity over quality, and ultimately a kind of democracy into a scientific topic. How, then, are we going to distinguish qualified from unqualified critics? How much can we rely on non-professional critics? What if the majority of their thoughts and feelings are biased by the media they usually watch, read, or listen to? This can be especially problematic in the human sciences, because with even a little personal experience of the issue at hand, perhaps only from having heard about it on the radio, everyone thinks they know enough to comment on it. The problem arises when they expect their opinions to count.

On a related note, I recently received a comment from my advisor about my use of the term “significant” in my paper, which I found completely new and absorbing. Here is what she said: “A modern norm in social science statistics is to avoid using this term (= significant).” She referred me to the post “Do we really need the S-word?”, in which the author, Megan D. Higgs, argues that the meaning of “significant” is no longer clear and has become genuinely ambiguous. As a solution, Higgs suggests avoiding it and replacing it with other words that convey our point more lucidly.

