A Turning Point for P-Values and Science/Policy?

On 7 March 2016, the American Statistical Association released its “Statement on Statistical Significance and P-Values.” As someone who works at the nexus of science and policy, as well as a statistics practitioner, I welcomed this statement. My hope is that the over-reliance on p-values as gatekeepers for publication, graduation, funding, career advancement, and prestige will now begin to end (yes, p-values really do have sway over all of these things).

A Call To Action:

  1. Read the ASA’s “Statement on Statistical Significance and P-Values” and share it with your colleagues
  2. Begin using an alternative to the p-value, such as Bayesian methods, Bayes Factors, or something else appropriate for your situation
  3. If you are a professor or mentor, share the ASA’s Statement with your students, postdocs, fellows, mentees, etc., and encourage them to stop using p-values and start using appropriate alternatives

ASA’s Statement TL;DR:

The ASA’s statement has 6 main takeaways (quoted from the press release):

1. P-values can indicate how incompatible the data are with a specified statistical model.
2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
4. Proper inference requires full reporting and transparency.
5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
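Point 5 is easy to see in practice: with a large enough sample, even a negligible effect produces a tiny p-value. The sketch below (my own illustration, not from the ASA statement) uses SciPy's one-sample t-test on simulated data with a true mean of 0.01 — practically zero — and a million observations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A negligible true effect (mean 0.01, sd 1.0) with a huge sample size
x = rng.normal(loc=0.01, scale=1.0, size=1_000_000)

t, p = stats.ttest_1samp(x, popmean=0.0)
print(f"estimated effect = {x.mean():.4f}, p-value = {p:.2e}")
```

The p-value comes out far below any conventional threshold even though the effect itself is trivially small — "statistically significant" and "important" are not the same thing.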

My 2¢:

As a Bayesian, I was especially delighted by this passage from the Statement:

In view of the prevalent misuses of and misconceptions concerning p-values, some statisticians prefer to supplement or even replace p-values with other approaches. These include methods that emphasize estimation over testing, such as confidence, credibility, or prediction intervals; Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors; and other approaches such as decision-theoretic modeling and false discovery rates. All these measures and approaches rely on further assumptions, but they may more directly address the size of an effect (and its associated uncertainty) or whether the hypothesis is correct.

The use of Bayes Factors, although somewhat controversial among Bayesians, is an approach that most biologists I collaborate with understand quite quickly and appreciate. The idea of posterior probabilities and posterior odds tends to confuse them a bit (honestly, posterior odds tend to be more easily understood than posterior probabilities in my experience). But they like the idea of the Bayes Factor, and especially the idea that there are “established” guidelines for how much weight to give to a specific Bayes Factor.
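To make the idea concrete, here is a minimal sketch (my own toy example, not from the ASA statement) of a Bayes Factor for a coin-flipping problem: H0 says the success probability is exactly 0.5, while H1 puts a uniform Beta(1, 1) prior on it. Under H1 the marginal likelihood of observing k successes in n trials works out to 1/(n + 1), so the Bayes Factor has a simple closed form:

```python
from math import comb

def bayes_factor_01(k: int, n: int) -> float:
    """Bayes Factor BF01 for H0: theta = 0.5 vs H1: theta ~ Uniform(0, 1)."""
    # Marginal likelihood under H0: binomial probability at theta = 0.5
    m0 = comb(n, k) * 0.5 ** n
    # Marginal likelihood under H1: integral of the binomial likelihood
    # over a uniform prior, which equals 1 / (n + 1)
    m1 = 1.0 / (n + 1)
    return m0 / m1

# 60 heads in 100 flips: a two-sided p-value hovers near 0.057,
# yet BF01 is close to 1, i.e. the data barely favor either hypothesis
print(bayes_factor_01(60, 100))
```

On conventional scales for interpreting Bayes Factors (e.g. Jeffreys-style guidelines), a value near 1 is "not worth more than a bare mention" — a usefully blunt contrast with the near-significant p-value for the same data.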

I’m quite pleased with the ASA’s announcement. I suggest that scientists read the Statement and consult with biostatisticians or other statistically trained scientists.

Originally posted on my other blog on 12 March 2016.
