Redefining statistical significance: the statistical arguments
Richard D. Morey

Very interesting post, thank you!

However, there is one thing that confuses me: it seems that what you call a Bayes factor (q / (1 - q), where q is the posterior probability P(X < 100), say) is not a Bayes factor in the usual definition, i.e. a ratio of marginal likelihoods or, equivalently, the Savage-Dickey ratio of posterior to prior density at the evaluated point.
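To make the distinction concrete, here is a minimal sketch in a conjugate normal model with known sigma (the numbers, prior, and threshold of 100 are purely illustrative, not taken from your post): it computes both the posterior odds q / (1 - q) for the one-sided hypothesis and the Savage-Dickey Bayes factor for the point null, and the two come out as quite different quantities.

```python
# Hypothetical illustration (numbers are made up, not from the post):
# conjugate normal model with known sigma and a proper normal prior on mu.
import numpy as np
from scipy.stats import norm

n, ybar, sigma = 25, 96.0, 10.0      # illustrative data summary
mu0, tau = 100.0, 20.0               # proper normal prior on mu

# conjugate posterior for mu
post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
post_mean = post_var * (n * ybar / sigma**2 + mu0 / tau**2)
post_sd = np.sqrt(post_var)

# (1) posterior odds of the one-sided hypothesis mu < 100
q = norm.cdf(100.0, post_mean, post_sd)        # P(mu < 100 | data)
posterior_odds = q / (1.0 - q)

# (2) Savage-Dickey Bayes factor for the point null mu = 100
#     (posterior density over prior density at the null value)
bf01_savage_dickey = norm.pdf(100.0, post_mean, post_sd) / norm.pdf(100.0, mu0, tau)

print(posterior_odds, bf01_savage_dickey)      # generally two different numbers
```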

It is easy to see that the two are not equivalent in general if we consider what happens in the case of an improper flat prior:

The former “Bayes factor” (in favor of X > 100) would still be well defined and, since our prior is completely flat, would roughly be p / (1 - p), where p is the one-sided p-value from the frequentist test in the direction suggested by the data (X < 100). We could also make this BF “two-sided” by multiplying by 2, but that is not relevant to my argument here. In your initial example, it would roughly be BF = 1 / 14 in the one-sided case and BF = 1 / 7 in the two-sided case, if we agree to just multiply by 2.
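As a quick numerical check of that flat-prior correspondence (again with made-up numbers): under an improper flat prior on the mean with known sigma, the posterior is N(ybar, sigma^2 / n), the posterior probability that the mean lies above 100 equals the one-sided p-value, and so the posterior odds in favor of X > 100 are exactly p / (1 - p) in this model.

```python
# Illustrative check only; the numbers are the same made-up summary as above.
import numpy as np
from scipy.stats import norm

n, ybar, sigma = 25, 96.0, 10.0
se = sigma / np.sqrt(n)

# frequentist one-sided p-value, testing in the direction suggested by the data (mu < 100)
z = (ybar - 100.0) / se
p_one_sided = norm.cdf(z)

# posterior under a flat prior: mu | data ~ N(ybar, se^2)
post_prob_mu_above_100 = 1.0 - norm.cdf(100.0, ybar, se)

print(p_one_sided, post_prob_mu_above_100)     # identical in this model
print(p_one_sided / (1.0 - p_one_sided))       # the "Bayes factor" in favor of mu > 100
```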

The Bayes factor for H0 in its usual definition as a ratio of marginal likelihoods would be BF = infinity, since for a completely flat prior the posterior at every point is infinitely higher than the prior at that point.
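To illustrate why this Bayes factor blows up, here is a sketch in the same illustrative setup as above, with a proper normal prior made progressively wider: the Savage-Dickey ratio in favor of the point null mu = 100 keeps growing as the prior flattens, diverging in the improper limit.

```python
# Same made-up numbers as above; only the prior width tau changes.
import numpy as np
from scipy.stats import norm

n, ybar, sigma = 25, 96.0, 10.0
mu0 = 100.0

for tau in [1.0, 10.0, 100.0, 1000.0, 10000.0]:
    post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
    post_mean = post_var * (n * ybar / sigma**2 + mu0 / tau**2)
    bf01 = norm.pdf(100.0, post_mean, np.sqrt(post_var)) / norm.pdf(100.0, mu0, tau)
    print(tau, bf01)   # BF01 in favor of H0 keeps increasing with the prior width
```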

To put it differently, your BF comes from the numerator of Bayes’ theorem, while the standard BF comes from the denominator (the marginal likelihood).
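In symbols (notation mine, just to make sure we mean the same thing): within a single model, q comes from the normalized numerator of Bayes’ theorem, while the standard Bayes factor compares the denominators, i.e. the marginal likelihoods of two competing models.

```latex
% Notation is mine, not from the post.
% Posterior within one model (numerator of Bayes' theorem, normalized):
\[
  p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)},
  \qquad
  q = P(\theta < 100 \mid y) = \int_{-\infty}^{100} p(\theta \mid y)\, d\theta .
\]
% Standard Bayes factor (ratio of marginal likelihoods, i.e. of denominators):
\[
  \mathrm{BF}_{01} = \frac{p(y \mid H_0)}{p(y \mid H_1)} .
\]
```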

Could it be that these two very different definitions of a Bayes factor are driving some of the differences between your standpoint and the standpoint in the RSS paper?

Apologies if you have already explained that somewhere in the post and I just overlooked it.
