Bayes Factors for Psychologists

Bayes Factors provide a simple addition to existing analyses, adding extra depth to a study, and with the new JASP statistics package they are easy to calculate.

When I started studying statistics, as part of my undergraduate degree in psychology, it was with a book called Statistics without Maths for Psychology. Although I've always been strong at maths, I found this reassuring; maybe it was something to do with the fish on the cover. For many in my class it came as a bit of a shock: they had come to psychology looking to help people and to understand more about the human mind. Instead they were sitting in front of a computer screen running SPSS while they tried to grasp the basics of probability. I suspect it is a common enough occurrence, with only a minority of psychology students comfortable with research and statistics. On that basis I'm writing this article to talk about Bayes factors in as simple a way as I can.

Lately p values have come in for increased criticism. They serve their purpose, but they can be misleading at times. The simple black and white of the .05 significance cutoff can lead to premature conclusions, especially with small samples or small effect sizes. The xkcd cartoon below is hyperbole, but it makes the point well.

http://xkcd.com/1132/

The Bayesian approach focuses on probabilities rather than hard cutoffs. The Bayes part relates to Bayes' theorem:

P(A|B) = P(B|A) * P(A) / P(B)

The P(A|B) part refers to the probability of event A happening given that event B has happened. P(B|A) refers to the probability of event B happening given that event A has happened. P(A) is the probability of event A happening and P(B) is the probability of event B happening. In the above xkcd comic the frequentist approach is to ask whether the probability of the dice both coming up six by chance is less than 5% (it is: 1/36, about 2.8%) and, since it is, to reject the null hypothesis that the result was down to chance. This simplistic interpretation is one of the reasons statisticians are talking about the importance of statistical power and confidence intervals to add extra context.

The Bayesian approach to working out the probability that the sun has exploded given the machine's result, P(A|B), is to take the chance of the machine giving that result if the sun had exploded, P(B|A), which is 35/36, and multiply it by the chance of the sun exploding, P(A), which is very low; for the example let's call it 1 in 100 million (which is still much higher than reality!). We then divide it all by the chance of the machine giving that result at all, P(B), which is 1/36.

Based on the above, P(A|B) = (35/36 * 1/100M) / (1/36) = 35/100M.
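
For anyone who wants to check the arithmetic, the same calculation takes a few lines of Python (the numbers are the ones assumed in the example above):

    # Bayes' theorem applied to the sun-explosion example.
    p_exploded = 1 / 100_000_000    # P(A): the (deliberately generous) prior
    p_yes_given_exploded = 35 / 36  # P(B|A): the machine tells the truth
    p_yes = 1 / 36                  # P(B): chance of a "yes" result overall

    posterior = p_yes_given_exploded * p_exploded / p_yes
    print(posterior)  # 3.5e-07, i.e. 35 in 100 million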

So based on the machine's result the probability that the sun has gone supernova is 35 times higher than before, but still a very small probability.

When considering research we want to start with a hypothesis, collect data, and based on that data update our belief in the hypothesis. When using Bayes' theorem in this context, P(A) is what we currently believe the probability of the hypothesis to be. P(B|A) is the probability of our data occurring if our hypothesis is true. P(A|B) is the probability of our hypothesis given the data; this updated probability is, in effect, our answer. P(B) is the overall probability of the data we collected. Conveniently, it is the same no matter which hypothesis we are considering, so when we compare hypotheses it cancels out, leaving:

P(A|B) ∝ P(A) * P(B|A)

Bayesian statisticians refer to P(A) as the prior and P(A|B) as the posterior: the prior is the probability of the hypothesis before we do the research, and the posterior is the updated probability of the hypothesis after we integrate the research data.

If we compare two hypotheses, the alternate hypothesis against the null, then the ratio of their likelihoods, P(B|alternate) / P(B|null), is known as the Bayes Factor. This is the ratio we multiply the prior odds by to get the posterior odds. If we start with prior odds of 1:1 we are effectively saying the null and alternate hypotheses are equally likely at the outset. Then if our Bayes factor comes out as 3, the posterior odds of the alternate over the null are 3:1; the alternate hypothesis is three times as likely to be true as the null.
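
To make that concrete, here is the same update written out in Python; the prior odds and the Bayes factor are the illustrative numbers from the paragraph above:

    # Posterior odds = prior odds * Bayes factor.
    prior_odds = 1.0    # 1:1, null and alternate equally likely
    bayes_factor = 3.0  # from our (hypothetical) analysis

    posterior_odds = prior_odds * bayes_factor
    print(posterior_odds)                         # 3.0, i.e. 3:1
    print(posterior_odds / (1 + posterior_odds))  # 0.75 as a probability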

Actually calculating a Bayes Factor is something that can now be done in the new JASP statistics package. The process itself is very simple, with Bayesian versions of t-tests, correlations and ANOVAs, and the results can be reported alongside existing p values. When describing BFs, one convention interprets them as follows:

  • 1 to 3: evidence barely worth mentioning
  • 3 to 10: substantial evidence
  • 10 to 30: strong evidence
  • 30 to 100: very strong evidence
  • over 100: decisive evidence

If the BF is exactly 1 then the null and the alternate are equally likely, 1:1. If the BF is lower than 1 then the null is the more likely of the two. Typically the BF is written with 10 in subscript after it (BF10), indicating that the BF is in favour of the alternate hypothesis. If the result is lower than 1 it can be inverted to show the strength of evidence for the null hypothesis, in which case it is written with 01 in subscript (BF01). See the below images for examples.

Examples from the JASP program
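
JASP handles all of this through a point-and-click interface, so no code is needed. For the curious, though, the sketch below approximates what its default Bayesian t-test computes: the JZS Bayes factor of Rouder et al. (2009) for a one-sample t-test. This is a minimal illustration in Python, not JASP's actual implementation, and the function name jzs_bf10 is my own:

    import numpy as np
    from scipy import integrate

    def jzs_bf10(t, n, r=0.707):
        """JZS Bayes factor (BF10) for a one-sample t-test, following
        Rouder et al. (2009). r is the width of the Cauchy prior on
        effect size; 0.707 matches JASP's default."""
        v = n - 1  # degrees of freedom

        # Likelihood of the data under the null (up to a shared constant).
        null = (1 + t**2 / v) ** (-(v + 1) / 2)

        # Under the alternate, average the likelihood over the prior on g.
        def integrand(g):
            k = 1 + n * g * r**2
            likelihood = k**-0.5 * (1 + t**2 / (k * v)) ** (-(v + 1) / 2)
            prior = (2 * np.pi) ** -0.5 * g**-1.5 * np.exp(-1 / (2 * g))
            return likelihood * prior

        alt, _ = integrate.quad(integrand, 0, np.inf)
        return alt / null

    bf10 = jzs_bf10(t=2.5, n=30)
    print(bf10)      # roughly 2.7: weak evidence for the alternate
    print(1 / bf10)  # the same evidence expressed as BF01, for the null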

Reasons to use Bayes Factors

  • You never reject or accept the null in error; you only report how likely your hypothesis is compared with the null.
  • Bayes Factors are easy to explain to a lay person with a basic grasp of betting odds.
  • They show the strength of the evidence for or against a hypothesis.
  • They can be included alongside normal frequentist analyses.
  • With JASP they are easy to calculate.

At this point I should note that I'm very new to this, and I eagerly invite corrections on any of the information included above. One question that I haven't been able to fully answer, for example, is whether the data need to meet assumptions of normality. I do strongly feel that this is an easy addition to any paper. Evers & Lakens (2014) provide an excellent example of adding BFs alongside existing analyses, and their paper even goes on to note a case where the BF and the p value contradict each other.
