Balls and Bayesian Stats
First, a bit of backstory. The origins of Bayesian statistics stem from a thought experiment Bayes wrote down that wasn't even discovered until after his death. Bayes imagined himself with his back to a perfectly square, perfectly flat table. An assistant would throw a ball and it would land somewhere on the table. Then he would ask that they throw another ball and report where that ball landed relative to the first. Was it to the left or to the right? Was it closer to him or farther away? And this would be repeated many, many times. Bayes believed he could mark all these down and, like a game of Minesweeper, get more and more confident about where the first ball was. This became the basis of Bayesian statistics, and it exemplifies the main aspect that separates it from Frequentist statistics: the idea of using prior information to update and influence your interpretation of the data.
If I’m going to be honest before we go any further, Kruschke (2010) was not able to get this across to me very well. In fact, if you had trouble understanding it like I did, I recommend reading the first six pages or so of http://www.stata.com/manuals14/bayesintro.pdf, which provides a significantly better (in my opinion) explanation of Bayesian statistics.
Kruschke (2010), like a lot of our readings, is not a big fan of p values, and his explanation is sound. He notes that any given data set could have arisen from any number of experiments. And since a p value depends on the null hypothesis and the way the experiment was intended to be run, there are in fact as many p values associated with a given data set as there are ways the experiment could have been performed. This makes p values inherently unreliable, since they depend on the scientist's intentions. And because a p value only tells you how likely data as extreme as yours would be, assuming the null hypothesis is true, it tells you nothing about how likely the null hypothesis is given your data. This is where Bayesian statistics differs.
Bayesian statistics uses prior knowledge to work out how likely a hypothesis is given your data. The best examples of this come from false-positive tests. Kruschke (2010) provides an example of this with a drug addict. Since it's presumed you read that, I will use an example from the Veritasium YouTube channel. This is because they provide on-screen examples of the math involved, which made it easier for me to understand.
Imagine you test positive for some terrible disease, one found in only about 0.1% of the population. The test has a 99% rate of correctly identifying diseased individuals and only gives false positives 1% of the time. So what's the likelihood you have the disease?
Most individuals will instinctively assume (and Frequentists will back up) that you have at least a 95% chance of having the disease. But that's wrong. It's in fact way lower. The real answer is about 9%. Imagine 1,000 people are tested. Based on the known population frequency, only one of them actually has the disease. But because the test puts out false positives 1% of the time, about 11 people will test positive (10 false positives and 1 true positive). So out of 11 people who test positive, only 1 actually has the disease. This is calculated using Bayes' theorem, which uses prior knowledge about how likely your hypothesis is in the real world to influence the number crunching.
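The disease-test arithmetic above can be written directly as Bayes' theorem. This is a minimal sketch using the numbers from the example (0.1% base rate, 99% sensitivity, 1% false-positive rate); the variable names are mine, not from the video:

```python
# Bayes' theorem for the disease-test example:
#   P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
prior = 0.001          # base rate: 0.1% of the population has the disease
sensitivity = 0.99     # P(positive | disease)
false_positive = 0.01  # P(positive | no disease)

# Total probability of testing positive (law of total probability):
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of actually having the disease:
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.1%}")  # about 9%
```

Note that the tiny prior (0.001) is what drags the answer down to roughly 9%, matching the 1-in-11 count above.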
So examples are fun, but what have we learned? Well, the "plain English" version is that Bayesian statistics uses prior knowledge to help interpret current numbers. Instead of assuming that the null hypothesis is true (as Frequentists do), it aims to determine how likely a hypothesis is given your data, using prior probabilities to create a "posterior distribution" (posterior meaning behind or after) of possible values. This treats science as a fluid machine, always updating and getting closer and closer to the "true value", just like each ball thrown gets you closer and closer to knowing the location of that first ball.
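That updating process can be simulated for Bayes's own table experiment. The sketch below is my own illustration, not from the reading: I assume the first ball sits at some hidden fraction of the table's width, each new throw lands uniformly at random, and the assistant only reports "left" or "right". With a uniform Beta(1, 1) prior, each report just bumps one of the Beta parameters, and the posterior mean homes in on the true position:

```python
import random

random.seed(42)
true_pos = 0.3       # hidden position of the first ball (fraction of table width) - assumed
alpha, beta_ = 1, 1  # Beta(1, 1) = uniform prior: the ball could be anywhere

for throw in range(500):
    new_ball = random.random()   # assistant's throw, uniform across the table
    if new_ball < true_pos:
        alpha += 1               # reported "left of the first ball"
    else:
        beta_ += 1               # reported "right of the first ball"

# Posterior is Beta(alpha, beta_); its mean is our best guess of the position.
mean = alpha / (alpha + beta_)
print(f"after 500 throws, estimated position = {mean:.3f}")
```

With more throws the posterior tightens around `true_pos`, which is exactly the "closer and closer to the true value" idea in the paragraph above.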
Kruschke, J. K. (2010). What to believe: Bayesian methods for data analysis. Trends in cognitive sciences, 14(7), 293–300.
Introduction to Bayesian analysis. Retrieved from http://www.stata.com/manuals14/bayesintro.pdf.
Veritasium. The Bayesian Trap [Video]. Retrieved from https://www.youtube.com/watch?v=R13BD8qKeTg&t=338s.