Why testing positive for a disease may not mean you are sick. A visualization of Bayes' Theorem and conditional probability.
Bayes' Theorem describes the probability of an event happening, taking into account the conditions that can affect it. For example: 1 in 250 people are HIV+, and the probability of testing positive if you are healthy is around 1%. Then the probability of actually being HIV+ if you test positive is only about 30%. That seems counter-intuitive. The intuition is that there are so many more healthy people than sick ones that most of the positive results are false positives. But I'll explain Bayes' Theorem with drawings.
Let's use the following example, where 1 in 10 people are sick. We could write p(Sick | Total population) = the probability of being sick given that you study the whole population = 0.1. But when you study the whole population you just write p(Sick).
To simplify the example we assume that we know which ones are sick and which ones are healthy, but in a real test you don’t know that information. Now we test everybody for the disease:
The number of positive results among the sick population, #(Positive | Sick), is 9. These people are the true positives, a value that is known for tests:
#(Positive | Sick) = 9
p(Positive | Sick) = 9/#(Sick) = 9/10 = True Positive Rate
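In code, the counts from the figure give the true positive rate directly (a minimal sketch; the 100-person population and the counts are the ones from the drawings):

```python
# Counts from the figure: 100 people, 10 of them sick, 9 of those test positive.
n_sick = 10
n_positive_and_sick = 9  # #(Positive | Sick), the true positives

# p(Positive | Sick) = #(Positive | Sick) / #(Sick)
true_positive_rate = n_positive_and_sick / n_sick
print(true_positive_rate)  # 0.9
```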
Now the interesting question, what is the probability of being sick if you test positive? (in math: p(Sick | Positive))
In the figure above we have all the information, so we can count the sick people among the positive results and say that the probability of being sick if you tested positive is 9/18 = 50%. However, in real life you only know that 18/100 have tested positive. To know that 50% of those are false positives you can use Bayes' Theorem, but we will derive it here. All the information we know is:
As a reminder, we want to calculate the number of people inside the square, #(Sick | Positive).
The intuition is that if we know that there are 10 sick people (#(Sick)) and the true positive rate is 0.9, then #(Sick | Positive) = 9.
For the probability, we divide by the studied population (the positive results) to get:
But we don't know exactly how many people are sick, only the probability, so we divide both parts of the fraction by #(Total) and get the probabilities of being sick and of testing positive.
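Written out in symbols (using the #(·) counts defined above), dividing both parts by #(Total) turns counts into probabilities:

```latex
p(\text{Sick} \mid \text{Positive})
  = \frac{\#(\text{Sick} \mid \text{Positive})}{\#(\text{Positive})}
  = \frac{p(\text{Positive} \mid \text{Sick}) \cdot \#(\text{Sick}) \,/\, \#(\text{Total})}
         {\#(\text{Positive}) \,/\, \#(\text{Total})}
  = \frac{p(\text{Positive} \mid \text{Sick}) \; p(\text{Sick})}{p(\text{Positive})}
```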
And you have successfully derived Bayes' Theorem!
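Plugging the toy example's numbers into the theorem recovers the 50% we counted from the figure (a small sketch; the three input probabilities are the ones stated above):

```python
# Toy example: 1 in 10 people are sick, 18 of 100 test positive.
p_sick = 0.1            # p(Sick)
tpr = 0.9               # p(Positive | Sick), the true positive rate
p_positive = 18 / 100   # p(Positive)

# Bayes' Theorem: p(Sick | Positive) = p(Positive | Sick) * p(Sick) / p(Positive)
p_sick_given_positive = tpr * p_sick / p_positive
print(p_sick_given_positive)  # 0.5 -> half of the positives are true positives
```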
Now let's use a real example: HIV. 1 in 250 people have the virus, and the test is positive for 99% of the sick people and 1% of the healthy people.
We have that p(Positive) = (0.99*1 + 0.01*249)/250 = 0.01392.
Only about 1 in 3 positives are real. That's the reason why every positive result is re-tested several times.
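The HIV numbers can be checked the same way; this sketch computes p(Positive) with the law of total probability and then applies Bayes' Theorem:

```python
# HIV example: prevalence 1/250, sensitivity 99%, false positive rate 1%.
p_sick = 1 / 250
p_healthy = 1 - p_sick
p_pos_given_sick = 0.99      # p(Positive | Sick)
p_pos_given_healthy = 0.01   # p(Positive | Healthy)

# Law of total probability: positives come from both the sick and the healthy.
p_positive = p_pos_given_sick * p_sick + p_pos_given_healthy * p_healthy

# Bayes' Theorem
p_sick_given_positive = p_pos_given_sick * p_sick / p_positive
print(round(p_positive, 5))             # 0.01392
print(round(p_sick_given_positive, 3))  # 0.284 -> roughly 1 in 3
```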
Other uses of Bayes
Let's use another example. Imagine you have a collection of books by five authors (Cervantes, King, Carrol, Brown and Follet). Each book is written predominantly in one verb tense, and you know the probability that a given author uses a specific verb tense.
You take a random book from the collection and it's written in the present tense. Which author wrote it?
The probability that Cervantes has written it is:
Sometimes you don't know p(Present), i.e., the fraction of books in your collection written in the present tense. However, since p(Present) is constant across all the authors, you can still compare them:
This ratio is called the Bayes factor, and values above 5 are considered substantial evidence, above 10 strong, and above 100 decisive.
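The comparison can be sketched in a few lines. Note that the p(Present | author) values below are made up for illustration; the post's own table has the real ones, and a uniform prior over the five authors is assumed:

```python
# Hypothetical values of p(Present | author); replace with the post's table.
p_present_given_author = {
    "Cervantes": 0.1,
    "King": 0.4,
    "Carrol": 0.05,
    "Brown": 0.2,
    "Follet": 0.25,
}
p_author = 1 / 5  # uniform prior: each author equally likely a priori

# Unnormalized posterior: p(author | Present) is proportional to
# p(Present | author) * p(author); p(Present) cancels when comparing authors.
scores = {a: p * p_author for a, p in p_present_given_author.items()}
best = max(scores, key=scores.get)

# Bayes factor between two candidates, e.g. King vs. Cervantes:
bayes_factor = scores["King"] / scores["Cervantes"]
print(best, bayes_factor)  # King 4.0 -> not yet "substantial" evidence (< 5)
```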