Introduction to Bayesian Thinking: from Bayes theorem to Bayes networks
Felipe Sanchez

After more thought and some example calculations, I have this further comment:

The More Unlikely The Result, The Less Reliable The Test

Your formula assumes that the more unlikely the event, the more unreliable the test, no matter how accurate the test has been proven to be.

For example, if there is a 1 in 10 chance of an event happening and the test is 99% reliable, then according to the formula the chance that a positive result is accurate is 91.7%.

The test’s accuracy has fallen from 99% to 91.7% simply because of the low probability of the result reported.

If there is a 1 in 100 chance of an event happening and the test is still 99% reliable, then according to this formula the chance that a positive result is accurate drops to only 50%.

If there is a 1 in 1,000 chance of an event happening and the test is still 99% reliable, then according to this formula the chance that a positive result is accurate drops to only 9%.
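For concreteness, here is a small calculation that reproduces those three figures, assuming the formula in question is the standard form of Bayes' theorem with the test's error rate the same in both directions (a false positive and a false negative each 1% of the time):

```python
# Chance the event really occurred given a positive test result,
# assuming Bayes' theorem with the same 1% error rate in both directions.
def posterior(prior, accuracy=0.99):
    true_positive = accuracy * prior               # event happens, test reports it
    false_positive = (1 - accuracy) * (1 - prior)  # event absent, test still reports it
    return true_positive / (true_positive + false_positive)

for prior in (1 / 10, 1 / 100, 1 / 1_000):
    print(f"prior {prior:.4f} -> chance a positive result is correct: {posterior(prior):.1%}")
# prior 0.1000 -> 91.7%
# prior 0.0100 -> 50.0%
# prior 0.0010 ->  9.0%
```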

The assumption of the formula is that the less likely the result, the less accurate the test even though the actual accuracy of the test has not changed.

Another Example

If I put 99,999 white tokens and 1 black token in a lottery bin and a machine blindly picks one token, the odds of picking the black token are 1 in 100,000, or 0.00001.

If 1 time out of 1,000 the machine inaccurately identifies the token it has drawn, the machine’s accuracy in reporting the results is 99.9%.

According to this formula, however, when the machine reports that it has picked the black token, the chance that the black token was actually drawn is only about 1%.

Your formula says that under these circumstances a fair bet on whether or not the reported black token was actually drawn should be priced at roughly 100 to 1 against.

I would suggest that every casino in Las Vegas would be absolutely thrilled to book bets at those odds that the machine didn't really pick the black token.
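Assuming, again, that the formula is plain Bayes' theorem with a symmetric 1-in-1,000 misreport rate, this is the calculation behind that claim:

```python
# Token example: one black token among 100,000, machine misreports 1 time in 1,000.
# Assumes the formula is standard Bayes' theorem with a symmetric error rate.
prior_black = 1 / 100_000   # chance the black token is actually drawn
error_rate = 1 / 1_000      # chance the machine misreports whatever it drew

true_report = (1 - error_rate) * prior_black    # black drawn, reported as black
false_report = error_rate * (1 - prior_black)   # white drawn, misreported as black
p_really_black = true_report / (true_report + false_report)
print(f"P(black actually drawn | machine reports black) = {p_really_black:.2%}")
# -> roughly 0.99%, i.e. about 100 to 1 against the report being right
```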

Asymmetrical Results Destroy The Concept Of “Test Reliability”

On the other hand, according to this formula, while a machine's report that it picked a black token is only about 1% likely to be true, a report that it picked a white token is 99.999998996% likely to be true.

These asymmetrical results render the concept of “test reliability” meaningless. Essentially, the formula claims that a test that yields an expected result is virtually 100% reliable, while the very same test under the very same conditions is only about 1% reliable when it yields an unexpected result.
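Under the same assumptions, conditioning on each of the two possible reports makes the asymmetry explicit:

```python
# Same token example, conditioning on each possible report.
# Assumes standard Bayes' theorem with a symmetric 1-in-1,000 misreport rate.
prior_black = 1 / 100_000
error = 1 / 1_000

# Machine reports black: a true black draw versus a misreported white draw.
p_correct_given_black_report = ((1 - error) * prior_black) / (
    (1 - error) * prior_black + error * (1 - prior_black))

# Machine reports white: a true white draw versus a misreported black draw.
p_correct_given_white_report = ((1 - error) * (1 - prior_black)) / (
    (1 - error) * (1 - prior_black) + error * prior_black)

print(f"report of black is correct with probability {p_correct_given_black_report:.4%}")
print(f"report of white is correct with probability {p_correct_given_white_report:.7%}")
# -> roughly 0.99% versus 99.999999%
```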

Is There Proof That The Formula Accurately Models The Real World?

What is the proof that this formula is accurate?

It would be relatively easy to run a computer program to model the 1-in-100,000 scenario 100,000,000 times and apply a randomized 1-in-1,000 error rate when reporting the results.

Under random chance, the black token will come up about 1,000 times out of 100,000,000 and, at an error rate of 1 in 1,000, I would expect to see it misreported as white about once and correctly reported as black about 999 times. Under your formula, however, only about 1% of the reports of a black token would actually correspond to a black draw.
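One way to write that program (a rough sketch, assuming the misreport rate applies equally to both colors; the trial count matches the scenario above but can be lowered for a quicker run):

```python
import random

# Simulate the proposed experiment: a 1-in-100,000 chance of drawing the black
# token, with the machine misreporting the color 1 time in 1,000.
TRIALS = 100_000_000   # reduce for a quicker run
P_BLACK = 1 / 100_000
ERROR = 1 / 1_000

random.seed(0)
black_reports = 0          # times the machine says "black"
correct_black_reports = 0  # ...and the draw really was black

for _ in range(TRIALS):
    drew_black = random.random() < P_BLACK
    misreported = random.random() < ERROR
    reported_black = drew_black != misreported   # a misreport flips the color
    if reported_black:
        black_reports += 1
        correct_black_reports += drew_black

print(f"reports of a black token: {black_reports}")
print(f"of which correct: {correct_black_reports} "
      f"({correct_black_reports / black_reports:.1%})")
```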

That relatively simple test would provide a good indication of the accuracy or inaccuracy of this formula.

I would expect that only 0.1% of the reports of a black token will be false, not roughly 99% as the formula claims, but the proof would be in the actual results.

Are there actual empirical test results that validate this formula?

Please show me the money.