Diagnostic Accuracy Scores: Physicians 84%, Computer Algorithms 51%

A recent study in JAMA Internal Medicine reported on the diagnostic accuracy of physicians vs. computer algorithms. The study compared the performance of one computer symptom checker app to the text-based answers of 234 physicians, 90% of whom were general internists. You can read a summary here.

Twitter commentators had a lot to say about this study. There are many possible interpretations and inferences to be drawn. This post tries to capture some of the main themes:

  1. “Physicians Vastly Outperformed”
  2. Computer Algorithms are Gaining…
  3. …and Will Surpass MDs…but how soon?
  4. There’s a Lot of Room for Improvement
  5. The Devil is in the Details
  6. Better to Collaborate than Compete

1) “Physicians Vastly Outperformed”

Here’s how the authors reported their own findings:

“Physicians vastly outperformed computer algorithms” … “physicians’ superior performance”

I’m picking up on the words “vastly” and “superior”. One might infer that the game is over, the doctors have won resoundingly, and that’s that.

2) Computer Algorithms are Gaining…

Others had a different POV.

3) …and Will Surpass MDs…but how soon?

4) There’s a Lot of Room for Improvement

The authors note that “despite physicians’ superior performance, they provided the incorrect diagnosis in about 15% of cases”.

5) The Devil is in the Details

6) Better to Collaborate than Compete

But not everyone is enamored with physician/computer collaboration.

A Few Personal Takes

Think of the JAMA study as a baseline. The authors describe it as “what we believe to be the first direct comparison of diagnostic accuracy”. While this strikes me as a stretch, you should consider the study a start, not an “end-all”.

The topic is very important and worth putting high on your radar screen. I expect we’ll be seeing many more similar studies over the next few years, especially as new AI and decision support tools become available.

Be careful in generalizing the results. As the authors acknowledge, there are many methodological limitations to their study.

The study is a snapshot: it presents results from one point in time. The more interesting comparison will be watching performance over time.
