Here’s a dangerous meme that I keep running into:
“A doctor’s job is basically to look at symptoms, make a diagnosis, then prescribe treatment. Medicine is just a big decision tree — ripe for optimization and automation.”
Of course, it’s mostly my friends in tech saying these things.
It’ll come as no surprise that the medical community rejects this viewpoint. The prevailing view in healthcare is that tech will streamline some business processes and create some new sensors and devices, but never “disrupt” the industry as a whole. When I ask why, the answer usually comes down to “human factors.”
Let’s talk human factors, because I’m mostly with the doctors on this one: “medicine = diagnosis + treatment” is a limited, doomed-to-failure approach to healthcare. Techies who think this need some fresh perspective.
The Straw Man
Here’s the limited view of medicine that I’m arguing against:
1. Diagnosis = f(Symptoms)
2. Treatment = f(Diagnosis)
3. Success = f(Diagnosis, Treatment)
I can tell when someone subscribes to this model, because they always gravitate to the same playbook:
- Gather data about symptoms
- Use machine learning to minimize the error rate on all your f’s
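The playbook above can be sketched in a few lines of code. This is a deliberately toy model — the symptom names, rules, and treatment plans are invented for illustration — but it makes the structure of the straw man concrete: one function per f, chained together.

```python
# A minimal sketch of the straw-man model. Everything here is hypothetical:
# the symptoms, rules, and plans are made up for illustration.

def diagnose(symptoms):
    """Diagnosis = f(Symptoms): a literal decision tree over observed symptoms."""
    if "fever" in symptoms:
        if "stiff_neck" in symptoms:
            return "possible meningitis"
        return "possible flu"
    if "chest_pain" in symptoms:
        return "possible cardiac event"
    return "unknown"

def treat(diagnosis):
    """Treatment = f(Diagnosis): a lookup keyed only on the diagnosis."""
    plans = {
        "possible meningitis": "emergency referral",
        "possible flu": "rest and fluids",
        "possible cardiac event": "emergency referral",
    }
    return plans.get(diagnosis, "further evaluation")

# The tree is only as good as its inputs. Drop one symptom and the
# "optimal" decision flips from an emergency to bed rest:
print(treat(diagnose({"fever", "stiff_neck"})))  # emergency referral
print(treat(diagnose({"fever"})))                # rest and fluids
```

In this framing, "use machine learning" just means fitting diagnose and treat from data instead of writing them by hand — which is exactly why the model's real weakness sits upstream, in what reaches the symptoms argument at all.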
This model is just right enough that I keep thinking about it, and so wrong that I grind my teeth every time I do.
There’s way too much here to cover in a single post, so I’m going to focus on the very top of the data funnel: Access to Symptoms.
Access to Symptoms
In the real world, “symptoms” don’t arrive on a silver platter served up by an EMR (electronic medical record). They’re the product of a messy human process of observation and deduction.
1. Diagnosis = f(Available symptoms)
2. Available symptoms = f(Physical contact, trust, history, context)
- Many symptoms require physical contact with the patient. Doctors ask “Does it hurt when I touch here?” all the time. Can your AutoDoc 5000 do the same? If not, you’re at a serious disadvantage for diagnosis. After all, Clean data > More data > Better algorithms.
- Trust matters. I spent the last year leading development of algorithms/data systems at Aspire Health, a fast-growing provider of in-home nursing services. Despite the fact that our nurses were very busy, our lead physician always counseled them to “spend the first 10 minutes talking about the pictures on the mantle.” He knew that our ability to understand and treat was sharply constrained by relationships.
- It’s not always easy to discern what’s a symptom. Remember House? He’s the Sherlock Holmes of medicine. Like Sherlock, his superpower is the ability to “separate the relevant and important facts from the unimportant or accidental.” Many episodes of House turn on a deliberate search for contextual information: breaking into the patient’s apartment, or DNA tests from the patient’s parents. House is fiction, but perceptive fiction: context matters for diagnosis.
For these reasons and many others, “slap a decision tree on it” is the wrong model for health+tech. Yes, clinical decision-making needs improvement — in some areas, desperately — but tech’s ability to improve decision-making is gated on upstream access to symptoms.
Access to symptoms depends on physical contact, relationships, and broader context, not to mention data interoperability that doesn’t suck.
At this stage of the game, the real challenges for clinical decision support are data problems, UX problems, and service delivery problems, not pure machine learning problems.
More to come…