The AI-Empowered Doctor

Simon Hudson
Published in Cloud&Co.
Feb 15, 2017

[This story was originally published on CloudRaker.com/magazine]

Employment is a sensitive subject. Tell any student that the degree they’re three years into earning will be worthless before they can pay off their student loans, and all of a sudden the dinner party has lost its color. Job automation is no joke, but it also isn’t a guarantee; maybe AI isn’t all it’s cracked up to be.

More likely is a compromise between AI taking over everything and AI failing outright: AI functioning as just another tool.

Benjamin Thelonious Fels is the CEO of macro-eyes, a company on the front lines of AI’s deployment in medicine. He and his small team are working out the complicated problem of making use of our scattered medical data. As product manager, he decides the minute details of how the software works with doctors and patients; his daily decisions negotiate the relationship between human and machine. Our conversation below makes it clear that we have a long way to go before doctors are replaced.

How are you convincing doctors to give up part of their jobs to a machine?

Ceding control to machines is a problem I first started working on when I was trading derivatives. I was building algorithms that would trade while I wasn’t there. Algorithms do things that we don’t understand. You need to trust that even if it’s going to make decisions which confuse you, on some basis, there’s sort of a structural logic which you agree with. And that comes down to the interface, in terms of how you see and understand those decisions and that logic.

We want the doctor to be able to observe decisions that a machine makes and understand why the machine identified certain patients or events as similar. Then, she can start to interact with higher levels of insight or abstraction. The doctor can, to some degree, jump to the future. She can see what is likely to happen to the patient.

Where is the “intelligence” in the technology you’re working on, versus it being just a very sophisticated search engine?

What we’re creating is an ability to find patients or medical events that are similar across hundreds of dimensions [blood type, medical history, etc.], and where any one or many of those dimensions could be missing. When we think deeply about searching for similarity, we realize that what we’re really thinking about is the common essence of the patients or events we’re looking at.

“Similarity”, at a mathematical and conceptual level, is a tricky problem to solve. One of the first things that we built was a way for the doctor to say, “Okay, this match you showed me is particularly good or bad”; and then the machine “learns” based upon the input of the physician.
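To make that concrete, here is a minimal sketch, not macro-eyes’ actual method: a weighted similarity that compares only the dimensions both patients have, plus a feedback step that nudges per-dimension weights when a physician marks a match as good or bad. The function names, the Gaussian kernel, and the update rule are all illustrative assumptions.

```python
import numpy as np

def similarity(a, b, weights):
    """Weighted similarity over dimensions present in BOTH records.
    Missing values are np.nan; those dimensions are simply skipped."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if not mask.any():
        return 0.0  # no shared dimensions to compare
    # Gaussian kernel per shared dimension, averaged with learned weights
    diffs = (a[mask] - b[mask]) ** 2
    w = weights[mask]
    return float(np.sum(w * np.exp(-diffs)) / np.sum(w))

def learn_from_feedback(weights, a, b, good_match, lr=0.1):
    """Nudge weights on the shared dimensions up or down depending on
    whether the physician labeled this match good or bad."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    agreement = np.exp(-(a[mask] - b[mask]) ** 2)  # 1 = identical, near 0 = far apart
    direction = 1.0 if good_match else -1.0
    weights[mask] += lr * direction * agreement
    np.clip(weights, 1e-3, None, out=weights)  # keep every weight positive
    return weights

# Example: dimension 1 is missing for one patient, dimension 2 for the other,
# so only dimension 0 is compared.
a = np.array([0.2, np.nan, 0.7])
b = np.array([0.3, 0.5, np.nan])
w = np.ones(3)
score = similarity(a, b, w)
w = learn_from_feedback(w, a, b, good_match=False)  # physician says: bad match
```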

It seems like the critical element to keeping trust is to keep the doctor feeling like they’re the decision maker.

Yes, you’re right — and they are the decision maker. I have no interest in building some kind of robot doctor, because I don’t think that’s actually useful for a number of reasons. I don’t even think the technology or the infrastructure is really there.

This is where I’m highly critical of AI. I want to emphasize that I’m building for an environment in which an expert interacts with an intelligent system; and that expert is bringing knowledge and insight that the system does not have. I’m not saying data; I’m saying information, which is the next step up.

We observe things and know things which are not captured in data; and that’s because we do not live in a world yet where every particle of reality is monitored. Thank God that’s not the case.

Physicians always talk about gait, the way a patient walks or how they show pain. This is one example of all-important information in the form of clues that are hard to get from just looking at data, at least the data we have now.

What matters is whether the data reflect the ground truth. Then, it’s a question of how to bring into our intelligent machine what we humans know, see or understand and that a machine does not. For me, that is the Holy Grail. A paintbrush plus a human using that paintbrush, in an expert fashion, does a lot.

What’s a big limit to AI’s progress in healthcare?

If you have bad data and you feed it to the world’s smartest machine, the machine is going to spit out something that is gibberish. That’s also where we have to get back to this interaction between human and machine. There are a bunch of companies doing interesting work around this. Even getting a machine to clean data by itself is a very tricky task, because the machine needs to understand what data is — what’s correct data, what’s incorrect data, what variables or values could there be and when is something an anomaly that makes sense or an anomaly that’s entered incorrectly.
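As a toy illustration of that last distinction, here is one possible sketch, purely an assumption about how such a check might look: values outside hard physiological limits are treated as entry errors, while values that are merely statistical outliers against the patient’s own history get flagged for a human rather than discarded. The field names and limits are made up for the example and are not clinical guidance.

```python
import numpy as np

# Illustrative hard physiological limits (assumed values): anything outside
# these bounds cannot be real and is most likely a data-entry error.
HARD_LIMITS = {"heart_rate_bpm": (20, 300), "temp_c": (25, 45)}

def triage_value(field, value, history):
    """Classify one reading as 'ok', 'plausible_anomaly' (rare but possibly
    real), or 'entry_error' (outside what is physically possible)."""
    lo, hi = HARD_LIMITS[field]
    if not (lo <= value <= hi):
        return "entry_error"
    # Statistical outlier vs. this patient's own history: flag, don't discard.
    mu, sigma = np.mean(history), np.std(history)
    if sigma > 0 and abs(value - mu) > 3 * sigma:
        return "plausible_anomaly"  # surprising, so a human should judge it
    return "ok"

# Example: 2,500 bpm is impossible (typo); 180 bpm is rare but could be real.
print(triage_value("heart_rate_bpm", 2500, [72, 75, 70]))  # entry_error
print(triage_value("heart_rate_bpm", 180, [72, 75, 70]))   # plausible_anomaly
```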

The analogy I always give is that we have a customer whose refrigerator breaks, so he calls us on the phone and says, “My refrigerator just broke, but you can’t come see it. Can you fix it?” It’s hard to fix a problem you can’t examine in depth, but working that way is often necessary because our customers’ data contains protected health information.

The really important point I want to make is that data has to reproduce reality. When data fails to reproduce reality, even in very small instances, it is altering reality instead; and if decisions are made based upon data that has altered reality, we stumble into some extremely dangerous scenarios, medically speaking.

Would you say people are over-excited about AI?

My big concern is that there is so much hype about AI and machine learning that when that hype meets industrial and enterprise customers and doesn’t produce results in three seconds, the customers will say, “Screw this. We thought you guys could predict the future in two hours and solve all our problems. Obviously, you can’t, and we aren’t interested anymore.”

Look back 20 or 25 years, to the first boom in machine learning and AI. We basically had a drought afterward because customers of the technology said, “Yeah, we tried it, it didn’t work.” So when you overhype things and downplay the incredible difficulty of getting them to work, you make people think it’s plug-and-play.

Technology, even stuff we run on our computers, has a learning curve — it doesn’t work immediately, right out of the box.

////

Photos by Sarah Ouellet, Cover by Étienne St-Denis
