AI for Healthcare: The Promise and Challenges (Part II)

A Conversation with Dr. Xavier Amatriain, Co-Founder and CTO of Curai

Clarice Wang
10 min read · Jul 27, 2022

Dr. Amatriain and I discussed COVID-19 and, more broadly, the promise and the challenges of AI for healthcare. We talked about wearable devices, digital twins, privacy, and government regulation. We also ventured into Artificial General Intelligence, machine learning with experts in the loop, federated learning, out-of-band predictions, and many other topics. Dr. Amatriain offered advice for the younger generation interested in entering the field of AI.

This is Part II of a three-part series. See Part I and Part III.

Digital Twins for Healthcare

Digital twins are a fascinating idea. They are digital representations of complex systems such as airplanes, buildings, or even cities. In the medical context, digital twins are virtual representations of the human body and its organs where the effects of drugs and medicine can be studied.

Every person is different, and clinical tests are often insufficient to reveal a person’s overall health. Digital twins allow quicker and more economical assessment of a person’s condition, and of possible responses to medicines, than is possible in a real-life setting. They thus increase the feasibility of personalized, or precision, medicine. With digital twins, doctors are more likely to find the optimal treatment for each individual and to reduce the chances of people taking medication that doesn’t work for them.

While precision medicine and the idea of a digital twin interest Amatriain, he circles back to the issues of accessibility and affordability, emphasizing that “we run the risk of creating really good solutions for the people who can afford them and then leaving everyone else out.” While optimistic about making data measurement in healthcare simpler, Amatriain prioritizes accessibility.

Q: Medication is ineffective for large portions of the population with conditions such as Alzheimer’s and arthritis. With digital twins, we could diagnose individuals specifically and provide optimal treatments. What are your thoughts on this topic, and given the fact that a lot of people don’t have access to modern technology, is this something that could happen in the near future?

Yes, that’s an area that I am really interested in. It actually connects very much with my past experience in personalization at Netflix, and even before that when I was a researcher. There’s this notion of precision medicine of how we can get exactly the right treatment for the right person, how we can have an accurate representation of everyone and have your digital twin be your representation of you as a medical entity.

I do think that’s extremely interesting and it is something that we should push towards, but again, my biggest concern is how do we do that in a way that we don’t make the social differences for healthcare even worse. The last thing I would like for this to create is for people that have access to the digital twins and the people that have money are the ones that get good healthcare, and then everyone else doesn’t. I think that’s something that we need to really keep in mind and that’s why I push accessibility being a key aspect of everything we do because otherwise we run the risk of creating really good solutions for the people who can afford them and then leaving everyone else out, and that’s something that we’ve seen that socially doesn’t create the right incentives and we need to avoid that.

So, again, personalized decisions, data, and representations of people through their data and digital mediums I think are really really important and really interesting, but for each of those steps we wonder, how do we get them to everyone and not just to the few that can pay for them.

AI with Human-in-the-Loop

Despite the rapid advances of machine learning, AI still has a long way to go before it can conduct the types of reasoning and inference needed for many tasks. AI with a human in the loop could be a feasible strategy to enhance automation while simultaneously providing quality assurance.

“We consider AI another member of the team, rather than just a chatbot or just a tool that is used in some specific part of the process.”

At Curai, Amatriain advocates AI with experts in the loop. When asked whether AI’s role in diagnosis is limited to collecting basic patient information through a chatbot, Amatriain said that AI has a far more extensive set of abilities, from building hypotheses to suggesting diagnoses, that qualifies it as a member of the team.

Q: Curai is advocating an approach known as AI with experts-in-the-loop. Is the use of AI currently limited to talking with patients through a chatbot to collect basic information, or is AI already involved deeply in diagnosing diseases? What will the future look like?

No, the AI is not only a chatbot that is gathering information from the patient; we talk about our AI as another member of the medical team.

We have different members of the medical team: we have doctors, we have medical assistants, we have clinical associates, and we have the AI in the middle of all of that, and the AI sometimes will talk to the patient and extract information that they’re saying and ask questions and gather information about some of the symptoms and some of the findings around the patient.

However, it will also build its own hypothesis and differential diagnosis and then, it will help the doctors take it from there. It will provide information saying, ‘hey, I’ve talked to Clarice and I think that they might have this particular condition; but, there are 3 other things that they could have. Now, what do you want to do? Do you want to ask this question or do you want to ask these other questions?’. So they will also, for example, enhance the doctors by giving them not only potential diagnoses and conditions but also suggesting questions that could be interesting to ask as a next step. And, for example, if the doctor says, ‘you know what? I really want to rule out COVID in this case, what is a good question I could ask?’. Well the AI will say, ‘well to rule out COVID these are the 2 or 3 questions that I would suggest.’ That is again, acting as somebody who’s helping the doctor throughout the whole medical process all the way to treatment and the treatment plan and all of that. So we consider AI another member of the team, rather than just a chatbot or just a tool that is used in some specific part of the process.
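The division of labor Amatriain describes — rank a differential diagnosis, then suggest the question that best narrows it — can be caricatured in a few lines. The condition-to-finding table below is invented purely for illustration and bears no relation to Curai’s actual models:

```python
# Toy sketch of a differential-diagnosis assistant: given reported
# findings, rank candidate conditions and suggest the unasked finding
# that best separates the two leading candidates.
CONDITIONS = {
    "covid-19": {"fever", "cough", "loss of smell"},
    "common cold": {"cough", "runny nose", "sneezing"},
    "influenza": {"fever", "cough", "body aches"},
}

def rank(findings: set[str]) -> list[tuple[str, int]]:
    """Order conditions by how many reported findings they explain."""
    scores = {c: len(f & findings) for c, f in CONDITIONS.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

def next_question(findings: set[str]) -> str:
    """Suggest a finding that distinguishes the top two candidates."""
    (top, _), (second, _) = rank(findings)[:2]
    discriminating = CONDITIONS[top] ^ CONDITIONS[second]
    return sorted(discriminating - findings)[0]

reported = {"fever", "cough"}
print(rank(reported))            # covid-19 and influenza tie at 2
print(next_question(reported))   # a finding that separates the two
```

A real system would weight findings by likelihood and pick questions by expected information gain rather than a set difference, but the shape of the interaction — hypothesis, then a suggested next question — is the same.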

Q: How can you ensure that patients are able to highlight the extent of their needs through a computer?

There are a few things. One is that patients don’t speak medical language, so we need to build AI that understands both the doctor’s language and the patient’s language. A simple example is a medical concept like abdominal pain. It could be that the patient says, ‘I have a tummy ache,’ and ‘tummy ache’ needs to be understood by the AI, because a doctor might use ‘abdominal pain’ but should also understand that ‘tummy ache’ or ‘tummy hurts’ are ways to say abdominal pain.

It’s kind of interesting because we’ve done user research with some of our patients or even just users that we get through the AI interactions and one of the things that we get as feedback is, ‘Wow! This AI really cares about me and I felt like it was really understanding me,’ and that’s really cool because that’s what we want.

I think this combination of experts-in-the-loop plus an AI that is built to understand patients and doctors and cares about getting to the right decision making is super powerful because I think there are a lot of great experiments. I don’t know if you have heard about some experiments in which it is now known that AI can beat humans playing chess in most of the cases. However, it is also true that AI plus humans beats the AI. So the combination of AI plus humans is better than the AI itself, and if you extrapolate that to the world of medicine and healthcare, it’s even more obvious. If that’s the case in a closed world like chess, in an open world like medicine and patient care where there’s a lot of information that might be hidden or not there, it’s pretty clear that even if the AI alone could be better than a doctor, the combination of AI plus doctor is always going to be better. The sum is always going to be better than the AI on its own.
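At its simplest, the “tummy ache” example above is a concept-normalization problem. Here is a minimal sketch assuming a hand-written synonym map; a production system would use learned models or a medical vocabulary such as UMLS, not a hard-coded dictionary:

```python
# Minimal sketch of lay-language-to-medical-concept normalization.
# The synonym map below is illustrative, not a real clinical resource.
LAY_TO_CONCEPT = {
    "tummy ache": "abdominal pain",
    "tummy hurts": "abdominal pain",
    "belly pain": "abdominal pain",
    "throwing up": "vomiting",
}

def normalize(utterance: str) -> list[str]:
    """Return the medical concepts mentioned in a patient utterance."""
    text = utterance.lower()
    return [concept for phrase, concept in LAY_TO_CONCEPT.items()
            if phrase in text]

print(normalize("I have a tummy ache and I keep throwing up"))
# -> ['abdominal pain', 'vomiting']
```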

Privacy, Regulation, and Federated Machine Learning

Another obstacle holding AI back from revolutionizing healthcare is the lack of access to medical records, which are personal data protected by safety and privacy regulations.

Take Michael Jordan’s article as an example. Jordan told the story of deciding whether to pursue amniocentesis, a risky procedure, to determine whether his unborn child had Down syndrome. The uncertainty that surrounds medical procedures, many of which are invasive, is the reason behind the regulation. While collecting, integrating, and analyzing humongous amounts of personal health data may help us accelerate medical breakthroughs, what are the implications for privacy?

“I think there’s always an interesting tension between privacy and the efficiency of the personalized solution that can be given to every individual, and that’s where it gets a little bit tricky.”

I asked Dr. Amatriain about the possibility of developing a mechanism that protects privacy while still allowing us to learn from the collective patterns in the data. Amatriain said Curai uses de-identified data in its machine learning models. De-identification is the process of removing all personal identifiers from patient data. Beyond de-identification, Amatriain noted that trust is what eases the tension between privacy protection and data crunching. The foundation of a good patient-provider relationship is the trust maintained between the two parties. As Amatriain puts it, “it all boils down to ensuring privacy and security and building trust with the patient that the data is only going to be used in their benefit and for them, not against them.”
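As a rough illustration of what de-identification means in code, here is a minimal sketch. The field names are hypothetical, and real de-identification (for example, under HIPAA’s Safe Harbor rule) covers many more identifier types than this:

```python
# Minimal sketch of record de-identification: strip direct identifiers
# before a record is used for aggregation or model training.
DIRECT_IDENTIFIERS = {"name", "phone", "email", "address", "ssn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "phone": "555-0100",
          "age": 34, "symptoms": ["abdominal pain"]}
print(deidentify(record))
# -> {'age': 34, 'symptoms': ['abdominal pain']}
```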

Q: I believe by collecting, integrating, and analyzing a humongous amount of personal health data, we may accelerate medical breakthroughs and benefit humanity. But what about privacy? What does it take to create a mechanism that protects privacy but at the same time also allows us to learn from the collective patterns in the data?

A lot of people are putting a lot of thought into this question because privacy in general is a concern, let alone in the world of healthcare. A lot of the work we’re doing uses only what’s called de-identified data. We make sure we remove any notion of identity from the data that we aggregate, and then we build our machine learning models. But there are other approaches, like federated learning, which allows disjoint data sets to learn from each other in a way that privacy — for example, differential privacy — is protected.

I think there’s always an interesting tension between privacy and the efficiency of the personalized solution that can be given to every individual, and that’s where it gets a little bit tricky. Particularly in the context of medical care, you need to be able to identify the person as an individual in order to give them advice or even help them, and I think that’s an interesting tension that needs to be solved through a concept that is very related to privacy but slightly different, which is trust. We need to build trust between the patient and whoever is providing that service and is gathering that data so that they trust that that data is never going to be, for example, in our case, we never sell or even share or give that data away. However, we need to know your phone number, and that’s an interesting one because I wasn’t even sure why we need to know people’s phone numbers. It turns out that we’ve already saved some people’s lives because we have their phone numbers. For example, we’ve had some patients that connected with us that had some suicidal attempts and the only way we had of sending somebody to them was calling the police and saying “hey, this is the phone number of this person” and they were able to reach them. So being able to identify and connect with the person, in the case of healthcare and this situation, becomes important. Now we need to make you trust that the phone number that you’re giving us is only going to be used for life-and-death or extreme situations. And that is something that needs to be built into every system, and that’s a combination of very strict privacy policies and approaches to dealing with the data plus security.

I think another thing that people are worried about is like, ‘what if this data is leaked?’, ‘what if a hacker gets into your system and gets all my data?’. We need to make sure that we have high security approaches and strict privacy norms that give the patients trust and assure them that we’re doing the right thing with the data. There are a lot more things here that I could talk about and there are even laws and regulations around the protection of data that are really good and really interesting, but I think it all boils down to ensuring privacy and security and building trust with the patient that the data is only going to be used in their benefit and for them, not against them.

Dr. Amatriain and I also talked about federated learning (FL). Federated learning is a machine learning approach that protects personal information by keeping user-provided data on local devices and aggregating only model updates. While Amatriain did not say Curai employs FL, this article on FL in digital health highlights the potential of FL in the clinical setting for bringing precision medicine closer to patients.
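As a toy illustration of the idea, here is a sketch of federated averaging (FedAvg) for a one-parameter model. The two “hospital” datasets are invented, and a real deployment would involve secure aggregation and far richer models — but note that only the parameter `w`, never a raw record, leaves each site:

```python
# Toy sketch of federated averaging (FedAvg): each site trains on its
# own data locally; the server averages the resulting parameters.
def local_update(weights, data, lr=0.1):
    """One pass of gradient descent on a 1-parameter least-squares model y = w*x."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_round(global_w, site_datasets):
    """Each site updates locally; only parameters are sent back and averaged."""
    local_ws = [local_update(global_w, d) for d in site_datasets]
    return sum(local_ws) / len(local_ws)

# Two hospitals with disjoint data drawn from the true model y = 3x.
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(1.5, 4.5), (0.5, 1.5)]
w = 0.0
for _ in range(50):
    w = federated_round(w, [site_a, site_b])
print(round(w, 2))  # -> 3.0, recovered without pooling the raw data
```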

(This is Part II of a three-part series. See Part I and Part III.)
