AI for Healthcare: The Promise and Challenges (Part III)

A Conversation with Dr. Xavier Amatriain, Co-Founder and CTO of Curai

Clarice Wang
12 min read · Jul 27, 2022

Dr. Amatriain and I discussed COVID-19 and, more broadly, the promise and the challenges of AI for healthcare. We talked about wearable devices, digital twins, privacy, and government regulation, and ranged into Artificial General Intelligence, machine learning with experts in the loop, federated learning, out-of-band predictions, and many other topics. Dr. Amatriain also offered advice for the younger generation interested in entering the field of AI.

This is Part III of a three-part series. See Part I and Part II.

AI and COVID

I wanted to hear from Dr. Amatriain what obstacles AI faces in solving real-world healthcare problems and how AI can become more efficient and effective in dealing with outbreaks such as COVID. Amatriain believed that we should demystify AI and just think of it as “augmented intelligence” instead, as if it is a superpower for humans.

Q: I read recently that a lot of work has been done and hundreds of papers have been published about using AI to combat COVID, but none of them produced tools that would be good enough to use in a clinical setting. Is it because AI is still not ready for real-world healthcare problems, or is it because processes such as FDA approval take a long time? What will the future look like? How can AI become more efficient and effective at dealing with outbreaks such as COVID?

I think in situations like COVID we need to react really fast, and in many senses we have. The fact that we have vaccines in such a short time means we've moved extremely fast: approval of very experimental techniques, like mRNA vaccines, happened swiftly, so I think that's really, really good.

However, in the case of AI and machine learning, we still don't know exactly how to react and how to build things that respond quickly to this kind of situation. I think it's all about demystifying AI and algorithms and saying, 'hey, this is not magic, it's just mathematics,' and also making sure that we have the right guardrails for when things go wrong, because of course things can go wrong. We need to be very careful if things go wrong in the context of medicine, and systems need to be tested and implemented in a way that, if they do go wrong, there's somebody picking up from there.

I think that's why we have this notion of AI as an augmentation tool, and some people, including myself, sometimes use AI as an acronym not for artificial intelligence but for augmented intelligence. If we start calling AI augmented intelligence, it's just a tool that humans, in this case doctors, are going to use to be more intelligent. It's not fully replacing the doctors or fully replacing the humans. We are still going to need humans at some point; they're just going to be given superpowers, just like we get superpowers through our cell phones right now. We don't think our cell phone is magic; it's just a tool that we're using to do a lot more things than we were able to do a few years ago. I think AI needs to be seen through this lens, and the different tools that AI produces need to be regulated, need to be clear, and need to be designed in a way that they are safe and build trust, not only in the patient but in society as a whole.

By the way, the FDA does have an AI framework and approval process. In fact, I wrote a blog post about the FDA's AI efforts, which I think is pretty interesting because it talks about a lot of the really important things: how to build AI models and AI tools in healthcare that can bring a lot of improvements to healthcare in general, but that also need to be built with a strict sense of safety and quality to make sure that we're doing the right thing. It's tricky. I'll leave you with another analogy: if you think about, for example, self-driving cars, people will tell you, 'self-driving cars will save millions of lives because they're going to be safer than the average driver.' However, if a self-driving car kills someone, that's going to be really, really bad and it's going to be on the front page of the New York Times, and it's already happened. Because of that, and particularly in healthcare and medicine, we need to have those guardrails and safety in mind. But I'm extremely bullish and sure that we're going to be saving millions of lives by using AI in healthcare and medicine in general.

My conversation with Amatriain turned to what Curai had been working on to combat COVID. Amatriain told me Curai improved its machine learning models in order to diagnose COVID, adapting them without losing accuracy in its diagnosis of previously known diseases. In addition, in response to widespread misinformation and people's need for accurate answers, Curai created a COVID FAQ library. With the help of language models and transfer learning, the FAQ library helps people who do not know how to accurately formulate their questions get the right answer without needing to browse through countless FAQ pages. While NLP and healthcare may seem a strange combination on the surface, Amatriain highlighted the importance of language and communication in medicine: “in healthcare decisions, the medicine gets executed through language.”

Q: You also talked about the importance of trust between a consumer and healthcare services, which is a great transition to our next question: What is Curai working on currently to combat COVID-19?

Interestingly, when COVID hit, we were in the middle of building our system and our service and our machine learning models. This was a completely new situation so we had to basically improvise and say, ‘okay, well this is a good test for everything we’re doing and for our approaches’.

The interesting thing about COVID, or any such situation, is that the data does not exist because everything is new, and the models you had from before are not valid anymore; you need to relearn them somehow. For example, the symptoms you had in the past for the flu are very similar to the ones you have for COVID. How do you tell them apart? How do you even build that into your decision making? Even that is hard for doctors, because they're in the same situation: 'okay, nobody has published anything about COVID, so how do we tell if this person has the flu or has COVID?'

Early on there was not even very good testing to tell the two apart, so the way we approached this was to take an incremental approach to improving our models. We use an approach that combines data that is generated and noisy, but keeps improving over time, with what we call expert knowledge. So we can use the expertise of hundreds of years of medical research to generate some of our models, and then we combine it with the latest data available. In this particular case, COVID data was becoming available day by day, week by week, as we were improving and generating more of it. That was combined with the expert knowledge we had from years of research, and we trained our models to become more and more effective over time. Interestingly, we could do that, and we published some papers around it: we adapted our models without losing accuracy in our diagnosis of previously known diseases. That was an interesting finding, one we obviously designed our approach to achieve, but we never thought a situation would arise so quickly where we would need to adapt and be flexible. We learned, and we train our models this way.
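The blend of expert priors and incoming noisy data that Amatriain describes can be illustrated with a toy Beta-Binomial-style update: state the expert's estimate as a probability plus an equivalent sample size, then average it with new observations as they arrive. This is only a sketch of the general idea, with invented names and numbers; it does not reflect Curai's actual models.

```python
def updated_estimate(prior_prob, prior_weight, positives, total):
    """Blend an expert prior with newly observed, possibly noisy data.

    prior_prob   -- expert's estimate (e.g. from medical literature)
    prior_weight -- how many observations the prior is "worth"
    positives    -- positive cases seen in the new data
    total        -- total cases seen in the new data
    """
    return (prior_prob * prior_weight + positives) / (prior_weight + total)

# Start from an expert prior of 0.30 worth 10 observations; after seeing
# 7 positives in 10 new cases, the estimate moves toward the data:
print(updated_estimate(0.30, 10, 7, 10))  # → 0.5
```

As more data accumulates, `total` dominates `prior_weight` and the estimate is driven by the evidence rather than the prior, which matches the "keeps improving over time" behavior described above.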

During the early days of COVID, there was a lot of misinformation and a lot of need for people to access questions and answers around COVID. One of the interesting things was that people didn't even know how to formulate the questions they had, and there were hundreds of places around the internet with questions, answers, and FAQs about COVID. We integrated all of those and built an integrated natural language approach to answering those questions. We used a number of recent advances in natural language processing, like language models and transfer learning, to learn how users were asking those questions and to direct them to the best answer in a way that let them formulate their question naturally, not by having to browse through hundreds of FAQs.
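The FAQ-matching idea can be sketched as a retrieval problem: embed the user's free-form question and every canonical FAQ question, then return the answer whose question is closest. The bag-of-words "embedding" below is a stand-in for the neural language models Curai actually used; the function names and FAQ entries are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector. A production system
    # would use a pretrained neural language model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def best_answer(question, faq):
    # faq: list of (canonical_question, answer) pairs.
    q = embed(question)
    match = max(faq, key=lambda pair: cosine(q, embed(pair[0])))
    return match[1]

faq = [
    ("what are the symptoms of covid",
     "Common symptoms include fever, cough, and fatigue."),
    ("how does covid spread",
     "The virus spreads mainly through respiratory droplets."),
]
print(best_answer("how do people catch covid", faq))
```

Even with this crude similarity measure, a question phrased differently from any canonical FAQ entry ("how do people catch covid") lands on the closest one; swapping in learned sentence embeddings is what lets a real system handle genuinely paraphrased questions.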

So that's another example of Curai working at the intersection of medical diagnosis and treatment modeling on one side, and language understanding and language processing for healthcare on the other. Interestingly, I'm not a doctor, but I work with a lot of doctors and physicians now, and what you learn is that in a lot of healthcare decisions, the medicine gets executed through language. There's a question from the patient to the doctor, the patient telling you where it hurts and why it happens, and there's a lot of conversational data around that. So very early on, we jumped on the language aspect of medicine as well as the diagnosis and treatment.

Q: You mentioned earlier that your machine learning model on COVID diagnoses was still accurate after introducing the COVID data. What do you think are the risks to adding new data such as COVID data to an already active machine learning model?

As I mentioned, there are a lot of approaches that don't deal well with that, and it is something that needs to be built into your original model from the start. There is this notion of out-of-band prediction in machine learning: what happens when you show your model something different, a class the model has not been trained on? How does it react? Does it completely go off the rails and say something that doesn't make any sense? Or is it able to say, 'hey, I don't know, I've never seen this class, I don't know what you're talking about,' and at least tell you there's a risk? I think that's really important in the case of healthcare.

Machine learning models need to be able to incorporate new data and identify new trends and new classes, but they also need to be able to react to things they haven't seen and say, 'hey, sorry, I could not make a decision here because this seems very different from what I've seen before.' So it's a combination of both things. In the case of COVID, it was tricky because it was a new thing the models had not seen, but it was very similar to other things that existed, particularly, as I said before, the flu and other viruses. So we had to be careful about how to introduce these new data points with as much clarity as possible, to draw the distinction between both classes.
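The abstention behavior Amatriain describes can be sketched as a thin wrapper around a classifier's output probabilities: if no class is predicted confidently enough, the model declines and defers to a human. The threshold, labels, and function name are illustrative assumptions, not Curai's implementation.

```python
def predict_with_abstention(probs, threshold=0.7, labels=("flu", "covid")):
    # probs: model-estimated class probabilities for one patient encounter.
    # If the top class is below the confidence threshold, abstain and
    # hand the case to a clinician instead of guessing.
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "abstain: defer to a doctor"
    return labels[best]

print(predict_with_abstention([0.55, 0.45]))  # → abstain: defer to a doctor
print(predict_with_abstention([0.1, 0.9]))    # → covid
```

A flu-like COVID case that the model cannot separate cleanly falls into the abstention branch, which is exactly the "I could not make a decision here" response described above; more sophisticated systems use calibrated or out-of-distribution scores rather than raw probabilities, but the deferral pattern is the same.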

One of the important things, which maybe we'll get into in the next question, is the fact that in our case machine learning and AI is a component of our system, but we always have doctors in the loop. That enables the model to say, 'hey, I don't know enough,' or 'this is what I think, but doctor, you have to make the final call; as the AI, I'm giving you this information, now you take it from here.' I think that's very key and very important, because we're not at the point yet where we can just leave the AI on its own and say, 'okay, you're going to make a good decision and you're going to take it to the finish line.' It can reason and build a lot of good decision making; in fact, in our data and our experiments the diagnostic accuracy is better than the average physician's. However, even with that, we would not leave the AI on its own to do its thing. We want it to be coupled with humans, with experts that are pairing with the AI and making sure that it's doing the right thing.

Advice for the Younger Generation

I ended my interview asking Dr. Amatriain what advice he would give to the younger generation who are considering working on AI.

Q: Is this a good time for our generation to enter the field of machine learning? Or is the field already too crowded because of the AI hype? Everything seems to become computational, and I feel I might end up working on machine learning even if I major in psychology or social science. Is this a good thing?

Yes, I think it is a good thing because at the end of the day, AI and machine learning are tools that are going to be bringing, as I said, superpowers to … people not only in the medical field but in other fields like you mentioned, maybe psychology or maybe social sciences, and all of the other fields you can think about.

A few years ago, a lot of people thought that the only people who would be coding would be engineers, and nowadays people who do all kinds of things code… So I think computing software has revolutionized a lot of different domains and fields, and machine learning and AI are just the next step, one that is going to be really important for all fields. So I would recommend that anyone in your generation be aware of it and also think about how it's going to transform whatever they choose to do, whether that's being a doctor or being a writer.

I don’t know if you all have seen some of the latest language models like GPT-3 that are actually pretty decent at just writing fiction, and you’re like, ‘how is this going to change the job of a fiction writer?’ Well, I’m not sure it’s gonna change next year, but I am sure it’s going to change in the next 10 years. So if you’re going to be a writer, you at least should be aware of how AI could impact what you’re doing in the next 10 years.

It's great advice. Everything is computational: our ideas, our consciousness, our lives. At some point in the future, machines will have enough capacity to simulate human brains. We must be aware of it and prepare for it.

I was glad that Amatriain mentioned GPT-3, and what AI might mean to a writer one day. I was intrigued, and just for fun, I fed the question I gave Dr. Amatriain to GPT-3.

Here is GPT-3’s answer to the question above.

GPT-3: I think machine learning is a very exciting field, and I think there are a lot of opportunities there. I think the key thing is to make sure you have a really good grounding in the fundamentals. I think that’s the most important thing. So, I think it’s a great time to be entering the field. I think it’s also a great time to be entering the field of psychology, because I think there’s a lot of interesting and important questions in psychology that are just beginning to be addressed by data science and machine learning. So, I think it’s a great time to be entering either field. I think it’s a great time to be entering any field, in fact.

That was surely a lot of babbling from a soulless algorithm! I tried a few more times, and another “good” response I got was:

GPT-3: I think it’s a great thing! We need more people working on big problems. I think the hype in AI is just that: hype. Many people think that AI is going to be the next big thing. I’m actually less optimistic about AI. I think that AI is going to be one of the many tools that we use to solve big problems. For example, let’s say I am trying to understand the brain. I might use AI to build a model of the brain, but I would also use MRI, genetic data, and many other tools. AI is not going to be the one magical tool that solves all of our problems.

All right, I guess AGI is not coming so fast after all. But I believe at some point, GPT-3 will read this interview, pick up Dr. Amatriain’s answer, and become just a little smarter next time someone asks the same question.

Conclusion

It was a great pleasure talking with Dr. Amatriain. I strongly recommend the full interview on Spotify or YouTube. I would also like to thank Kevin Wang and Ashley Hong from the Harker Public Health Club for making this possible.

I lead a non-profit organization called Youth AI Lab. Besides all the wonderful talks we hosted, we are also organizing a workshop summit in fall 2021. Please visit our website and join our members’ list if you are interested in attending.

(This is Part III of a three-part series. See Part I and Part II.)
