Talk Gen AI: Advancing Healthcare and Life Sciences with Generative AI

Arte Merritt
Published in TalkGenAI
10 min read · Jun 24, 2024

Generative AI is truly a disruptive technology that is having a broad impact on our daily lives.

With Generative AI comes the promise to save lives and improve quality of life by advancing Healthcare and Life Sciences (HCLS) initiatives.

At Talk Gen AI, I had the opportunity to discuss how Generative AI is being leveraged in Healthcare and Life Sciences with a panel of industry experts, including:

Dr. Chieh-Ju Chao, MD, of the Mayo Clinic
Shahram Seyedin-Noor of Civilization Ventures
Daniel Young, PhD, of AstraZeneca

Check out the recap below and watch the video to learn how Gen AI is impacting HCLS.

Generative AI use cases in HCLS

Generative AI is having a broad and significant impact across the Healthcare and Life Sciences industry. Healthcare professionals, enterprises, and educators are leveraging Gen AI to improve patient outcomes, drug discovery, diagnostics, therapeutics, clinical trials, professional education, and even patient communication.

It is truly game changing. As Shahram Seyedin-Noor of Civilization Ventures states, Gen AI has the potential to transform society and commerce in a way more analogous to the industrial revolution; it is a step function. Therapeutics companies are leveraging Gen AI to design drugs, cell therapies, and novel therapies for disordered proteins that, until now, were not possible through human design.

Improving patient outcomes

A key area of focus in leveraging Gen AI is on improving patient outcomes.

As Daniel Young, PhD, of AstraZeneca explains, AstraZeneca is using Gen AI to identify healthcare disparities, pinpointing which kinds of patients are not receiving optimal care in order to eventually improve that care. The team integrates real-world evidence data with social determinants of health to gain better insights and see where disparities may exist.

Where Generative AI is really helpful is in trying to answer the “why” question. They first identify what is happening, and then build hypotheses on why, using Gen AI, and vet those with experts in the field — for instance through conversations directly with providers. They also work closely with regulators and stakeholders to improve education and mitigate disparities.
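
As a rough illustration of this "what, then why" workflow, the sketch below hands a structured disparity finding to a language model and asks for testable hypotheses that experts could then vet with providers. The call_llm helper, the finding fields, and the prompt wording are all assumptions made for illustration; this is not AstraZeneca's actual pipeline.

```python
# Illustrative sketch only; not AstraZeneca's actual pipeline.
# call_llm is a hypothetical hook for whichever LLM service is in use.

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its text response."""
    raise NotImplementedError("Wire this to your LLM provider of choice")


def hypothesize_disparity_causes(finding: dict) -> str:
    """Turn a descriptive finding (the 'what') into candidate explanations
    (the 'why') that subject-matter experts can vet with providers."""
    prompt = (
        "You are assisting a health-outcomes research team.\n"
        f"Observed disparity: {finding['description']}\n"
        f"Population segments compared: {finding['segments']}\n"
        f"Relevant social determinants of health: {finding['sdoh_factors']}\n"
        "List 3-5 plausible, testable hypotheses for this disparity, each with "
        "the data that would be needed to confirm or rule it out."
    )
    return call_llm(prompt)


# Hypothetical finding derived from de-identified real-world evidence:
finding = {
    "description": "Lower rates of guideline-recommended therapy among rural patients",
    "segments": "rural vs. urban patients with the same diagnosis and stage",
    "sdoh_factors": "travel distance to care, insurance type, specialist density",
}
# hypotheses = hypothesize_disparity_causes(finding)  # vetted by experts afterwards
```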

Dr. Chieh-Ju Chao, MD, of the Mayo Clinic also sees many of the clinical applications of Generative AI focusing on quality improvement. Saving physicians and nurses time in documenting or searching for data embedded in thousands of records is quite beneficial. The time savings lead to improved quality of care, which in turn leads to improved patient outcomes.

Diagnostics

Generative AI is being embraced successfully in diagnostic strategies, in areas like radiology and pathology. These specialties involve less direct patient interaction and more examination and analysis of images, or of pathology and biopsy slides, as Dr. Chao explains.

In more patient-facing specialties, such as cardiovascular medicine, there are more challenges to implementing AI, given safety concerns and conservative approaches to patient care.

Dr. Chao’s team is exploring how to build a new generation of echocardiography interpretation systems, using Gen AI as a copilot to interpret data and generate reports. Physicians will be able to review a report and decide whether to approve or modify it. The system can then take the data, along with the patient’s medical history, and suggest a therapy plan, or perhaps even refer the patient to a specialist for further review.
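
A minimal sketch of that copilot pattern, under assumptions: the model drafts an echo report from measurements and history, and nothing is released until a physician approves or edits the draft. The data fields, prompt, and call_llm stub below are invented for illustration and are not the Mayo Clinic’s system.

```python
# Illustrative human-in-the-loop sketch; not the Mayo Clinic's actual system.
from dataclasses import dataclass
from typing import Optional


def call_llm(prompt: str) -> str:
    """Placeholder for the report-drafting model."""
    raise NotImplementedError


@dataclass
class EchoStudy:
    measurements: dict  # e.g. {"LVEF": "55%", "LV mass index": "102 g/m2"}
    history: str        # relevant prior findings and medical history


def draft_echo_report(study: EchoStudy) -> str:
    prompt = (
        "Draft a structured echocardiography report.\n"
        f"Measurements: {study.measurements}\n"
        f"History: {study.history}\n"
        "Flag values outside normal ranges and state uncertainty explicitly."
    )
    return call_llm(prompt)


def finalize_report(draft: str, decision: str, edits: Optional[str] = None) -> str:
    """The AI draft is only released after explicit physician review."""
    if decision == "approve":
        return draft
    if decision == "modify" and edits is not None:
        return edits
    raise ValueError("Report rejected; route the study back for manual interpretation")
```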

Drug discovery and clinical trials

The impact of Generative AI on drug discovery is enormous. Startups like AlphaFold, a portfolio company of Seyedin-Noor’s Civilization Ventures, can predict almost any protein’s folded structure with AI, something that previously appeared unachievable or would have taken thousands of years. Now, with AI, nearly any molecular interaction can be predicted with reasonable accuracy. This has profound implications for how drugs are designed. As Seyedin-Noor predicts, what medicinal chemists do today, “mixing potions” like an alchemist, will be seen as primitive 20 years from now.

Generative AI is changing the approach to clinical trials as well.

Today, filing for a small molecule to go into human patients for a phase-one safety trial is a lot of work. As Seyedin-Noor explains, you have to run a 28-day toxicology study in two species to show that the molecule is safe in mice or monkeys and how that translates to the body weight of humans.

Seyedin-Noor predicts this will change in 20 years. We will no longer be doing animal studies for an investigational new drug (IND) application; that data will all be generated. This means designing drugs with foresight into what the toxicity, pharmacodynamics, or pharmacokinetics of a drug might look like. All of it will be encompassed in AI in a way that fundamentally transforms the industry.

When it comes to clinical trials, Generative AI can significantly improve the process of searching for, or identifying, optimal candidates. Today, without Gen AI, it is quite a slow, inefficient process. For example, as Dr. Chao explains, if a patient is identified as a potential candidate, there are numerous emails and communications with the principal investigator (PI) that take a lot of time. With Gen AI, he has seen studies in which the team reached its target patient cohort in half the time of a conventional trial.
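
As a rough sketch of that kind of pre-screening, the snippet below asks a model to compare a de-identified patient summary against a trial’s inclusion and exclusion criteria and return a suggestion for the PI to confirm. The criteria, the JSON format, and the call_llm stub are assumptions for illustration, not the workflow from the studies Dr. Chao references.

```python
# Illustrative pre-screening sketch; final eligibility decisions stay with the PI.
import json


def call_llm(prompt: str) -> str:
    """Placeholder: returns the model's JSON-formatted screening suggestion."""
    raise NotImplementedError


def prescreen(patient_summary: str, inclusion: list, exclusion: list) -> dict:
    prompt = (
        "You pre-screen candidates for a clinical trial. Respond in JSON with the "
        'keys "likely_eligible" (true/false) and "rationale".\n'
        f"Inclusion criteria: {inclusion}\n"
        f"Exclusion criteria: {exclusion}\n"
        f"De-identified patient summary: {patient_summary}"
    )
    return json.loads(call_llm(prompt))


# Hypothetical criteria for illustration only
inclusion = ["Age 40-75", "LVEF below 40%", "NYHA class II-III"]
exclusion = ["eGFR below 30", "Pregnancy"]
# suggestion = prescreen(summary_text, inclusion, exclusion)  # PI confirms or rejects
```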

The impact on education

Gen AI is helping improve education and access to knowledge in the Healthcare and Life Sciences industry.

As Young explains, one of the challenges in improving patient outcomes is keeping physicians abreast of the latest technologies and guidelines. The treatment landscape is complex and continually evolving. Physicians do not have time to attend every conference or read every new publication. This can be an impediment to optimizing patient outcomes.

At AstraZeneca, the team is using Generative AI to power personalized educational content, through chatbot interfaces and other channels, to help keep healthcare professionals informed.

Young sees the best educational results in targeted therapeutic domains rather than general-purpose solutions. For example, if one is focused on lung cancer, a solution can be tailored around it, including all the latest guidance, documentation, therapies, and diagnostic approaches, along with references and links to the underlying data to build confidence and trust in the educational materials.
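
That targeted, reference-backed setup maps naturally onto retrieval-augmented generation: answers are grounded in a curated corpus for one therapeutic area, and each answer cites its source documents. The outline below is a generic sketch of that pattern with placeholder retrieval and call_llm helpers; it is not AstraZeneca’s implementation.

```python
# Generic retrieval-augmented generation (RAG) outline; illustrative only.

def call_llm(prompt: str) -> str:
    raise NotImplementedError


def retrieve(query: str, corpus: list, k: int = 3) -> list:
    """Placeholder retrieval. In practice this would be a vector or keyword search
    over a curated corpus (guidelines, labels, publications) for one therapeutic area."""
    return corpus[:k]


def answer_with_citations(question: str, corpus: list) -> str:
    passages = retrieve(question, corpus)
    context = "\n".join(f"[{p['id']}] {p['text']} (source: {p['url']})" for p in passages)
    prompt = (
        "Answer the clinician's question using ONLY the passages below, and cite "
        "passage ids inline so the reader can check the underlying sources.\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)


# Hypothetical corpus entry for a lung-cancer-focused assistant
corpus = [{"id": "g1", "text": "Latest staging guidance ...", "url": "https://example.org/guideline"}]
```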

Generative AI solutions can significantly augment and enhance one’s knowledge and experience. As Dr. Chao explains, when a clinician considers a particular disease or case today, the assessment rests on their own experience and whatever they can find in their educational materials or knowledge base. A Gen AI system, however, could incorporate all the medical knowledge from the literature and a hospital’s patient distributions, and be easily queried for the relevant information.

Gen AI solutions can also be used to help train and educate the next generation of healthcare professionals. One example is simulating patients for educational purposes. Today, clinical skills assessments use simulated patient scenarios to train medical students. With Gen AI, as Dr. Chao explains, a chatbot could be used to simulate the scenario and let students interact with the “patient.” The interaction can also be monitored more easily, which helps in training and in improving student performance.
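
A simulated-patient chatbot of this kind can be as simple as a system prompt defining the case plus a transcript that instructors can review afterwards. The case details and call_llm stub below are invented for illustration.

```python
# Illustrative simulated-patient chatbot for training; the case details are invented.

def call_llm(system_prompt: str, transcript: list) -> str:
    """Placeholder: send the case definition and conversation so far to an LLM."""
    raise NotImplementedError


CASE = (
    "You are role-playing a 58-year-old patient with two days of chest tightness "
    "on exertion. Answer only what the student asks, do not volunteer the diagnosis, "
    "and stay in character."
)


def run_turn(transcript: list, student_message: str) -> list:
    """Append the student's question and the simulated patient's reply.
    The full transcript can be logged for instructor review and feedback."""
    transcript.append({"role": "student", "content": student_message})
    reply = call_llm(CASE, transcript)
    transcript.append({"role": "patient", "content": reply})
    return transcript
```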

Challenges with Gen AI in HCLS

Hallucinations

Hallucinations and inaccuracies with Generative AI are a big concern in the Healthcare and Life Sciences industry, where there is little tolerance for them. Companies in this space tend to be very conservative. Safety is the first priority overall.

At the Mayo Clinic, the team is fine-tuning large language models (LLMs) for radiology to reduce the rate of hallucinations. The question then becomes how to evaluate whether the AI-generated reports are valid from a clinical perspective.
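
One common way to frame that evaluation is claim-level consistency checking: split a generated report into individual claims and check each one against the source findings, with unsupported claims flagged for clinician review. The sketch below shows the idea with an assumed call_llm stub; it is not the Mayo Clinic’s evaluation protocol.

```python
# Illustrative claim-level consistency check; not the Mayo Clinic's protocol.

def call_llm(prompt: str) -> str:
    raise NotImplementedError


def extract_claims(report: str) -> list:
    """Ask a model to split the generated report into individual factual claims."""
    response = call_llm(f"List each factual claim in this report, one per line:\n{report}")
    return [line for line in response.splitlines() if line.strip()]


def is_supported(claim: str, source_findings: str) -> bool:
    """Ask a verifier model whether a claim is supported by the underlying findings."""
    verdict = call_llm(
        f"Findings: {source_findings}\nClaim: {claim}\n"
        "Answer strictly 'supported' or 'unsupported'."
    )
    return verdict.strip().lower() == "supported"


def hallucination_rate(report: str, source_findings: str) -> float:
    """Fraction of claims not supported by the findings; these go to clinician review."""
    claims = extract_claims(report)
    unsupported = sum(not is_supported(c, source_findings) for c in claims)
    return unsupported / max(len(claims), 1)
```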

Doctors are not perfect either, though, as Seyedin-Noor points out. They “hallucinate” as well. He relates it to Elon Musk’s sentiment about autonomous vehicles: sure, autonomous cars are going to make mistakes, but so do average drivers, and as long as the cars make fewer mistakes than the drivers, society is better off.

Building trust

Building trust with Generative AI in Healthcare and Life Sciences is important. The concept of “trust” and how it is defined needs to be put in perspective though.

It is important to consider how trust is built, or already exists, in Healthcare and Life Sciences today, as Dr. Chao indicates. For example, radiology and other clinical tests rely on MRIs and CT scans. Clinical doctors trust those images even though they probably do not know how the machines work or understand the mechanisms behind them.

Trust does not have to be built on a comprehensive understanding of how the mechanisms in the machines work. The same could apply to AI models. One could trust that the AI models work without necessarily knowing all the details and fine-tuning procedures.

Trust in medical devices is based on prior clinical studies and randomized trials. With Gen AI models, by contrast, people currently rely on human validation to see whether the models perform as well as humans, which is not the right approach.

Dr. Chao believes Gen AI models should go through a similar vetting process as medical devices to build trust and decide what to believe.

There is already a framework for judging how strong evidence is. At the lowest level are human expert opinions. Above those are clinical studies. Above those are randomized trials. And above those are summary articles and meta-analyses that synthesize the evidence from different randomized trials.

Gen AI models should be tested in clinical trials to determine if they are really good at helping patients or improving quality of reporting or patient care.

Regulation and regulatory frameworks are a solution to building trust in AI, just as they have been for everything else prior to AI, Seyedin-Noor adds. Regulation is needed to ensure AI solutions are not a case of bad data in, bad results out, but are rigorously trained and validated for their ultimate application.

It does not end at trials and regulation though. As Young points out, one has to track the actual performance. AstraZeneca uses a lot of real-world evidence data — highly curated electronic medical records (EMR) — and yet it is still a sliver of the patient population of the US. Clinical trials are a subset of the patient population. Once a drug hits the real world, the performance may not be the same. The same applies to Gen AI. Once the models are validated and put into production, the performance needs to be tracked.

Data compliance

Data compliance is not unique to Generative AI when it comes to Healthcare and Life Sciences. Any data science or machine learning work needs to go through compliance.

AstraZeneca, for instance, deals with a lot of real medical data and real-world evidence data, all of which goes through extensive compliance checks to make sure it is de-identified. The risk of patient re-identification when integrating multiple data sets also has to be kept in mind.

The difference with Generative AI, however, is that it is more of a black box. It requires more scrutiny to understand how certain insights are being derived, and how to validate and trust those insights.

Regulations

Regulations are far behind where the technology is, to the point of holding it back, as Seyedin-Noor states. From a regulatory perspective, Generative AI solutions cannot yet serve as the first-line diagnostic decision maker.

The good news is that much of the use of Gen AI is behind the scenes, in drug design and diagnostics. The product still goes through all the regulatory hurdles, as it should, but Gen AI has already played a significant role in shaping what the product looks like.

Given that regulation currently lags, physicians often take on the responsibility for using AI themselves, as Dr. Chao indicates. He would like to see the government, or other organizations, take more action to explore what regulations need to be put in place so that Generative AI technologies can be used to their full potential.

At AstraZeneca, regulations like the European Union’s (EU) AI Act are shaping the way the team thinks about use cases in terms of transparency and risk assessments.

The future of Gen AI in HCLS

While Generative AI is already having a significant impact on the Healthcare and Life Science industry, each panelist was asked what they predict, or would like to see, in the future.

As the accuracy of Gen AI models improves, Dr. Chao envisions a day when we can fully trust, and not be solely liable for, decisions supported by AI systems. Right now we are asking these systems to be perfect, yet we still take ultimate responsibility for any errors in decisions. If, in the future, an AI system can support a diagnosis and assume some of the liability, through regulation or other legal frameworks, that will make for a more equal and balanced interaction between humans and AI, he explains.

Dr. Chao also hopes to see more clinicians embracing Gen AI and trusting the solutions the same way they trust conventional medical devices. It may take another 5 or 10 years to have the next generation of medical students — the ones who are used to Gen AI in their lives — natively embrace all these new technologies.

Young sees a future with more personalized medicine. Every patient is unique and what is driving their disease can be very different. Each patient needs a tailored treatment plan.

Generative AI can help with this: helping a physician make the right choice, helping design clinical trials to validate these personalized approaches, and working with diagnostics companies to determine the right treatment plan for a given patient.

Young states it will take a concerted effort across all the different players in the healthcare space — including pharma, physicians, patients, and regulators.

Seyedin-Noor envisions even more profound changes: “species-level changes.” While Gen AI will fundamentally change diagnostics, therapeutics, and interactions with physicians over the next 15 years, it is the 15 years after that, 30 years out, when he predicts we will see the most dramatic change.

There are both promises and perils of AI. We currently have DNA synthesis and DNA sequencing. In the future, one could potentially create a pathogenic virus in a lab that does X, Y, and Z simply by entering the parameters. At the same time, today 1.5% of births are achieved through IVF. Seyedin-Noor predicts that it will be more like 20–30% in the future. Instead of taking the chance on the intelligence of a child, parents could design it.

It is truly exciting to see what will come from Generative AI in Healthcare and Life Sciences in the future!

Watch the video

Arte Merritt is the founder of Reconify, an analytics and optimization platform for Generative AI. Previously, he led the Global Conversational AI partner initiative at AWS. He was the founder and CEO of the leading analytics platform for Conversational AI, leading the company to 20,000 customers, 90B messages processed, and multiple acquisition offers. He is a frequent author and speaker on Generative AI and Conversational AI. Arte is an MIT alum.
