Part 3: Unlocking the Future by Addressing the Challenges

Generative AI in life sciences

Collin Burdick
Slalom Daily Dose
5 min read · Oct 12, 2023


Read part 1 and part 2 of this series.

As we delve into the integration of generative artificial intelligence (GenAI) and life sciences, we must address several complex challenges. These hurdles stretch across several areas, from a scarcity of cross-disciplinary professionals to alignment with regulatory standards such as those set forth by the Food and Drug Administration (FDA). To unlock the full potential of AI in life sciences, we must navigate these interconnected challenges. Let’s explore each of them and their prospective solutions.

Expertise scarcity

One of the foremost challenges is the shortage of professionals proficient in both artificial intelligence/machine learning (AI/ML) and the vast landscape of biological expertise. We need experts fluent in both languages to optimally integrate GenAI into life sciences research and applications. The importance of this skill set cannot be overstated: it underpins the creation of meaningful models and the generation of insightful data. Investing in interdisciplinary education is paramount, both for companies upskilling employees and for individuals working toward future careers in life sciences. Without meaningful expertise at scale, we will either fall short of these models’ potential to transform life sciences research or fall prey to the pitfalls below more often than patient safety and efficacy can tolerate.

Reproducibility

While GenAI can make convincing arguments based on specific data sets, the universality of those arguments is not guaranteed. Coupled with GenAI’s tendency to cite reproducible, non-reproducible, and outright hallucinated literature, this poses a substantial challenge to scientific reproducibility. Addressing this issue is crucial for maintaining scientific integrity. Future GenAI models may be designed with specific constraints and hallucination boundaries, but further research is required to create such constraints without losing the seeming magic of a large language model (LLM). Today, we can mitigate this challenge by using GenAI as a convenience tool and performing human validation afterward. Even then, GenAI’s persuasive power may convince even the most skeptical scientist.
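One practical shape for that human-validation step is to treat every model-cited reference as unverified until it is checked against a trusted source. The sketch below is a minimal illustration in Python: the local registry set is a hypothetical stand-in for a real bibliographic lookup (for example, against Crossref or PubMed), and the function names are our own, not from any specific tool.

```python
import re

# Hypothetical local registry of already-verified DOIs. In practice this
# would be a lookup against a bibliographic service such as Crossref or
# PubMed rather than a hard-coded set.
VERIFIED_DOIS = {
    "10.1038/s41586-021-03819-2",  # illustrative entry
}

# Loose structural check for a DOI: "10.", a registrant code, then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def screen_citation(doi: str) -> str:
    """Classify a model-cited DOI before it enters a manuscript.

    Returns 'malformed', 'unverified', or 'verified'. Anything that is
    not 'verified' still goes to a human reviewer.
    """
    if not DOI_PATTERN.match(doi):
        return "malformed"   # likely hallucinated or garbled
    if doi not in VERIFIED_DOIS:
        return "unverified"  # well-formed but not found in the registry
    return "verified"

print(screen_citation("10.1038/s41586-021-03819-2"))  # verified
print(screen_citation("not-a-doi"))                   # malformed
```

A gate like this does not replace human review; it only cheaply filters out the obviously fabricated references so reviewers can focus on the plausible ones.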

Proficiency in persuasion

GenAI’s advanced data interpretation capabilities can lead to compelling conclusions and predictions in research scenarios. However, without a robust validation process, such conclusions can be misleading. GenAI’s eloquent articulation can further amplify acceptance of these misleading results, leading to their misuse, particularly with patient-facing interfaces like marketing and sales. The co-creator and founder of a leading GenAI model commented:

“Just wait till you play with one of these models with its gloves off. The public models you use today are constrained to hallucinate less. When we’re training and testing internally, our team is often genuinely shocked and surprised by the outputs. As you might have seen, one tester was so shocked they broke contract to publicly profess they believed a model was actually alive.”

The potential for misconstrued scientific discoveries places a great responsibility on the research community to handle these systems with care. Additionally, human-in-the-loop should be ingrained in the validation process in the short term, with a long-term goal of employing more specific AI models to check today’s GenAI models.
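A minimal human-in-the-loop gate might look like the following sketch, in which every model output is queued for a named reviewer and confidence scores only set review priority, never bypass the human check. The class, fields, and scores are illustrative assumptions, not a production design.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Toy human-in-the-loop gate: nothing is released to a patient-facing
    channel until a named reviewer signs off."""
    pending: list = field(default_factory=list)
    released: list = field(default_factory=list)

    def submit(self, output: str, model_confidence: float) -> None:
        # Every output is queued; confidence only affects review order,
        # it never lets an output skip the human check.
        self.pending.append({"text": output, "confidence": model_confidence})
        self.pending.sort(key=lambda o: o["confidence"])  # shakiest first

    def approve(self, reviewer: str) -> dict:
        # Pop the lowest-confidence item, record who approved it.
        item = self.pending.pop(0)
        item["reviewer"] = reviewer
        self.released.append(item)
        return item

queue = ReviewQueue()
queue.submit("Compound X shows 40% improved binding affinity.", 0.62)
queue.submit("No interaction found between drugs A and B.", 0.91)
approved = queue.approve(reviewer="j.doe")
print(approved["confidence"])  # 0.62: the lowest-confidence claim is reviewed first
```

The design choice worth noting is that the queue has no auto-release path at all; the longer-term goal of checker models mentioned above would slot in as an additional reviewer, not as a replacement for the human one.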

GenAI’s interpretability and explainability

The inherent complexity of these models often impedes domain experts from fully trusting or understanding them. This becomes especially crucial when dealing with regulatory bodies, which require a clear explanation of how output recommendations are made or insights are generated to ensure patient safety. Given their sheer complexity and size, it’s unlikely we’ll ever fully comprehend and explain model outputs without hampering their effectiveness. Even without full explainability, model transparency and accuracy validation are still possible. Demonstrating transparency by communicating intended use, training data inputs, and output accuracy is essential for public trust and regulatory approval. As Brendan O’Leary, a former FDA regulator with a decade leading digital health and AI guidance, remarks:

“Just as regulators and other stakeholders began to get comfortable evaluating opaque machine learning technologies with deterministic output — where the same inputs result in the same outputs each time — GenAI came along and broke the evaluation paradigm again. Despite this and other challenges, GenAI, if implemented appropriately, has substantial potential to benefit patients. Our task now is to start defining ‘appropriate implementation’ so we can realize that potential.”

Public perception of AI’s use in pharmaceutical companies

Life science companies already grapple with public perception that is often negative. GenAI, and AI generally, has the potential to exacerbate this challenge, where even well-meaning AI applications may lead to catastrophic outcomes for the very patients they target. To counter this perception, we need to engage in proactive, open dialogue, ensure operational transparency, and include the public in the AI development process. This is especially significant when considering the positive potential of GenAI in rare or orphan disease communities, where open communication and community involvement are critical and already well established.

Ethical considerations

Ethical AI considerations are especially important to account for and regulate, first because of data privacy, security, ownership, and consent, but more critically because of data biases. AI models can exacerbate biases already present in training data. Much of our current training data was drawn from historically biased experiments and trials that either preyed on underprivileged individuals or captured data only from privileged classes. Without being fully aware of these biases and designing against them, through both model training and the capture of modern, representative data sets, we risk repeating our historical failings.
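Designing against these biases starts with measuring them. The sketch below is a crude representation audit that compares subgroup shares in a training cohort against reference population shares; all numbers and group names are invented for illustration, and a real fairness analysis would go much further than this single screen.

```python
# Illustrative reference population shares and training-cohort counts.
# Both are made-up numbers, chosen only to demonstrate the check.
reference = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
cohort    = {"group_a": 870,  "group_b": 100,  "group_c": 30}

total = sum(cohort.values())

def underrepresented(threshold: float = 0.5) -> list:
    """Flag groups whose cohort share falls below `threshold` times their
    reference share. A crude screen, not a full fairness analysis."""
    flags = []
    for group, ref_share in reference.items():
        share = cohort[group] / total
        if share < threshold * ref_share:
            flags.append(group)
    return flags

print(underrepresented())  # ['group_b', 'group_c']
```

Even a screen this simple makes the historical skew visible before training begins, which is the point: you cannot design against a bias you have not quantified.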

Defined and effective government regulation

It’s essential for government entities to establish regulation that not only ensures ethical and safe use but also fosters innovation. Such regulation can serve to instill public confidence in GenAI and underscore its potential to revolutionize life sciences. We expect to learn more and share summarized learnings from our upcoming executive roundtable: The Future Intersection of AI/GenAI and Regulatory in Life Sciences.

In part one of this series, we explored GenAI’s potential in drug discovery, personalized medicine, diagnostics, and more. Part two defined the role of data and GenAI model development. Now, taking these challenges into account, we can navigate them and make progress toward fully integrating GenAI with life sciences, opening doors to its full potential and ushering in a new era of healthcare and scientific discovery.

Stay tuned to learn more about how AI and GenAI are tangibly changing the future of scientific discovery and improving patient outcomes.

Slalom is a global consulting firm that helps people and organizations dream bigger, move faster, and build better tomorrows for all. Learn more and reach out today.

Interested in joining our next Bay Area industry roundtable? Find more information here.


Global Managing Director @ Slalom Leading Life Sciences and Go-to-Market