This week we welcome Dr. Robert Califf to our season finale of Trial by Data to discuss the Clinical Trials Transformation Initiative (CTTI), his role in Project Baseline, and his thoughts on Watson.
Dr. Califf is the Vice Chancellor for Clinical Research and Professor of Medicine at Duke University Medical Center. He also serves as an advisor to Verily Life Sciences, a subsidiary of Alphabet Inc., and is an adjunct professor of medicine at Stanford University.
Dr. Califf served as Commissioner of the United States Food and Drug Administration (FDA) from February 2016 to January 2017, and as Deputy Commissioner of the FDA’s Office of Medical Products and Tobacco from January 2015 until his appointment as FDA Commissioner. He also served on the IOM Committee that recommended Medicare coverage of clinical trials, which Congress approved. He was the founding director of the coordinating center for the Centers for Education & Research on Therapeutics™ (CERTs).
He is a member of the National Academy of Medicine (NAM) and Co-Founder of the Clinical Trials Transformation Initiative (CTTI).
Each episode, we pull out some of the key themes of the conversation for our listeners.
Here are the highlights from our conversation with Dr. Califf:
The role of the Clinical Trials Transformation Initiative (CTTI). CTTI was founded because people were frustrated by the mounting cost and complexity of clinical trials. After all, as cost increases, the number of clinical trials you can run goes down and fewer questions can be answered. CTTI sees the opportunity that the digital world presents to make data flow more easily and affordably through the clinical trial process. However, the digital world brings a new set of challenges, especially when it comes to working in a regulated environment. CTTI is finding a way to bring the culture of digital innovation into the clinical trial world, combining the creativity and rigor of each to create better outcomes for all.
What about Watson? Watson represents one of the most public faces of trying to apply algorithms and technology to decision making, and its application to healthcare has introduced incredible opportunity in addition to new technological challenges.
IBM’s efforts have shown that clinical data are much richer than people once thought — meaning more opportunity for insights — but that you still need people to map the data for the system to work properly.
Early on, people expected to apply deep learning algorithms to data that was still very unstructured and have it work from the outset. We quickly learned that this is not the case — you can’t do data analytics or prediction without first doing data profiling to ensure that the data is clean and in order. This is technology that can’t be rushed and that requires strong groundwork: if you don’t build the system on a structurally sound foundation, you can’t make up for it later.
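The "profile before you predict" point above can be made concrete with a minimal sketch: before any modeling, summarize missingness and type consistency for each field. The field names and records here are hypothetical illustrations, not anything from an actual trial dataset.

```python
# Minimal data-profiling sketch: count missing and non-numeric values
# per field before attempting any analytics. Purely illustrative schema.

def _is_number(value):
    try:
        float(value)
        return True
    except (TypeError, ValueError):
        return False

def profile(records, fields):
    """Return per-field counts of missing and non-numeric values."""
    report = {}
    for f in fields:
        missing = sum(1 for r in records if r.get(f) in (None, ""))
        non_numeric = sum(
            1 for r in records
            if r.get(f) not in (None, "") and not _is_number(r[f])
        )
        report[f] = {"missing": missing, "non_numeric": non_numeric}
    return report

records = [
    {"age": "54", "systolic_bp": "120"},
    {"age": "", "systolic_bp": "n/a"},   # dirty row: blank and non-numeric
    {"age": "61", "systolic_bp": "135"},
]
report = profile(records, ["age", "systolic_bp"])
```

A report like this flags which fields need cleaning or human mapping before any prediction is attempted — the groundwork the conversation describes.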
- IBM’s Watson Supercomputer Recommended ‘Unsafe and Incorrect’ Cancer Treatments, Internal Documents Show
The implications of Project Baseline. There was a time when, if you wanted to go somewhere, you would pull out your map. Now you talk to your car. That’s possible because every road has been mapped in digital form, and computing can handle all that information and use algorithms to give you decisions. While mapping the human condition is a little more complicated, we’re getting closer. By doing a study that begins with genes and works through the biological markers, we can build a platform that begins to understand health holistically. The idea is that it also includes data like EHR, claims data, and digital phenotyping to be truly comprehensive. Through this, we build a larger and larger set of people data, which helps researchers understand what normal conditions look like. Project Baseline is accomplishing this and already has 1,500 people enrolled.
The importance of data engineering. The data janitors are the unsung heroes here. It should be an industry priority to get more people trained in how to extract data, verify it, understand its error, and then create the databases. This is a lesson Litmus has learned as well. When we began, we thought we would be doing machine learning and predictive analytics on real-world data sets. While we still do that, 80% of our platform is data engineering that cleans up data and adds value along the way. By focusing heavily on the database and ETL side, we can avoid the unwieldy data lakes that are unfortunately common elsewhere.
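The "clean on the way in" approach can be sketched as a toy extract-transform-load step: each raw record is validated and normalized before it lands in a structured table, rather than dumping raw files into a lake. This is a generic illustration under assumed names; the schema and plausibility checks are hypothetical, not Litmus's actual pipeline.

```python
# Toy ETL step: validate and normalize records on the way into a
# structured store (an in-memory SQLite table here). Hypothetical schema.
import sqlite3

def transform(row):
    """Normalize one raw record; return None if it fails validation."""
    try:
        subject_id = row["subject_id"].strip().upper()
        weight_kg = float(row["weight"])
    except (KeyError, AttributeError, ValueError):
        return None
    if not (20.0 <= weight_kg <= 300.0):  # crude plausibility check
        return None
    return (subject_id, weight_kg)

raw = [
    {"subject_id": " s001 ", "weight": "72.5"},
    {"subject_id": "S002", "weight": "oops"},  # rejected: not numeric
    {"subject_id": "S003", "weight": "81"},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weights (subject_id TEXT, weight_kg REAL)")
clean = [t for t in (transform(r) for r in raw) if t is not None]
conn.executemany("INSERT INTO weights VALUES (?, ?)", clean)
count = conn.execute("SELECT COUNT(*) FROM weights").fetchone()[0]
```

Because invalid rows are rejected at load time, downstream analysis queries a table whose contents are already trustworthy — the opposite of a data lake where cleaning is deferred.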
We truly believe that quality data is going to win the day. If you demonstrate that you have high quality data, that’s enticing for groups — from Google to the NIH — that want to do research and learn from it.
Linking it all back to health. If you look at states like North Carolina or Illinois, you see that the population health numbers are moving in the wrong direction. People are not becoming healthier, which means we need to devise better ways for patients to interact with the healthcare system. This means abolishing the siloed model of the academic health center and instead learning more about how patients will maintain their health in their home setting. After all, health outcomes are not determined by what happens in the clinic and hospital — they’re determined by what happens at home and at work. Until recently, we had no way of measuring that, but with the advent of wearables and sensors, that’s changing.
Trial by Data, presented by Litmus Health, is a podcast exploring the data-driven technologies and strategies shaping the future of clinical trials. We cover the most pressing issues and questions facing researchers and clinicians today, in an ever-changing landscape. Listen in as we interview leaders and innovators in the field who are at the forefront of developing and using these data-driven approaches.