An AI-Driven Ride in Healthcare

Pillar VC · May 25, 2017

This piece is a guest post from Dan Housman, Director at Deloitte.

Yesterday, I attended Machine Learning in Healthcare: Industry Applications, hosted by Pillar in partnership with Merck at the company’s Boston Research Lab. Deloitte was a sponsor (which hopefully helped the meeting’s success :) The following are some highlights and some of the thoughts the group provoked for me:

First, this 250-person summit, privately organized without a major conference group, was oversubscribed and exceeded expectations (over 100 people never made it off the waitlist). It shows that the interest and community around AI in healthcare data is growing like kudzu. As a first meeting––and I have been to plenty of first meetings––this was a very clear signal that the time is now for applying new machine learning techniques to healthcare and pharma data.

Russ Wilcox, a Partner at Pillar and former CEO of E Ink, introduced the conference with an overview of the firm’s perspective on the impact of machine learning and AI on the consumer world. What stood out for me was a slide that showed the AI hierarchy of capabilities. The top two slots were reinforcement learning and generative adversarial networks. Seeing them on a standard slide shows that these capabilities have transitioned from being new things that nobody has heard of to soon-to-be staples in thinking about AI problems with data sets.

I didn’t see any examples of a GAN being used effectively at the conference, but it was hard not to let my mind wander to use cases like the huge benefit we could get from using a GAN to make synthetic data sets from real healthcare records. If such a GAN were done right, the encoded model for generating fake records would have to consume so much of the signal that it could predict almost any health event. I have no idea if such a project is feasible, but I would love to give it a try as a research project. After all, this quote from In the Plex about Google still rings in my head:

“The secret to compressing web pages into themes was prediction. If you can predict what will happen next you can compress the page. The better you get at predicting the page the better you understand it.”
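To make the synthetic-records idea concrete, here is a deliberately tiny sketch of the GAN training loop in plain numpy. The “record” is a single lab value, the generator learns only an offset, and the discriminator is a logistic model; every number and name here is invented for illustration, and a real health-records GAN would be vastly larger.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy "patient records": one lab value per record, drawn from N(5, 1).
def real_records(n):
    return rng.normal(5.0, 1.0, size=n)

b = 0.0          # generator: G(z) = z + b (learns only an offset)
w, c = 0.0, 0.0  # discriminator: D(x) = sigmoid(w * x + c)
lr_g, lr_d = 0.1, 0.05

for _ in range(3000):
    x_real = real_records(128)
    z = rng.normal(0.0, 1.0, size=128)
    x_fake = z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    w -= lr_d * (np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake))
    c -= lr_d * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    p_fake = sigmoid(w * (z + b) + c)
    b -= lr_g * np.mean((p_fake - 1) * w)

# Sample "synthetic records"; the learned offset drifts toward the real mean.
synthetic = rng.normal(0.0, 1.0, size=1000) + b
```

The point of the sketch is the interface, not the model: once trained, the generator is a sampler for fake records, and everything it can sample reflects signal it had to absorb from the real data.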

Russ also shared a picture featuring attractive areas of the stack for startups to address with machine learning, which highlighted the opportunity in healthcare for having domain understanding and market focus.

John Brownstein, Chief Innovation Officer at Boston Children’s Hospital, followed. I have seen much of John’s work in the past, but was struck by the reality of how they are driving innovation in patient engagement. With the work they have done with Amazon Alexa, they are able to use some of the engagement tools readily available through the platform. But they’re also starting to cross into what I often think of as modern holy-grail territory: taking input from patients to form models that can then connect into decision support. This can include patient recruitment, clinical trial recruitment, and patient education.

This last mile is a key area where work needs to be put in. In a subsequent panel session, the discussion turned to the question of how open the EHR vendors will be to offering the hooks needed for decision support. From what I could tell, these hooks are actually starting to form, with the major EHR vendors both embracing FHIR and exposing decision support as an API so content providers can put decision tools back into the point of care. I can imagine this becoming an intense area of focus for all parties at this one interface point.
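For a flavor of what those hooks look like in practice, the CDS Hooks specification (which builds on FHIR) has decision-support services return “cards” that the EHR renders at the point of care. The service and card content below are invented; only the card shape follows the spec.

```python
import json

# A minimal CDS Hooks-style response: a decision-support service returns
# "cards" the EHR can render next to the chart. All content is invented.
def build_card(summary, detail, indicator="info"):
    # Per the spec, indicator is one of "info", "warning", "critical".
    assert indicator in {"info", "warning", "critical"}
    return {
        "summary": summary,      # short, plain-language headline
        "detail": detail,        # optional longer body
        "indicator": indicator,  # urgency shown to the clinician
        "source": {"label": "Example Decision Support Service"},
    }

response = {"cards": [build_card(
    "Patient may be eligible for an open asthma trial",
    "Eligibility inferred from problem list and recent encounters.")]}
print(json.dumps(response, indent=2))
```

The interesting business question is who owns this boundary: the EHR renders the card, but the model behind it can live with a third-party content provider.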

No conference about AI in healthcare would be complete without the audience and panelists pointing out that the algorithms themselves are a far easier problem than the cultures of data protection, commercial blockage of data sharing, and limited capacity for cleansing assets. At some point, a panel including Niall Brennan (CMS), Elliot Cohen (PillPack) and Iya Khalil (GNS Healthcare) faced a question from the audience about how to solve this problem. The answer hovered somewhere around fixing leadership by replacing leaders at institutions with folks who ‘get it’. I would hope we can also still work with existing leadership in health systems and pharma on governance and change management that achieves the critical aim of liberating the data. But overall, it was clear from Khalil’s comments that the tools in place in healthcare still aren’t on par with those in other industries, and we have room to grow before we can get to the data we need to apply great algorithms in healthcare.

Tim Delisle, CEO of Datalogue, provided a perspective on the data preparation side, which is another way to pave the path for AI. Delisle shared how his company is using AI to detect mappings and perform the parsing normally done through painstaking ETL, preparing data for AI use.
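Delisle didn’t detail Datalogue’s approach, but the flavor of the problem — guessing what each raw column contains before mapping it into a target schema — can be sketched with simple per-column scoring. The rules below are hand-rolled stand-ins; a learned system would replace them with trained classifiers.

```python
import re

# Hypothetical rules for guessing a column's semantic type from its values.
PATTERNS = {
    "date": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "zip_code": re.compile(r"^\d{5}$"),
    "icd10_code": re.compile(r"^[A-Z]\d{2}(\.\d+)?$"),
}

def guess_column_type(values):
    """Return the type whose pattern matches the largest share of values."""
    best_type, best_score = "unknown", 0.0
    for name, pattern in PATTERNS.items():
        score = sum(bool(pattern.match(v)) for v in values) / len(values)
        if score > best_score and score >= 0.8:  # require a strong majority
            best_type, best_score = name, score
    return best_type

print(guess_column_type(["2017-05-24", "2017-05-25", "2016-11-02"]))  # date
print(guess_column_type(["J45.909", "E11.9", "I10"]))                 # icd10_code
```

Once column types are inferred, the mapping into a target schema — the painstaking part of ETL — becomes a matching problem rather than hand-written code.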

There was also a clear signal from the summit that we need to think differently if we want to incorporate machine learning with success. Harvard Medical School’s Zak Kohane responded to questions about how to make a safe and effective policy around AI helping in medical decisions with the rebuke that the system is already broken. He noted that we already have tests with accuracy problems, and that in other areas of our lives––like the black-box algorithm of a self-driving car that holds passengers’ lives in its hands––we hold technologies to a lower standard than we do in medicine.

The conference included multiple interesting cases where predictors could be found in unexpected places. Kohane shared research that found claims data more predictive than genomic tests for parents of one autistic child who want to understand the probability that a second child will receive an autism diagnosis. I found the story moving, in that it points to the power of even simple data sets like claims data to find signals of interest. We need ways to either pose questions better and faster against those large data sets or figure out a way to have an AI do it at scale.
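This is not the study’s model, but the reason claims data carries signal can be sketched in a few lines: represent each member as a bag of billing codes and fit a plain logistic model. The cohort below is invented; the codes are illustrative ICD/CPT-style codes.

```python
import numpy as np

# Illustrative claim codes; real studies use vocabularies of thousands.
codes = ["99213", "F84.0", "H65.3", "Z00.129", "R62.50"]
idx = {c: i for i, c in enumerate(codes)}

def featurize(claims):
    """Bag-of-codes vector: 1 if the member's history contains the code."""
    x = np.zeros(len(codes))
    for c in claims:
        x[idx[c]] = 1.0
    return x

# Toy cohort: label-1 members disproportionately carry F84.0 / R62.50.
positives = [["F84.0", "R62.50", "99213"], ["F84.0", "H65.3"], ["R62.50", "F84.0"]]
negatives = [["99213", "Z00.129"], ["H65.3", "99213"], ["Z00.129"]]
X = np.array([featurize(c) for c in positives + negatives])
y = np.array([1, 1, 1, 0, 0, 0], dtype=float)

# Plain logistic regression trained by gradient descent.
w = np.zeros(len(codes))
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

preds = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(int)
print(preds.tolist())  # [1, 1, 1, 0, 0, 0]
```

The surprise in Kohane’s result is that such unglamorous features, at scale, can beat a genomic test for this question.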

I also had the chance to connect with a researcher from a pharma company who is now doing behavioral-analysis research on zebrafish. The question raised is whether video processed through a deep learning model like a recurrent neural network could evaluate changes in behavior, both from genetic mutations and from targeted drugs meant to restore behavioral function. Overall, there was a theme that ‘image’ and ‘free text’ based problems show a greater degree of tractability with current data sets and algorithms.

PathAI’s CEO, Andy Beck, demonstrated how his company is using machine learning to improve breast cancer diagnostics, and Harvard Medical School’s Hugo Aerts, Scientific Advisor to Sphera, showed how lung cancer detection can be improved. An impressive moment came when Andy Beck showed an image in which the AI had highlighted areas for the pathologist to focus on and offered decision support suggesting what they contained. It was clear that this AI + pathologist model is the needed future for any medical decision that involves hunting for bits of information in large image fields. Hugo also put up a great slide showing the process of going from learning model to diagnostic, one that likely generalizes beyond imaging-based AI-driven diagnostics.

Benevolent AI CEO Jérôme Pesenti, who created and led the development of the Watson platform, shared a detailed view of the many steps in the drug discovery process where AI can support acceleration. They built these pieces themselves and have been moving towards best-of-breed components. It’s clear that knowledge graphs generated from combinations of unstructured and structured data sets––both proprietary and open––are going to be a key asset in accelerating drug discovery.

I spent a bit of time at the BioIT World conference this week as well, and it’s clear that the barrier behind the data-access barrier is the work it takes to make a meaningful knowledge graph of scientific knowledge. Many groups will build internal solutions to enable their data and content in ways similar to Benevolent AI, and it will be interesting to see how efficient those engines are. From what I can tell, groups like Benevolent AI have concluded that you can’t treat this as a solo sport, with technologists making the graph tools and practitioners like scientists executing the research that uses them; to succeed, the teams must become one. A great example was the AI-driven drug-hypothesis-generation machine Jérôme described, followed by his explanation of how difficult it is to determine whether a hypothesis is actually novel. Because of this need for integration, we will keep seeing in silico groups like Benevolent AI work directly in early-stage drug discovery, licensing products rather than software.
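The mechanics of such a graph can be sketched simply: triples from structured sources and text mining merge into one store that can be traversed for hypotheses. All entities and edges below are invented placeholders, not real biology.

```python
from collections import defaultdict

# Triples from a structured source and from (imagined) text mining.
structured = [("drug_A", "inhibits", "kinase_X"),
              ("kinase_X", "phosphorylates", "protein_Y")]
text_mined = [("protein_Y", "implicated_in", "disease_Z"),
              ("drug_B", "inhibits", "kinase_X")]

graph = defaultdict(list)
for subj, rel, obj in structured + text_mined:
    graph[subj].append((rel, obj))

def paths_to(start, target, path=()):
    """Enumerate relation paths from start to target (simple DFS)."""
    if start == target:
        yield path
        return
    for rel, obj in graph[start]:
        if obj not in {o for _, o in path}:  # avoid revisiting nodes
            yield from paths_to(obj, target, path + ((rel, obj),))

# A crude "hypothesis": does any mechanism chain link drug_A to disease_Z?
for path in paths_to("drug_A", "disease_Z"):
    print(" -> ".join(f"{rel} {obj}" for rel, obj in path))
```

The hard parts Jérôme described live outside this sketch: extracting trustworthy triples from literature at scale, and judging whether a surfaced chain is genuinely novel.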

Overall, it was a great day, and I was sorry to leave before Len D’Avolio spoke about Cyft (one highlight above). As a final parting thought, the current environment fully echoes Andrew Ng’s idea that AI is the new electricity. The intersection of AI and healthcare, and its potential for innovation, is at the heart of rebuilding a very different new world. I am glad to be along for the ride, even if I am not sure what black box is helping me drive.


Written by Pillar

Venture capital doesn't have to be the dark side. Investing in unstoppable founders at @Algorand, @PillPack, @DesktopMetal, @PathAI & more. www.pillar.vc