In the latest AIMed webinar, held on March 26, 2019, Molly K. McCarthy, National Director, US Provider Industry and Chief Nursing Officer at Microsoft, and John Frownfelter, Chief Medical Information Officer at Jvion, spoke about how to shape and set artificial intelligence (AI) goals within healthcare. The hour-long session was facilitated by Dr. Anthony Chang, AIMed Founder and Chairman and Chief Intelligence and Innovation Officer of Children’s Hospital of Orange County (CHOC).
AI presents a tremendous opportunity to impact patient lives and improve outcomes. But as with any new technology, there is risk involved. Hospitals may direct their AI resources to the wrong areas. They may misalign AI investments to the needs of the organization. They may invest in solutions that aren’t proven or don’t work. Frownfelter and McCarthy shared how to avoid these pitfalls, what to expect from AI within healthcare over the next five years, how to define and build a successful AI strategy that aligns to hospital goals, and how to operationalize an AI strategy effectively.
Key components that enable AI’s success
Frownfelter explained that for an organization to build a successful AI strategy, leadership should start by understanding where the organization is in its analytics maturity. They must look at whether there is executive sponsorship for an AI-related initiative and understand who is going to drive implementation and adoption at the highest level.
Also, it is crucial to look at existing resources within the organization. Things like analytics architecture and the accessibility of data are critical to AI regardless of whether the approach is to build or to buy a solution. Newer approaches that use cloud-based solutions can leapfrog the traditional limitations seen with analytics and should be kept in mind. Even that isn’t enough, however: implementing AI begins with a problem that needs to be solved, rather than with a “solution that is looking for a problem.” Frownfelter stated that an organization might be in a disadvantaged position if they come forward and say, “we want to invest in AI, but we are not sure where.”
McCarthy agreed. She encouraged conversations between departments as they decide on an AI strategy. This dialogue should include competency building to ensure the right technical aptitude of individuals who are supporting and driving the success of the project within the organization. McCarthy also emphasized the need to change the way organizations frame an AI project. Thinking about AI and driving the success of these programs require a different approach to management and messaging.
McCarthy added that healthcare organizations should look outside of their four walls for best practices. “Not just (within the) organization… but potentially looking at your consumer and patient base, as well as… folks outside of healthcare, in retail or other industries where AI has been implemented, adopted and successful,” she said.
The speakers also touched on the cultural requirement to assimilate AI into healthcare. As Frownfelter highlighted, AI is both disruptive and threatening especially to physicians and nurses, who were taught and trained to reject anything they cannot see, touch, feel and understand. AI doesn’t easily fit into the current technology adoption framework. The validation approach to AI outputs requires a different perspective. Leadership has to understand and own what is needed and determine how to communicate success in an agile and meaningful way.
AI isn’t a silver bullet, it is a tool
Both speakers warned of the danger of underestimating the complexity of healthcare when building AI solutions that target it. Often, when organizations face the “build it or buy it” dilemma, they choose the former without realizing the work effort required to establish an AI solution.
The risk of misjudging work effort is especially prominent for organizations with robust teams of data scientists, who can fall into the false perception that they can create anything. “Most of the time, they can’t. Then there is a gap between what’s been promised and what’s been delivered, and the clinical community gets frustrated. So, creating some boundaries and focus for who does what is very important,” Frownfelter said. A useful exercise to illustrate this would be a cost and time analysis for an internal team to develop a working predictive model, and then, further, to operationalize and adopt that model.
Likewise, it is too simplistic to think that AI is going to do everything or that it has the potential in the near-term to replace clinicians. As Frownfelter suggested, we should think of AI as a “lab test,” something that physicians and nurses use to interpret what they are seeing and make more informed decisions that help to focus resources, clinical actions, and communications to ensure the right steps are taken on the right patients at the right time.
Ultimately, if an organization is willing to invest in its own AI solutions, it will have to be mindful of the target and the timeline to address it. Building AI internally can extend the process considerably. It may take years to develop something novel, a cycle that repeats during testing and implementation. Frownfelter noted that, most of the time, organizations embark on building something on their own until they realize the associated effort and risk. There is still a place for data science teams within hospitals, he noted. It comes down to strategically combining existing solutions with internal capabilities to make the most of AI investments.
A use case example
To illustrate the successful operationalization of an AI strategy, Frownfelter highlighted a large, integrated delivery network based in the Pacific Northwest. The organization is using Jvion’s technology to look into a challenging topic: discharge optimization. The goal is to reduce avoidable extended inpatient stays that are the result of preventable complications, adverse events, and avoidable delays in discharge.
The Jvion Machine’s outputs integrate into the provider’s Discharge Planning Integrated Model. The provider transmits real-time patient data to Jvion. This data is analyzed, and individual patient risk levels and interventions are delivered every 24 hours through the provider’s Electronic Health Record (EHR). Patient risk and intervention information is integrated into existing physician and staff workflows.
Frownfelter explained that the Machine’s interventions supplement existing efforts to identify and communicate target discharge dates and to support the timely discharge of hospital patients across care management teams, RNs, and hospitalists. As a result, the provider is ensuring appropriate care progression, optimizing discharge activities, and providing timely and appropriate communication to families and patients.
What can’t AI do?
Dr. Chang concluded the session with a thought-provoking question: what do you think is not going to be good for AI? McCarthy focused on the human touch. “I think ultimately, at the end of the day, healthcare is about people and truly technology has its place, but you don’t want to lose that human connection,” she explained.
Frownfelter echoed a similar sentiment. He said that if he were a patient one day, he would not look to a machine for empathy. He wants a physician or a nurse at his bedside because “empathy will never be replaced by AI.”
This article was published in AIMed Magazine Vol 2, #2, the Cardiology & Radiology issue, pages 40 to 43, May 2019.