Hippocratic AI CEO Munjal Shah On Separating Hype From Hope In AI For Health Care

Munjal Shah
5 min read · Feb 22, 2024


Artificial intelligence ruled the headlines in 2023, with a massive upsurge ushered in by generative AI applications like ChatGPT. Entrepreneurs and tech enthusiasts seem to tout new use cases for the technology every day, and no industry is untouched by its “game-changing” potential. Of course, as with any tech boom, some projects will inevitably be overhyped, and not all claims of an AI-powered future will come to fruition.

For Munjal Shah, a serial entrepreneur who sold his previous AI-based companies to Google and Alibaba, the key to leveraging this nascent but powerful technology is to take a more grounded approach, using it to find workable solutions to tangible, present-day problems. His new startup, Hippocratic AI, stands out as a straightforward, practical application of large language models in health care. The company is taking a distinctive approach, building a safety-focused LLM designed exclusively for nondiagnostic health care applications such as patient navigation, dietary advice, medication reminders, and patient onboarding.

This strategy reflects a conscious effort to sidestep the complexities and risks of using AI as a diagnostic tool, aiming instead to enhance health care delivery in areas that are critical but less fraught with potential pitfalls. Munjal Shah’s hope is that Hippocratic AI can help address the health care staffing crisis in the U.S. and around the world.

“What are the things we don’t even do today that we would do? Would we call every patient two days after they start every medication just to check in and see if they’re having any weird side effects? Of course we would,” says Shah. “I think the way to make AI trustworthy is to pick the right applications that are safer, that have lower risk. I don’t want to solve diagnoses. I think it’s too high risk.”

A cornerstone of Hippocratic AI’s approach is reinforcement learning from human feedback (RLHF), built on the idea that health care professionals should be integral to training and refining the company’s LLM.

Human health care workers train the model by correcting its responses based on their own experience and expertise. The model internalizes this feedback and takes it into account in future responses. The effort is meant to ensure that Hippocratic AI offers top-quality care and meets the safety standards of licensed health care professionals.
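
For readers curious about the mechanics, the sketch below illustrates one common way clinician feedback is structured for RLHF: corrections are turned into preference pairs that can later train a reward model, which in turn guides fine-tuning. The data fields and the `build_preference_pairs` helper are hypothetical illustrations under general RLHF assumptions, not Hippocratic AI’s actual pipeline.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of how clinician feedback might be structured for RLHF.
# Field names and the helper below are illustrative, not Hippocratic AI's pipeline.

@dataclass
class ClinicianFeedback:
    prompt: str               # what the simulated patient said
    model_response: str       # what the LLM answered
    corrected_response: str   # the nurse's preferred wording, if any
    safety_rating: int        # e.g., 1 (unsafe) to 5 (safe), assigned by the reviewer

def build_preference_pairs(feedback: List[ClinicianFeedback]) -> List[dict]:
    """Turn clinician corrections into (preferred, rejected) pairs.

    In RLHF, pairs like these are typically used to train a reward model,
    which then scores candidate responses during reinforcement-learning
    fine-tuning of the base LLM.
    """
    pairs = []
    for item in feedback:
        if item.corrected_response and item.corrected_response != item.model_response:
            pairs.append({
                "prompt": item.prompt,
                "preferred": item.corrected_response,
                "rejected": item.model_response,
            })
    return pairs

if __name__ == "__main__":
    log = [
        ClinicianFeedback(
            prompt="I felt dizzy after starting my new blood pressure pill.",
            model_response="That's normal, don't worry about it.",
            corrected_response=("Dizziness can be a side effect. Please sit down, "
                                "drink some water, and contact your prescriber today."),
            safety_rating=2,
        ),
    ]
    for pair in build_preference_pairs(log):
        print(pair["preferred"])
```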

“We are actually going to have 1,000 nurses interacting with our large language model as if they’re patients and it’s the chronic care nurse,” Munjal Shah explains. “These are real, licensed, registered nurses and they will basically do a blind test where they don’t know if they’re talking to a real nurse or an AI nurse. And only when they think it’s safe will we launch it.”

The LLM is also trained on thousands of evidence-based medical research articles, as well as textbooks, certification exams, and other specialized material. What emerges from this training, notes Shah, is an LLM that can pass the same tests a human would need to pass in order to deliver the type of care the AI is designed to provide.

Hippocratic AI’s LLM has been tested on 114 certifications: 106 medical role-based examinations, three standard published benchmarks, and five novel bedside manner benchmarks. It has outperformed GPT-4 and other LLMs on the majority, including all major clinical exams.
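
As a rough illustration of what scoring a model on a role-based, multiple-choice certification exam can look like, here is a minimal sketch. The `ExamItem` format, the `ask_model` callable, and the 75% pass threshold are assumptions made for demonstration, not details of Hippocratic AI’s published evaluation.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of scoring an LLM on a multiple-choice certification exam.
# The item format, the ask_model callable, and the pass threshold are assumptions.

@dataclass
class ExamItem:
    question: str
    choices: List[str]   # e.g., ["A) ...", "B) ...", "C) ...", "D) ..."]
    answer: str          # correct choice letter, e.g., "B"

def score_exam(items: List[ExamItem],
               ask_model: Callable[[str], str],
               pass_threshold: float = 0.75) -> dict:
    """Ask the model each question and compare its letter choice to the answer key."""
    correct = 0
    for item in items:
        prompt = (item.question + "\n" + "\n".join(item.choices) +
                  "\nAnswer with a single letter.")
        if ask_model(prompt).strip().upper().startswith(item.answer):
            correct += 1
    accuracy = correct / len(items) if items else 0.0
    return {"accuracy": accuracy, "passed": accuracy >= pass_threshold}

if __name__ == "__main__":
    # Stand-in "model" that always answers B, just to show the flow end to end.
    demo_items = [
        ExamItem(
            question="Which vital sign is measured in mmHg?",
            choices=["A) Heart rate", "B) Blood pressure",
                     "C) Respiratory rate", "D) Temperature"],
            answer="B",
        ),
    ]
    print(score_exam(demo_items, ask_model=lambda _prompt: "B"))
```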

Bedside manner is another key focus for Hippocratic AI. Recognizing that effective communication in health care involves more than conveying accurate information, the company is working to give its model a more empathetic and compassionate communication style.

These qualities are widely acknowledged as vital to patient well-being and the overall quality of health care outcomes. A 2018 survey found that 85% of respondents valued compassion over cost when choosing a doctor. Unfortunately, compassion isn’t always easy to find: another survey found that over 70% of patients experienced a lack of empathy from their doctor and felt rushed during their visit.

Shah points out that one of the fundamental insights behind Hippocratic AI was that, unlike human health care workers, LLMs don’t experience the burnout and stress that can drain patience and undermine compassion. They have unlimited time to interact with patients, and they can be trained to do so in a considerate, sympathetic tone.

“Most bedside manner is just spending time,” he says. “We can teach [the LLM] to spend 30 minutes talking to you about whatever you want and then say, ‘Hey, I’m here to remind you to take your medications.’”

Chatbots may actually be better at communicating compassion than human doctors. A 2023 study published in JAMA Internal Medicine reported that chatbot responses exhibited 41% more empathy than those from physicians. Physicians were five times more likely to give responses rated less than slightly empathetic, while the chatbot’s responses were nine times more likely to be rated empathetic or very empathetic.

In line with this result, some doctors are already using ChatGPT to help them find the right words to communicate with patients. Hippocratic AI takes this insight one step further, allowing the AI to communicate directly with a patient in contexts in which there’s no risk of misdiagnosis.

There could be another upside to this approach: Patients may be more likely to be candid with AI about facts they’d be too embarrassed to tell a human health care provider.

“Imagine you’re unhoused, you don’t have enough food,” says Shah. “And it’s really embarrassing to admit you don’t have enough food. It’s truly a shame that most people feel, even though they shouldn’t feel shame. But they do and they won’t admit it, and maybe they’ll tell the LLM.”

Munjal Shah’s Hippocratic AI showcases how generative AI could transform health care. Yet the company’s approach underscores the need for a grounded outlook, one in which excitement about AI’s potential is balanced by a focus on safety and practical application. It could become a leading example of how LLMs in health care can be part of solutions that are effective, ethical, and empathetic, and that align with the core values of patient-centered care.

Originally published at https://washingtonindependent.com.


Munjal Shah

Munjal Shah is an experienced investor at the intersection of tech and medicine, and the CEO and co-founder of Hippocratic AI.