Building a Smart Preventive Health Coach at Singapore’s Health Hack 2024

How we built an empathetic chatbot to scale high-quality preventive health coaching

Marymount Labs
5 min read · Apr 4, 2024

Last month, our team’s smart health coaching assistant won top prize in Health Hack 2024, hosted by the National University of Singapore Yong Loo Lin School of Medicine.

Marymount Labs was awarded 1st Prize at the NUS Health Hack 2024

Why We Built a Smart Health Coach

Singapore’s preventive health landscape is far from ideal. Fewer than 1 in 3 adults are vaccinated annually, fewer than half of eligible adults are screened for common cancers, and most chronic disease patients have poorly controlled indicators.

After talking to GP doctors and Primary Care Network (PCN) coordinators, we found that manpower constraints were simply too tight to scale up high-quality preventive health education.

With this insight, we decided to leverage technology to convince people to take better ownership of their health. Specifically, we set out to build an empathetic chatbot that encourages preventive health actions such as vaccinations and cancer screening.

Building Empathy into Large Language Models

LLMs are great at mimicking human-like dialogue. Deploying them for preventive health coaching seems straightforward, but off-the-shelf solutions like ChatGPT or Claude fall short: generated replies come across as superficial, out of context and unempathetic. They overlook the intricate web of beliefs and emotions that shape a person’s health behaviour. For many people, getting vaccinated is about more than knowing the benefits: some fear needles, others find it inconvenient, and still others buy into conspiracy theories.

But how can we programmatically teach a chat agent to be more ‘empathetic’?

Digging deep into medical and socio-psychological literature, we found a goldmine of frameworks that medical practitioners have been using to encourage health behaviour change. In particular, the Health Belief Model (HBM) identifies intrinsic factors that influence a person’s health decision making.

The Health Belief Model (Rosenstock, 1974) was used to anchor our chat agent

By building the HBM into our chatbot as a guidance system, we found that our dialogue planning agent could anticipate and respond sensitively to a person’s underlying health beliefs. This enables the chatbot to nudge people towards healthier decisions by gradually reinforcing positive beliefs and addressing negative ones.
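To make this concrete, here is a minimal sketch of how the HBM constructs could be represented in code. The construct names follow the model as commonly applied in practice; the ordinal scoring range and the `BeliefState` class are illustrative choices for this post, not a fixed specification.

```python
from dataclasses import dataclass, field

# Constructs commonly used with the Health Belief Model (Rosenstock, 1974),
# each paired with the question it captures about the user.
HBM_CONSTRUCTS = [
    "perceived_susceptibility",  # "Am I likely to get this disease?"
    "perceived_severity",        # "How serious would it be if I did?"
    "perceived_benefits",        # "Will the vaccine or screening actually help me?"
    "perceived_barriers",        # "Is it painful, costly or inconvenient?"
    "cues_to_action",            # "Has anything prompted me to act?"
    "self_efficacy",             # "Am I confident I can follow through?"
]

@dataclass
class BeliefState:
    """Per-user belief profile. Scores are ordinal: -2 (strongly negative) to +2 (strongly positive)."""
    scores: dict[str, int] = field(default_factory=lambda: {c: 0 for c in HBM_CONSTRUCTS})
```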

Developing the Architecture

Scoring Agent

The backbone of our chatbot is a belief monitoring system grounded in the HBM. It dynamically updates its understanding of the person’s health beliefs. These changes are captured on our admin dashboard, thus providing healthcare professionals with an intuitive understanding of a patient’s progress or resistance from a psychological perspective. This allows our chatbot to deliver personalised, targeted health advice.

Our chatbot comes with an admin dashboard that allows clinicians to track the health beliefs of patients
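As an illustration only, a scoring pass over each user turn might look something like the sketch below. It assumes an OpenAI-style chat-completions client with JSON-mode output; the model name, prompt wording and the -2..+2 scale are placeholders rather than our production configuration, and `BeliefState` is the dataclass from the earlier sketch.

```python
import json
from openai import OpenAI  # illustrative; any chat-completion client would work

client = OpenAI()

SCORING_PROMPT = (
    "You are a belief-scoring assistant. Given the user's latest message, rate each "
    "Health Belief Model construct on an integer scale from -2 (strongly negative) to "
    "+2 (strongly positive). Return a JSON object keyed by construct name. "
    "Constructs: {constructs}"
)

def score_beliefs(user_message: str, state: BeliefState) -> BeliefState:
    """Score the latest turn with the LLM, then fold the result into the running belief state."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SCORING_PROMPT.format(constructs=", ".join(HBM_CONSTRUCTS))},
            {"role": "user", "content": user_message},
        ],
    )
    new_scores = json.loads(response.choices[0].message.content)
    for construct, score in new_scores.items():
        if construct in state.scores:
            state.scores[construct] = int(score)  # simplest update: latest estimate wins
    return state
```

In practice the update rule would likely smooth scores across turns rather than overwrite them; the admin dashboard then simply plots each construct’s score over the course of the conversation.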

Dialogue Agent

Based on the chatbot’s understanding of the person’s health beliefs, it adaptively optimises its persuasion strategies to nudge the person towards taking preventive health action.

Early results show that our chatbot is capable of developing empathetic conversations
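A hedged sketch of the dialogue side: target the construct the user currently holds most negatively, and instruct the generation model to address it. The strategy wording and helper names are illustrative, and a real planner would weigh more than the single lowest score.

```python
# Map each construct to the persuasion angle a reply should emphasise when that belief is weak.
STRATEGIES = {
    "perceived_susceptibility": "gently explain the user's personal risk factors",
    "perceived_severity": "describe the consequences of late detection, without fear-mongering",
    "perceived_benefits": "highlight concrete benefits of the vaccine or screening",
    "perceived_barriers": "acknowledge the stated barrier (cost, pain, time) and problem-solve it",
    "cues_to_action": "suggest one small, specific next step, such as booking a slot",
    "self_efficacy": "affirm that the action is simple and within the user's control",
}

def plan_reply(user_message: str, scores: dict[str, int]) -> str:
    """Steer the reply towards the most negatively held belief."""
    target = min(scores, key=scores.get)
    response = client.chat.completions.create(  # reuses the client from the scoring sketch above
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an empathetic preventive-health coach. In your next reply, "
                    f"{STRATEGIES[target]}. Never pressure the user."
                ),
            },
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```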

Retrieval Augmented Generation (RAG)

To ensure the advice is contextually relevant, we incorporated RAG techniques to limit generated replies within the context of Singapore’s healthcare system.
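A minimal retrieval sketch, assuming an embedding model and a small local corpus of Singapore-specific guidance (the passages shown are placeholders): embed the corpus once, retrieve the closest passages for each query, and prepend them to the dialogue agent’s prompt.

```python
import numpy as np

# Placeholder corpus; in practice this would be curated national screening and vaccination guidance.
CORPUS = [
    "Adults aged 50 and above are recommended to screen regularly for colorectal cancer.",
    "Annual influenza vaccination is recommended for seniors and people with chronic conditions.",
]

def embed(texts: list[str]) -> np.ndarray:
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)  # reuses the client above
    return np.array([item.embedding for item in result.data])

CORPUS_VECTORS = embed(CORPUS)

def retrieve_context(query: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query by cosine similarity."""
    q = embed([query])[0]
    sims = CORPUS_VECTORS @ q / (np.linalg.norm(CORPUS_VECTORS, axis=1) * np.linalg.norm(q))
    return [CORPUS[i] for i in np.argsort(sims)[::-1][:k]]
```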

Ethical Guardrails

By implementing stringent guardrails within system prompts, we ensure that the chatbot operates within the boundaries of patient autonomy and privacy, fostering an environment of trust and safety.
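For illustration, guardrails of this kind can be expressed as a fixed preamble composed into every system prompt. The wording below sketches the categories we cover (scope of advice, autonomy, privacy, escalation) and is not our exact production prompt.

```python
GUARDRAIL_PREAMBLE = """You are a preventive-health coaching assistant, not a doctor.
- Do not diagnose conditions or adjust medication; refer such questions to the user's GP.
- Respect patient autonomy: inform and encourage, never coerce or shame.
- Do not ask for or repeat identifiable personal information beyond what the user volunteers.
- If the user describes urgent symptoms, advise them to seek immediate medical care."""

def build_system_prompt(strategy_hint: str, context_passages: list[str]) -> str:
    """Compose the final system prompt: guardrails first, then the coaching focus, then retrieved context."""
    return (
        f"{GUARDRAIL_PREAMBLE}\n\n"
        f"Coaching focus: {strategy_hint}\n\n"
        "Local context:\n" + "\n".join(context_passages)
    )
```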

Going the Next Mile

As we look forward, we are excited about the potential of our chatbot, especially its use cases within preventive health coaching. Some enhancements we are working on include:

  • Probabilistic Scoring of Beliefs: We plan to evolve the scoring from an ordinal to a probabilistic model, allowing a more detailed and nuanced understanding of user beliefs and capturing a full spectrum of attitudes towards health behaviours (see the sketch after this list).
  • Persuasiveness Evaluation: We also plan to empirically evaluate the chatbot’s effectiveness in achieving preventive health objectives compared to more conventional techniques and physical nurse education sessions.
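One possible shape for that probabilistic model, sketched below purely as an assumption on our part: track each construct as a Beta distribution over the probability that the belief is held positively, so the agent also knows how uncertain its estimate still is.

```python
from dataclasses import dataclass

@dataclass
class BeliefDistribution:
    """Beta-distributed belief: alpha/beta are pseudo-counts of positive/negative evidence."""
    alpha: float = 1.0
    beta: float = 1.0

    def update(self, positive_evidence: bool) -> None:
        # Each scored turn contributes one unit of evidence for or against the belief.
        if positive_evidence:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self) -> float:
        """Expected probability that the user holds the belief positively."""
        return self.alpha / (self.alpha + self.beta)

    @property
    def variance(self) -> float:
        """High variance means the chatbot still knows little about this belief."""
        a, b = self.alpha, self.beta
        return (a * b) / ((a + b) ** 2 * (a + b + 1))
```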

At scale, our chatbot could be used by thousands of people, giving us a detailed picture of how people respond to different health messaging. Over time, our model can learn to craft the most compelling health messages for each person’s health and social profile.

If you are interested in piloting our chatbot in your healthcare or care setting, or just want to find out more, do reach out to us at helpdesk@marymountlabs.com!

Jessie is an incoming freshman interested in Machine Learning and Data Science. She is currently a Research Intern with Marymount Labs, researching how LLMs can be deployed for preventive care.

References

Gao, Y. (2023). Retrieval-Augmented Generation for Large Language Models: A survey. arXiv. https://arxiv.org/abs/2312.10997

Hu, Z., Feng, Y., Deng, Y., Li, Z., Ng, S., Luu, A. T., & Hooi, B. (2023). Enhancing large language model induced task-oriented dialogue systems through look-forward motivated goals. arXiv. https://doi.org/10.48550/arxiv.2309.08949

Jang, Y., Lee, J., & Kim, K.-E. (2020). Bayes-Adaptive Monte-Carlo Planning and Learning for Goal-Oriented Dialogues. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7994–8001. https://doi.org/10.1609/aaai.v34i05.6308

Lau, J., Lim, T.-Z., Jianlin Wong, G., & Tan, K.-K. (2020). The health belief model and colorectal cancer screening in the general population: A systematic review. Preventive Medicine Reports, 20, 101223. https://doi.org/10.1016/j.pmedr.2020.101223

Reddy, S. (2023). Evaluating large language models for use in healthcare: A framework for translational value assessment. Informatics in Medicine Unlocked, 41, 101304. https://doi.org/10.1016/j.imu.2023.101304

Rosenstock, I. M. (1974). The Health Belief Model and Preventive Health Behavior. Health Education Monographs, 2(4), 354–386. http://www.jstor.org/stable/45240623

Yu, X., Chen, M., & Yu, Z. (2023). Prompt-Based Monte-Carlo Tree Search for Goal-oriented Dialogue Policy Planning. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://doi.org/10.18653/v1/2023.emnlp-main.439

