NLP and Machine Learning in Health: Designing a Chatbot for PTSD Assessment

Overcoming data challenges, zooming in on the problem, and building a Machine Learning backend to add intelligence to the chatbot.

Petar King
Published in Omdena · 4 min read · Sep 4, 2019


While working on the Omdena PTSD challenge, part of the company’s AI for Good initiative, it quickly became clear to us that the field of medicine poses many unique challenges that make it harder for chatbots to be implemented and to generate value. However, through the power of community collaboration, we identified the most promising direction and achieved strong results.

#1 Identifying the right problem to be solved

The scope of our chatbot depended on the ML and other challenge teams; after all, a chatbot is just a vehicle through which certain processes are sped up and automated.

After thoughtful discussions and investigation of the available data, which turned out to be very sparse (due to privacy and limited open datasets), we agreed to focus our efforts on assisting professionals in screening refugees, veterans, and other groups at high risk of PTSD, and on assessing the likelihood that medical assistance is needed.

The problem definition

Build a chatbot that, through a conversation with people at risk, provides sufficient information for the Machine Learning team to make a PTSD risk assessment.

#2 Aligning on the requirements

For our solution to be viable and useful in real-world conditions, it had to satisfy a number of requirements, each of which posed a significant challenge:

Stability

  • Always keep in mind the target group of the product you’re building. In our case, it is people who have potentially experienced severe trauma, possibly in the recent past. We therefore found it necessary to guarantee a high degree of stability in our solution, both to avoid causing further frustration for our users and because the solution requires approval from medical professionals.

Chattiness

  • By chattiness, I mean the general ability of a chatbot to carry on and encourage a smooth, detailed conversation with a user. In our case, the data we used for the risk assessment classification were transcripts from therapy sessions with people diagnosed with PTSD, so a simple yes/no questionnaire was not an option; the chatbot had to prompt the user to write more detailed responses.

Intelligence

  • The main use of a chatbot is usually not just to carry on a conversation, but to intelligently infer intents, entities, and so on. Given the many constraints we faced, the most intelligence we could (and needed to) implement came in the form of handling problems that could arise throughout the chat and assisting the user when needed.

Challenges

The main challenges we faced arose from the requirements above and the conflicts between them.

Firstly, we had to walk a narrow line: ensure the stability of the conversation by following a scripted flow that is easily verifiable by medical professionals, while at the same time prompting the user to open up and write longer, more detailed answers.

While text generation using Machine Learning is an alluring trend in the industry, it could offer no guarantee that the chatbot would proceed as expected, generate enough data, and cause no confusion or harm.

With all that in mind, we chose to follow an existing screening questionnaire, CAPS-5 (the Clinician-Administered PTSD Scale for DSM-5), which is used by medical professionals and contains many open-ended questions. This gives us the required stability, ensures the user is prompted to answer broadly, and means the medical professionals who review the solution are already familiar with the approach.
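
To make the scripted-flow idea concrete, here is a minimal sketch of such a screening loop in Python. The prompts are placeholders rather than actual CAPS-5 items, and the MIN_WORDS threshold is an illustrative assumption, not part of the questionnaire.

```python
# A minimal sketch of a scripted, open-ended screening flow.
# The questions below are placeholders, not actual CAPS-5 items.

MIN_WORDS = 5  # illustrative threshold: re-prompt if the answer is too short

QUESTIONS = [
    "Can you describe the event that has been troubling you?",
    "How have you been sleeping since then?",
    "Tell me about situations you find yourself avoiding.",
]

def run_screening():
    """Walk the user through the scripted questions, collecting answers."""
    turns = []
    for question in QUESTIONS:
        print(f"Bot: {question}")
        answer = input("You: ").strip()
        # Encourage longer, open-ended answers instead of yes/no replies.
        while len(answer.split()) < MIN_WORDS:
            print("Bot: Could you tell me a little more about that?")
            answer = input("You: ").strip()
        turns.append((question, answer))
    return turns
```

Because the flow is fully scripted, every path through the conversation can be reviewed and signed off by medical professionals before deployment.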

While this addressed the first two requirements (stability and chattiness), we were still left with the question of how to add more value through the intelligence of the chatbot.

Since the data our ML team trained on was in transcript form, the output of the chatbot had to match it, so additional entity or intent recognition could not be used to aid the final assessment.
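
Concretely, the chatbot’s output can be flattened into a transcript before being passed to the classifier. The sketch below assumes a simple speaker-labelled text format; the labels and the single-string output are my assumptions, not the ML team’s actual specification.

```python
# A minimal sketch of flattening the chat into a transcript-style string.
# The "Interviewer"/"Participant" labels are assumed formatting, not the
# ML team's actual specification.

def to_transcript(turns):
    """Render (question, answer) pairs as a therapy-style transcript."""
    lines = []
    for question, answer in turns:
        lines.append(f"Interviewer: {question}")
        lines.append(f"Participant: {answer}")
    return "\n".join(lines)

# Usage, building on the screening loop sketched earlier:
#   transcript = to_transcript(run_screening())
#   risk_score = ml_backend.predict(transcript)  # hypothetical backend call
```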

More information about the ML backend we built can be found in this article by Natu Lauchande.

Our plan was to focus on aiding the user through the conversation by implementing understandable fallback intents, recognizing statements that indicate the user needs immediate help, and routing them to a professional if one is available.
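
The sketch below illustrates these two behaviours: a fallback reply when the bot cannot make sense of the input, and keyword-based detection of statements that suggest the user needs immediate help. The phrase list, the handoff function, and the reply wording are all illustrative assumptions, not the phrasing a deployed system should use.

```python
# A minimal sketch of fallback handling and crisis routing.
# CRISIS_PHRASES and the handoff below are illustrative assumptions.

CRISIS_PHRASES = {"hurt myself", "end my life", "can't go on"}

def needs_immediate_help(message: str) -> bool:
    """Very rough check for statements indicating acute distress."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def route_to_professional() -> str:
    # Hypothetical handoff; in practice this would notify a
    # professional on call rather than just change the reply.
    return ("It sounds like you may need support right now. "
            "I am connecting you with a professional.")

def handle_message(message: str) -> str:
    """Decide how to respond to a single user message."""
    if needs_immediate_help(message):
        return route_to_professional()
    if len(message.split()) < 2:
        # Understandable fallback intent instead of a confusing error.
        return "I'm sorry, I didn't quite catch that. Could you say a bit more?"
    return "Thank you for sharing. Let's continue."
```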

Conclusions

This has certainly been a very educational experience for all of us: being involved in a difficult real-life problem whose importance has been greatly motivating.

We learned to adjust our goals and expectations in accordance with the niche’s limitations and requirements, and to always keep the entirety of the solution in mind, not just the chatbot, including how it will end up being used by real people out there.

Finally, I want to share a message with the entire field of medicine.

If we want to move forward and help millions of people worldwide by using AI for Good, we need to open up our data to the wider public. With that, there is little limit to what can be achieved in the decades to come.

If you want to join one of Omdena’s challenges and make a real-world impact, apply here.

If you want to receive updates on our AI Challenges, get expert interviews, and practical tips to boost your AI skills, subscribe to our monthly newsletter.

We are also on LinkedIn, Instagram, Facebook, and Twitter.
