What’s more, Lucas, Gratch, King, & Morency (2014) showed that “participants who believed they were interacting with a computer reported lower fear of self-disclosure, lower impression management, displayed their sadness more intensely, and were rated by observers as more willing to disclose. These results suggest that automated [health coaches] can help overcome a significant barrier to obtaining truthful patient information.”
Interestingly, reading more recent work on Organismic Integration in Deci & Ryan’s new Handbook helped me put Lucas et al. (2014) into context.
The participants in Lucas et al. (2014) were almost all struggling to initiate behavior, not maintain it. The quality of motivation regulation differs between initiation and maintenance. People struggling to stay motivated long enough to initiate a behavior are almost always dealing with conflicting approach/avoidance impulses, because their needs for autonomy and relatedness are in conflict. For them, fear of judgement is paramount, so a chatbot is an obvious fit.
People struggling to maintain a behavior, by contrast, have congruent autonomy and relatedness needs: their fear of judgement is much lower, and their desire to connect with others who share their values and identity is much higher. For them, connecting with other humans is paramount, so a chatbot is a bad idea.