A chatbot using music to improve your emotions

Why and how we designed a chatbot that uses music to improve the emotional state of users

Cécile Grand
Empathic Labs
Oct 15, 2020


Photo by Johnny McClung on Unsplash

Why a chatbot in therapy?

A chatbot used in therapy has many advantages. More people can benefit from treatment because it is cheap and requires no travel: you can stay in bed and have a session whenever you feel sad, stressed, or angry. A chatbot is also easy to use, requires no special training, and can even serve as simple entertainment. Finally, some people are afraid to confide in another person, including a therapist, for fear of being betrayed or ridiculed; these fears disappear when talking to a conversational agent.

There are already chatbots available for therapeutic purposes: Woebot, which uses cognitive-behavioral therapy (CBT), and SERMO, which proposes art therapy.

Photo by Volodymyr Hryshchenko on Unsplash

Why use music to improve our emotions?

Music is a widely appreciated activity, and everyone can at least listen to it. One of the reasons we like listening to music is that it evokes emotions and can change them. This arises from an interaction between the properties of the music (melody, lyrics, instruments, rhythm, etc.) and the listener's experiences (memories, culture, etc.), so the emotional interpretation of a song varies considerably from one person to another. However, some elements specific to music (tempo, tone, sound intensity, etc.) evoke the same emotions or emotional states regardless of the listener. Moreover, it has been shown repeatedly that music has a beneficial impact on our emotions and emotional states.

Although there are therapy chatbots and therapies based on music, no known chatbot so far offers music therapy. So we created Bobby, our music-therapy chatbot.

Photo by Stefany Andrade on Unsplash

Creation of the dialogue

We based the dialogue between the conversational agent and the person on the three-phase theoretical structure of Hill (2004): exploration, insight, and action. We built the chatbot dialogue following the logic proposed by Denecke et al. (2020) for their chatbot SERMO, and selected music from the database created by Soleymani et al. (2013), which contains a list of emotionally annotated pieces. We therefore chose music whose emotional annotations have already been empirically tested and validated. This database contains 1,000 pieces of music that were not released by commercial record labels, which controls part of the inter-individual variability in the emotional interpretation of a piece, because it is unlikely that listeners have ever heard these pieces before.
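To give an idea of how such a database can be used, here is a minimal sketch of picking the piece whose annotations best match a target emotional state. The CSV column names, the annotation scale, and the nearest-neighbour strategy are assumptions made for illustration; they are not Bobby's actual implementation.

```python
import csv
import math

def pick_song(annotations_csv, target_valence, target_arousal):
    """Return the song whose mean valence/arousal annotation is closest
    to the target, using Euclidean distance in the valence-arousal plane."""
    best_song, best_dist = None, math.inf
    with open(annotations_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Column names are hypothetical; adapt them to the dataset's real header.
            v = float(row["mean_valence"])
            a = float(row["mean_arousal"])
            dist = math.hypot(v - target_valence, a - target_arousal)
            if dist < best_dist:
                best_song, best_dist = row["song_id"], dist
    return best_song

# Example: ask for a calm, pleasant piece (values on the dataset's own scale).
print(pick_song("annotations.csv", target_valence=7.0, target_arousal=3.0))
```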

Structure of the dialogue

To begin the discussion, participants are asked to write "Salut" or "Bonjour". The chatbot starts by explaining what the exercise consists of and asking participants for informed consent to collect their data. Once this consent is given, Bobby asks participants about their mood, what is causing it, and which event triggered it. He then asks questions about the thoughts, attitudes, and feelings related to that event and proposes a Self-Assessment Manikin (SAM) for participants to complete. The SAM is used to measure the degree of valence and arousal an individual feels.
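The SAM therefore yields a pair of ratings, one for valence and one for arousal, typically on a 9-point scale. The sketch below shows one possible way to turn such a pair into a coarse emotional state; the thresholds and labels are chosen purely for illustration and are not Bobby's actual rules.

```python
def sam_to_state(valence, arousal, midpoint=5):
    """Map a 9-point SAM valence/arousal pair to a coarse quadrant label.
    Thresholds and labels are illustrative, not taken from Bobby's code."""
    if valence >= midpoint and arousal >= midpoint:
        return "excited / joyful"    # pleasant, high energy
    if valence >= midpoint:
        return "calm / content"      # pleasant, low energy
    if arousal >= midpoint:
        return "stressed / angry"    # unpleasant, high energy
    return "sad / bored"             # unpleasant, low energy

print(sam_to_state(valence=3, arousal=8))  # -> "stressed / angry"
```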

BOBBY SPEAKS FRENCH!! Voilà, learning English now!

The chatbot identifies the emotion or emotional state and asks participants whether they agree with the one it attributes to them. If not, they are given the opportunity to change it. Bobby then provides information about the emotion or emotional state in question, explains how it is useful, and asks participants whether they want to strengthen or change it. Once he has gathered all this information, Bobby asks questions about each person's musical tastes so that he can propose the most suitable piece. He then suggests one or more pieces of music to listen to. As soon as the listening activity is over, participants are asked to complete another SAM and are given the opportunity to share their thoughts about the experience and whether it made a difference for them.
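Seen as a whole, the conversation described above is essentially a linear sequence of steps. A minimal sketch of that flow is given below; the state names are ours and are only meant to summarize the structure, not to reproduce Bobby's code.

```python
# Hypothetical ordering of the dialogue steps described above;
# each state corresponds to one block of questions Bobby asks.
DIALOGUE_STATES = [
    "greeting",            # user writes "Salut" or "Bonjour"
    "consent",             # explain the exercise, collect informed consent
    "mood_and_event",      # current mood and the event behind it
    "thoughts_feelings",   # thoughts, attitudes, feelings about the event
    "sam_before",          # first SAM (valence/arousal)
    "confirm_emotion",     # confirm or correct the inferred emotion
    "psychoeducation",     # explain the emotion; strengthen or change it?
    "music_preferences",   # musical tastes used to pick the piece(s)
    "listening",           # suggest and play the selected music
    "sam_after",           # second SAM
    "feedback",            # free-text impressions of the experience
]

def next_state(current):
    """Return the state that follows `current`, or None at the end."""
    i = DIALOGUE_STATES.index(current)
    return DIALOGUE_STATES[i + 1] if i + 1 < len(DIALOGUE_STATES) else None
```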

What can we conclude and what are the next steps?

For the moment, the chatbot is in its early days. Preliminary tests are being conducted to assess its effect on stabilizing as well as changing the emotional state of the user. The tests will be done with people of all ages to see whether the effect is general or whether it depends on age. This first prototype will be the basis of a research study, and you can test it at this link:

The analysis and results will be shared in a follow-up Medium article once the study is over.

Music is an important therapy, as Sen. Harry Reid put it:

“Simply put, music can heal people.”

Thanks

  • To Christelle Rossier who programmed Bobby,
  • To Karl Daher who supervised all the work.

References

Bell, S., Wood, C., & Sarkar, A. (2019). Perceptions of chatbots in therapy. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–6). ACM. https://doi.org/10.1145/3290607.3313072

Denecke, K., Vaaheesan, S., & Arulnathan, A. (2020). A mental health chatbot for regulating emotions (SERMO): Concept and usability test. IEEE Transactions on Emerging Topics in Computing, 1–12. https://doi.org/10.1109/TETC.2020.2974478

Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19. https://doi.org/10.2196/mental.7785

Hill, C. E. (2004). Dream work in therapy: Facilitating exploration, insight, and action (p. xv, 305). American Psychological Association. https://doi.org/10.1037/10624-000

Panksepp, J., & Bernatzky, G. (2002). Emotional sounds and the brain: The neuro-affective foundations of musical appreciation. Behavioural Processes, 60(2), 133–155. https://doi.org/10.1016/S0376-6357(02)00080-3

Quintin, E.-M. (2019). Music-evoked reward and emotion: Relative strengths and response to intervention of people with ASD. Frontiers in Neural Circuits, 13, 1–8. https://doi.org/10.3389/fncir.2019.00049

Schaefer, H.-E. (2017). Music-evoked emotions — current studies. Frontiers in Neuroscience, 11, 1–27. https://doi.org/10.3389/fnins.2017.00600

Sharma, B., Puri, H., & Rawat, D. (2018). Digital psychiatry — curbing depression using therapy chatbot and depression analysis. In 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT) (pp. 627–631). IEEE. https://doi.org/10.1109/ICICCT.2018.8472986

Soleymani, M., Caro, M. N., Schmidt, E. M., Sha, C.-Y., & Yang, Y.-H. (2013). 1000 songs for emotional analysis of music. Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia — CrowdMM ’13, 1–6. https://doi.org/10.1145/2506364.2506365
