I find it interesting how we shape our conversation around subtle emotional cues from our partners. If you have a complaint-department chatbot, you can assume negative sentiment. But a human can react with a lot of nuance to their particular conversation partner. Some people are very apologetic that there's a problem; others shout and demand immediate satisfaction. A good customer service rep will deal with each differently. The sentiment is negative in both cases, but there's further nuance.
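The distinction could be sketched with a toy rule-based tone check layered on top of plain sentiment. Everything here is hypothetical for illustration: the keyword lists, `classify_tone`, and the reply templates are made up, not any real chatbot API.

```python
# Hypothetical sketch: both messages carry negative sentiment, but keyword
# cues separate an apologetic speaker from a demanding one, and the reply
# template changes accordingly. Cue lists are illustrative placeholders.

APOLOGETIC_CUES = {"sorry", "apologize", "hate to bother", "unfortunately"}
DEMANDING_CUES = {"immediately", "unacceptable", "demand", "right now"}

def classify_tone(message: str) -> str:
    """Return 'apologetic', 'demanding', or 'neutral' from keyword cues."""
    text = message.lower()
    apologetic = sum(cue in text for cue in APOLOGETIC_CUES)
    demanding = sum(cue in text for cue in DEMANDING_CUES)
    if apologetic > demanding:
        return "apologetic"
    if demanding > apologetic:
        return "demanding"
    return "neutral"

def respond(message: str) -> str:
    """Pick a reply template for the same negative intent based on tone."""
    tone = classify_tone(message)
    if tone == "apologetic":
        return "No trouble at all -- let's sort this out together."
    if tone == "demanding":
        return "Understood. I'm escalating this right away."
    return "Thanks for reaching out. Can you tell me more about the issue?"

print(respond("Sorry to bother you, but my order never arrived."))
print(respond("This is unacceptable, I demand a refund immediately!"))
```

A real system would use a learned classifier rather than keyword matching, but the shape of the idea is the same: one intent, several phrasings of the response.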
Right now we’re solving the most immediate problem (the lack of chatbots), but soon I think a deeper understanding of psychology will come into play as the best way to increase the value of our chatbots, where even a single conversational path can be rephrased based on the human's use of language.
It’s very exciting to be part of the early days in this field, isn’t it?