Conversational AI — What Next?

Vishnu Priya Vangipuram
Published in Whispering Wasps · Apr 21, 2020
Image Source: https://hrdailyadvisor.blr.com/2019/05/30/conversational-ai-vs-recruiter-chatbots-whats-the-difference/

I have been actively working in the conversational AI space for quite some time now. Seeing the slew of advances and research in the field, with conversational AI set to embark on a transformational journey, I feel motivated to add value by sharing my experiences and giving some thought to what is waiting around the corner.

Below, I outline some of the challenges that not just I, but any NLP/conversational AI engineer or organization would face today. These could well become the tenets of what the future holds for chatbots and virtual assistants:

Intent Boundaries:

I faced this when I was working on an insurance bot.

A new chatbot development always starts with a "known" set of intents and optimal training for them. But once the bot goes live, it turns out that customers converse with it outside of these "known" intent boundaries. This is not an "anomaly", but a valid spillover of user needs.

Today, this aberration is handled through incremental training, but it still means that a few users do get impacted in the meantime.

Though pre-trained models help here a bit, as an AI Engineer, I am of the opinion that there is a strong need for a reliable, robust, real-time intent "discovery" mechanism.
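To make that idea a little more concrete, here is a minimal sketch (in Python, using scikit-learn, with hypothetical utterances and cluster counts) of how out-of-scope messages logged from a live bot could be grouped to surface candidate new intents. It is only an illustration of the "discovery" idea, not a production mechanism.

```python
# Sketch: cluster out-of-scope user messages to surface candidate new intents.
# Assumes we have logged utterances the NLU model rejected with low confidence.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical sample of utterances that fell outside the "known" intents.
fallback_utterances = [
    "can I pause my policy for three months",
    "how do I pause premium payments temporarily",
    "want to add my wife to the policy",
    "add spouse as nominee",
    "pause my policy till I get a new job",
]

# Represent each utterance as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(fallback_utterances)

# Group similar utterances; each cluster is a candidate "undiscovered" intent.
kmeans = KMeans(n_clusters=2, random_state=42, n_init=10)
labels = kmeans.fit_predict(X)

for cluster_id in sorted(set(labels)):
    print(f"Candidate intent #{cluster_id}:")
    for utterance, label in zip(fallback_utterances, labels):
        if label == cluster_id:
            print("  -", utterance)
```

A reviewer would still have to name and validate each candidate cluster, but even this crude loop closes some of the gap between "unknown utterance" and "new intent".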

Language Transitions:

Today’s conversations are complex in themselves. In India (where I am from) specifically, multilingual conversations add to that complexity.

E.g., when I was creating a bot that was meant to understand English, Hindi and Hinglish alike, the AI modelling involved was sophisticated, to say the least.
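To give a flavour of even the very first step, here is a rough, assumption-laden sketch of routing an utterance to a language-specific NLU pipeline: Devanagari script is treated as Hindi, a small (entirely hypothetical) keyword list hints at Hinglish, and everything else defaults to English. Real Hinglish detection is far harder than this.

```python
# Sketch: naive language routing for an English/Hindi/Hinglish bot.
# The keyword list and thresholds here are illustrative assumptions only.

HINGLISH_HINTS = {"hai", "kya", "mera", "nahi", "karna", "chahiye"}  # hypothetical

def detect_language(utterance: str) -> str:
    tokens = utterance.lower().split()
    # Devanagari characters strongly suggest Hindi.
    if any("\u0900" <= ch <= "\u097F" for ch in utterance):
        return "hindi"
    # Romanised Hindi words mixed into the message suggest Hinglish.
    if sum(tok in HINGLISH_HINTS for tok in tokens) >= 2:
        return "hinglish"
    return "english"

def route_to_nlu(utterance: str) -> str:
    language = detect_language(utterance)
    # In a real bot, each branch would call a language-specific NLU model.
    return f"routing '{utterance}' to the {language} NLU pipeline"

print(route_to_nlu("what is my premium due"))
print(route_to_nlu("mera premium kab due hai"))
print(route_to_nlu("मेरा प्रीमियम कब देना है"))
```

In practice, code-mixed sentences defeat this kind of routing, which is exactly why the problem needs more research than heuristics.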

Since this is a very common and compelling challenge to solve, I wish there could be more research in this realm.

Inadvertent Bias:

As an AI engineer, I am responsible for training the conversational assistant, and it is only natural that my personal perceptions influence the training data.

The detrimental impact of this is often felt only when the bot reaches the real customer, which is late in the pipeline.

There is a need for a regulatory software arbitrator that warns about and mitigates bias at the nascent training stage.
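As a toy illustration of the very first check such an arbitrator might run, here is a small sketch (with hypothetical intent names and an arbitrary threshold) that flags intents whose training examples are heavily over- or under-represented. Real bias detection would of course need to look at far more than example counts.

```python
# Sketch: flag skewed training data before it reaches the model.
# Intent names and the imbalance threshold are illustrative assumptions.
from collections import Counter

training_examples = [
    ("check_premium", "what is my premium"),
    ("check_premium", "how much do I owe"),
    ("check_premium", "premium amount please"),
    ("check_premium", "tell me my due amount"),
    ("find_branch", "nearest branch"),
]

counts = Counter(intent for intent, _ in training_examples)
average = sum(counts.values()) / len(counts)

for intent, count in counts.items():
    # Warn when an intent has less than half, or more than double, the average.
    if count < 0.5 * average or count > 2.0 * average:
        print(f"WARNING: '{intent}' has {count} examples vs. an average of {average:.1f}")
```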

Context Switching:

Even as I pen down these thoughts, I face the interesting challenge of users freely switching between "contexts" in a conversation with a bot.

E.g., a user may want to know his premium due, but may suddenly also want to see if he can reach an insurance branch close by. He may then come back and ask to pay his dues online.

This kind of context switching is very common in conversations, and though there are solutions out there that address it (e.g., the TED policy in Rasa), there is a long way to go, from a conversational AI perspective, to make it seamless for the end user.
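To make the pattern tangible, here is a deliberately naive sketch that tracks switched topics with a stack so the bot can return to the interrupted one. This is not how Rasa's TED policy works (TED learns the behaviour from dialogue data); it is just a hand-rolled illustration of the problem, with hypothetical intent names.

```python
# Sketch: a naive context stack for handling topic switches in a conversation.
# Topic names and the resume behaviour are illustrative assumptions.
from typing import Optional

class ContextStack:
    def __init__(self) -> None:
        self._stack = []

    def switch_to(self, topic: str) -> None:
        # A new topic interrupts the current one; remember where we were.
        self._stack.append(topic)

    def finish_current(self) -> Optional[str]:
        # The current topic is done; resume the interrupted one, if any.
        self._stack.pop()
        return self._stack[-1] if self._stack else None

stack = ContextStack()
stack.switch_to("premium_due")      # "what is my premium due?"
stack.switch_to("find_branch")      # "is there a branch near me?"
resumed = stack.finish_current()    # branch question answered
print("resuming topic:", resumed)   # -> premium_due, ready for "pay my dues online"
```

A hard-coded stack breaks down quickly in real conversations, which is precisely why learned policies exist and why there is still room to improve them.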

I have always wanted to take baby steps towards solving the above and am still in the process of contributing and adding value to these problems.

I hope I was able to share my thoughts effectively, and I am excited and looking forward to witnessing the transformational journey of conversational AI.

“Next” is future reckoning. Hail bots! ☺
