15 Tips to Create and Train a Brilliant Conversational AI

Lakshmi Prakash
Design and Development
8 min read · Sep 28, 2022

Conversational AI is trending across many industries and sectors now, ranging from classic use cases such as IT service desks (ITSD) to innovative new projects. As we've seen before, there are many benefits to using conversational AI. Are you working on one? Do you have a team of skilled people set up already, or are you just starting to work on an idea?

Either way, here are 15 tips based on my experience that you can keep in mind while working on your conversational AI project.

15 Tips to Train Your Conversational AI

Decide how you want your conversational assistant to be: Do you want a simple bot, like a typical chatbot, or do you intend for the user and the bot to connect with each other, with your assistant able to pick up emotional cues, understand them, and respond intelligently? A chatbot is a lot easier to create; an intelligent AI assistant would require a lot more work. The decision is yours.

Define your bot’s personality: A simple chatbot doesn’t require a “personality”. If all your chatbot is going to do is greet the user, offer a list of options to choose from, respond based on the option picked, and quickly escalate the call or close the conversation, it’s not really intelligent. Still, this could work for clients and customers where it’s too often the same set of issues, and for each of those issues the answer is going to be the same. But if you intend to create an intelligent assistant, you need to think about how you want your assistant to behave.

Design the ideal user interface: A clumsy user interface can be highly irritating. Even if the bot is trained to have brilliant conversations, a poorly designed UI could leave a very bad impression. Ideally, you should get the UI designer, the conversation designer, and the project manager or tech lead to discuss all the different features you could use and then pick the most useful ones.

Why involve a conversation designer in user interface design? Because the conversation designer can tell you the different possibilities in your various use cases and which features would be required where, and that is not something a UI designer is necessarily trained in.

Pick the right face, the right voice, and the right font(s): This is something that some clients take pretty seriously. If the project involves daily or frequent conversations and is quite personal, I’d suggest adding as many customization options as possible. If you can afford it, allow multiple options for the user to pick and choose from: while the personality and the machine learning models could all be kept the same, you might want to consider offering both a male and a female option for the face and voice. You could let each user pick the gender, the skin color, and the font they want, from a few options.

For someone battling cancer, say, or a recovering drug addict, conversations with a support assistant could be very personal. Let your user get the best out of their experience; this helps build trust and rapport.

Decide on the language(s) and dialect(s): Will your assistant converse in only one language or in more than one? Do you want your bot to speak British English or American English? You don’t want one conversation designer and writer working in British English while another set of people works in American English. Make these choices clear to the entire team at the beginning of the project so that there is consistency.

How would your assistant respond to abusive language? Most people working on conversational AIs skip this part because they don’t even consider the possibility, but whether you like it or not, many users will try to see how a bot responds when they say something offensive or even perverted. Sigh! This involves both your bot’s intelligence and your bot’s personality. One, the assistant must be able to recognize abusive language. Two, once abusive language is recognized, how do you want your bot to respond? Empathize and carry on the conversation? Ignore it and pretend no abusive language was used? Politely inform the user that abusive language will not be tolerated? Treat it as a sign of growing frustration and offer help?
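The two steps above, recognition and a configurable response policy, can be sketched roughly like this. The word list and canned responses are placeholders of my own; a real assistant would use a trained toxicity classifier rather than keyword matching.

```python
# Minimal sketch of an abusive-language policy layer.
# ABUSIVE_TERMS and POLICY_RESPONSES are illustrative placeholders only.
from typing import Optional

ABUSIVE_TERMS = {"idiot", "stupid", "useless"}

POLICY_RESPONSES = {
    "empathize": "I can hear that you're frustrated. Let's sort this out together.",
    "warn": "I'd like to help, but I can't continue if that language is used.",
    "offer_help": "Would you like me to connect you to a human agent?",
}

def contains_abuse(utterance: str) -> bool:
    """Naive token check; production systems use toxicity classifiers."""
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    return bool(tokens & ABUSIVE_TERMS)

def respond_to_abuse(utterance: str, policy: str = "empathize") -> Optional[str]:
    """Return the chosen policy's response if abuse is detected, else None."""
    if contains_abuse(utterance):
        return POLICY_RESPONSES[policy]
    return None  # no abuse detected; continue the normal flow
```

The point of keeping the policy as a parameter is that the business, not the developer, decides whether the bot empathizes, warns, or escalates.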

Do you want your assistant to be able to read a user’s personality? This is an additional layer of intelligence and would require a lot of data analysis, plus the help of experts to design assessments you could automate.

Train your AI to understand not just phrases frequently used in your field but also commonly used phrases: This is another frequent mistake. The technical and client-facing teams are usually most concerned with how well the AI picks up the words, phrases, names, values, and language that appear in its use cases. Since they are mostly preoccupied with the “use cases” (and that’s their job; that’s what they should be doing), you will need a team of language experts to train your assistant to understand other kinds of utterances, too. Again, what is in scope and what is out of scope? These are discussions you need to have early on, and you can always expand the scope later.

You don’t want an AI that can easily handle complicated medical health concerns but cannot understand “tell me something else”, “this is boring”, “I don’t believe that”, “I am not convinced”, and so on. One more reason you’d always want someone highly skilled in language and communication involved in your AI training!
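The idea above, covering domain utterances and everyday conversational ones, with an explicit out-of-scope fallback, might look like this in miniature. All the intent names and trigger phrases here are my own illustrations, not from any particular framework.

```python
# Sketch of intent routing that covers both domain and general utterances.
# Intent names and phrases are illustrative placeholders.

DOMAIN_INTENTS = {
    "book_appointment": ["book an appointment", "see a doctor"],
    "refill_prescription": ["refill my prescription"],
}

GENERAL_INTENTS = {
    "change_topic": ["tell me something else", "this is boring"],
    "express_doubt": ["i don't believe that", "i am not convinced"],
}

def classify(utterance: str) -> str:
    """Match domain intents first, then general conversational intents."""
    text = utterance.lower()
    for intent_set in (DOMAIN_INTENTS, GENERAL_INTENTS):
        for name, phrases in intent_set.items():
            if any(phrase in text for phrase in phrases):
                return name
    return "out_of_scope"  # fall back: ask a clarifying question or hand off
```

Keeping the general intents in a separate table makes the in-scope/out-of-scope discussion concrete: expanding scope later is a matter of adding entries, not redesigning the bot.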

How would you ensure that your assistant understands context? This is a major measure of intelligence in a conversation. We humans understand context naturally, but training a bot to do that requires a lot of work. Again, this is where the experts come in, and it is one of the many reasons you should not let the software developers also handle the language part of the project. Think about it. Explain your ideas clearly to the team.
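To make the idea of context concrete, here is a toy tracker that remembers the previous intent and slots so a follow-up like “what about tomorrow?” can be resolved without the user repeating everything. The structure is an assumption for illustration, not any real framework’s API.

```python
# Toy dialogue-context tracker: follow-up turns inherit the previous
# intent and slots, overwriting only what changed.

class DialogueContext:
    def __init__(self):
        self.last_intent = None
        self.slots = {}

    def update(self, intent: str, slots: dict) -> dict:
        """Resolve one turn, inheriting intent and slots on follow-ups."""
        if intent == "follow_up" and self.last_intent:
            intent = self.last_intent        # reuse the previous intent
            slots = {**self.slots, **slots}  # keep old slots, apply new ones
        self.last_intent, self.slots = intent, slots
        return {"intent": intent, "slots": slots}
```

For example, after `ctx.update("check_weather", {"city": "Paris", "day": "today"})`, a follow-up turn `ctx.update("follow_up", {"day": "tomorrow"})` resolves to the weather intent with the city carried over and only the day changed.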

How would you deal with issues that arise in human-to-human virtual conversations? Virtual conversations can be tricky. For example, when the user asks for something and does not understand the assistant’s response, would you show videos, provide links, or, if it is voice based, how would you explain? Make a list of all the scenarios that could frequently happen, and try to give the user the best experience using all the skills of your virtual assistant and your software’s features.

Between collecting data for analysis and giving the user privacy, where would you draw the line? Think it through with your team of experts and then decide. A good option is letting the user make this choice for themselves. Also, be convincing when you need mandatory data. Where to draw the line won’t be an easy decision.

Check several times to ensure that your conversational assistant does not discriminate: In this day and age, people, especially the younger ones, are highly aware of their rights, of equality and inclusion, and of standing up for themselves. You don’t want to take this lightly if you’re planning on launching your product globally, only to then hear several negative reviews saying that your product discriminates. This is a choice, again, but choose wisely!

When your assistant asks questions, how hard do you want to push? Is that piece of data mandatory at that point? Are there workarounds if the user does not have the data or does not want to share it? Forcing a user can leave a bad impression; the user might go looking for other options and might not want to interact with your bot anymore. But in some cases, some information could be key. Think and decide. Don’t let someone unaware of ethics and user experience make these decisions.
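One way to encode that decision is to mark each question as mandatory or optional and give optional ones a “skip” workaround so they never block the conversation. The slot names and prompts below are hypothetical.

```python
# Sketch of mandatory vs. optional questions with a "skip" escape hatch.
# Slot names and prompts are hypothetical examples.

QUESTIONS = [
    {"slot": "issue_summary", "prompt": "What can I help you with?", "required": True},
    {"slot": "order_number", "prompt": "Do you have your order number? (You can say 'skip'.)", "required": False},
]

def record_answer(answers: dict, question: dict, reply: str) -> bool:
    """Store a reply; allow 'skip' only on optional questions.

    Returns False when the question must be re-asked.
    """
    if reply.strip().lower() == "skip":
        if question["required"]:
            return False  # this data is key to the flow; re-ask politely
        answers[question["slot"]] = None  # proceed without the optional data
        return True
    answers[question["slot"]] = reply.strip()
    return True
```

The `required` flag is exactly the kind of setting someone who understands ethics and user experience, not just the code, should sign off on.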

Ensure that your testers are knowledgeable and well-trained: There is no point in hiring a bunch of testers if they don’t know what to look for. It would be bad if they can spot only technical bugs, and not even all of those. Many more things can go wrong in conversational AI. Make sure that your assistant’s language is grammatically correct. (Nobody cares when a caller or user makes grammatical mistakes, but silly grammatical mistakes from your AI tell your audience that you haven’t put in enough work.) Similarly, check whether the right information is shared, how reliable that information is, and so on.
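A test team can automate the most mechanical part of this before human review. Here is an illustrative pre-check over the bot’s canned responses; the rules are examples only, not a complete QA suite.

```python
# Illustrative surface-level lint for bot responses: flags trivial
# language issues so human testers can focus on meaning and accuracy.

def lint_response(text: str) -> list:
    """Return a list of surface-level issues found in a bot response."""
    issues = []
    if "  " in text:
        issues.append("double space")
    if text and text[0].islower():
        issues.append("starts lowercase")
    if text and text.rstrip()[-1] not in ".!?":
        issues.append("missing end punctuation")
    return issues
```

Checks like these catch the “silly mistakes” automatically; verifying that the information itself is right and reliable still needs knowledgeable human testers.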

Set up a data analysis team to drive improvement: Many clients have no idea what they want or how they want improvement to happen when they get into conversational AI. They either don’t provide enough data or don’t do any analysis on their side. That’s understandable; AI and data analysis are still picking up pace, so not everyone is familiar with them. But you don’t want to be one of those, do you? Don’t you want your product to be upgraded and improved? That’s what any good business would do; continuous improvement is a must! And for that you’ll need data, and data scientists to analyse the data and give you useful information. What kinds of conversations do you deal with frequently? What features are missing? What frustrates users? Can the UI be made better?
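Even the questions above can start from a very small analysis of conversation logs: which intents come up most, and how often the bot falls back. The log format here is invented for illustration.

```python
# Small sketch of improvement-driving log analysis: count frequent
# conversation types and measure the fallback rate. Log format is invented.
from collections import Counter

logs = [
    {"intent": "reset_password", "fallback": False},
    {"intent": "reset_password", "fallback": False},
    {"intent": "billing_question", "fallback": True},
    {"intent": "out_of_scope", "fallback": True},
]

intent_counts = Counter(entry["intent"] for entry in logs)
fallback_rate = sum(e["fallback"] for e in logs) / len(logs)

print(intent_counts.most_common(1))        # most frequent conversation type
print(f"fallback rate: {fallback_rate:.0%}")
```

A rising fallback rate or a frequent intent the bot handles poorly tells you exactly where the next round of training effort should go.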

Hope you found these tips to be useful!

Follow for more posts on AI, machine learning, data science and related topics.


Lakshmi Prakash
Design and Development

A conversation designer and writer interested in technology, mental health, gender equality, behavioral sciences, and more.