
The New Kid on the Chatbot Platform Block

An Introduction to Cisco’s MindMeld Conversational AI Platform

Chetana Didugu
Jan 26

Chances are, if you are building conversational agents (a.k.a. chatbots), you have explored platforms such as RASA, Kore.ai or Microsoft’s QnA Maker. They share very similar algorithmic components but make different technological choices, and each comes with different features, different levels of customisation, different post-deployment analytics, and so on.

And now there is a new kid on the block: MindMeld! It is built on Elasticsearch, is completely open source, and offers in-built ‘blueprints’ that correspond to various domains (food ordering, HR assistant, home assistant, etc.). You can create your chatbot from one of these predefined blueprints, or create your own blueprint by customising the template blueprint.

How MindMeld works

MindMeld has two main functional components: a blueprint and a question answerer. The question answerer is responsible for loading the required data from Elasticsearch, while the blueprint contains the NLP tasks and the input data used for training. For each user query, the system identifies the domain (which topic the query belongs to), the intent (what the user wants to do), the entities (the concrete items mentioned), and, where needed, the role: sometimes the same entity type can have different interpretations depending on where it appears, and this contextual interpretation is called a ‘role’. Classifiers are used to detect each of these. MindMeld provides sensible default classifiers, but it also supports a range of other model options.
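To picture how domain, intent, entity and role fit together, here is a sketch of the kind of structure the NLP pipeline produces for a query. This is an illustrative plain-Python mock-up, not the actual object MindMeld returns:

```python
# Illustrative sketch of what the NLP pipeline extracts from a query.
# Field names mirror the concepts above; the real MindMeld result
# object is richer than this plain dict.
query = "I'd like a veggie pizza from Firetrail"

processed = {
    "text": query,
    "domain": "ordering",        # which topic the query belongs to
    "intent": "build_order",     # what the user wants to do
    "entities": [
        # each entity has a type, and optionally a role when the same
        # type can be read differently depending on context
        {"text": "veggie pizza", "type": "dish", "role": None},
        {"text": "Firetrail", "type": "restaurant", "role": None},
    ],
}

print(processed["domain"], processed["intent"])
```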

A client script takes user input and returns the bot’s response; it is the interactive part of the library. It currently integrates with the Webex Teams platform, since MindMeld was recently acquired by Cisco (Webex is a Cisco product).

Build your own Bot!

All you need to do for this is the following:

  1. Install the mindmeld library as follows:
!pip install mindmeld

Tip: the library has many dependencies that may clash with your existing Python setup, so it is advisable to create a new virtual environment for your bot.
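Creating such an environment takes two commands (“mm-env” is just an example name):

```shell
# Create an isolated environment for the bot and activate it
python3 -m venv mm-env
. mm-env/bin/activate
# This interpreter now lives inside mm-env, so anything you
# pip install lands there and not in your system Python
python -c "import sys; print(sys.prefix)"
```

Once the environment is active, the pip install above affects only this environment.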

2. Once you have installed it, import the library into your working environment:

import mindmeld as mm

3. From here on, anything you do requires an active Elasticsearch instance. If you don’t already have the Elasticsearch Python client installed, install it like so:

!pip install elasticsearch

Keep in mind that you may face compatibility issues if you install a version of Elasticsearch older than 6.7.0.
Now connect to your Elasticsearch instance:

from elasticsearch import Elasticsearch
es = Elasticsearch()
# This creates a client pointing at localhost on port 9200
# To use a custom host and port, pass them explicitly:
# es = Elasticsearch(host=customhost, port=customport)

Note that this only creates a client object; it assumes an Elasticsearch server is already running. If the connection works, querying the server (for example with es.info()) returns cluster metadata like the following:

{
  "name" : "db0ea2282d8b",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "xL-8wGWERC2lV3hbCJoIeA",
  "version" : {
    "number" : "7.0.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "b7e28a7",
    "build_date" : "2019-04-05T22:55:32.697037Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.7.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Now you are good to go!
4. Download a blueprint: For demo purposes, I will use a predefined blueprint, the food-ordering blueprint:

mm.blueprint('food_ordering')

In your working directory, you should be able to find a folder titled food_ordering like so:

food-ordering blueprint folder in the working directory

Let us explore the folder to understand the different aspects of the blueprint:

components of food_ordering blueprint

Firstly, the blueprint contains a config file that holds the parser settings and the parameters of the domain and intent classifiers. The models are trained with k-fold cross-validation. You can see the default parameters below:

configuration (config) file
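Since the screenshot does not reproduce well here, the sketch below shows the general shape of such a classifier config. The structure follows MindMeld’s config.py convention of one CONFIG dict per classifier, but the exact values are illustrative, not the blueprint’s real defaults:

```python
# Illustrative sketch of a classifier config in a blueprint's
# config.py; the real defaults in food_ordering may differ.
DOMAIN_CLASSIFIER_CONFIG = {
    "model_type": "text",
    "model_settings": {"classifier_type": "logreg"},
    # hyperparameters are selected with k-fold cross-validation
    "param_selection": {
        "type": "k-fold",
        "k": 10,
        "grid": {"C": [0.01, 1, 100]},
    },
}

print(DOMAIN_CLASSIFIER_CONFIG["param_selection"]["type"])
```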

Your domain and entity data come in the form of train and test sets, which the classifiers use for training and validation. Here is a snapshot of what the data look like:

Data folder:

Data folder has two files: menu_items (left) and restaurants (right)

Domain data:

train and test set for domains (above domain is “greeting”)
train and test set for domains (above domain is “building order”)
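To make the screenshots concrete: MindMeld training files are plain text with one labeled query per line, and entities are annotated inline in curly braces as {text|entity_type}. The queries below are made-up examples in that style, not lines from the actual blueprint:

```python
import re

# Made-up labeled queries in MindMeld's inline annotation style
examples = [
    "hi there",                                            # greeting
    "i'd like a {veggie pizza|dish} please",               # build order
    "get me {hummus|dish} from {Saffron 685|restaurant}",  # build order
]

# Extract the annotated entities from the last query
entities = re.findall(r"\{([^|{}]+)\|([^|{}]+)\}", examples[2])
print(entities)  # [('hummus', 'dish'), ('Saffron 685', 'restaurant')]
```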

Entity data:

Entity data: the gazetteer (left) contains the known values of each entity, while the mapping (right) maps each item in the gazetteer to a name in the right format
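As a rough sketch of how the two files relate (field names here are illustrative; check the blueprint’s own files for the exact schema): the gazetteer lists raw entity values, and the mapping groups synonyms under a canonical, well-formatted name:

```python
# Illustrative shapes only -- consult the blueprint's files for the
# exact schema MindMeld uses.
gazetteer = ["veggie pizza", "hummus", "mujaddara wrap"]  # known values

mapping = {
    "entities": [
        {
            "cname": "Veggie Pizza",  # canonical display name
            "whitelist": ["veggie pizza", "vegetarian pizza"],
        },
    ]
}

# Every whitelisted synonym resolves to the canonical name
lookup = {syn: e["cname"]
          for e in mapping["entities"] for syn in e["whitelist"]}
print(lookup["veggie pizza"])  # Veggie Pizza
```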

5. Build the model: Once you are done customising the config file, you can build your models on the command line like this:

python -m food_ordering build

Alternatively, you can build it from your notebook:

from mindmeld.components import NaturalLanguageProcessor
nlp = NaturalLanguageProcessor('food_ordering')
nlp.build()

6. Converse: With the models built, your chatbot is ready to deploy. Use the following command to start interacting with your bot on the command line:

python -m food_ordering converse

Alternatively, you can do it from your notebook:

from mindmeld.components.dialogue import Conversation
conv = Conversation(nlp=nlp, app_path='food_ordering')

You could try something like:

conv.say('how much for a veggie pizza')

And that’s it! You just built and deployed your own chatbot using MindMeld. It is built entirely on Elasticsearch: your raw data, the processed objects and the classifier models are all stored there, which makes your chatbot easier to debug.

However, keep in mind that editing a predefined blueprint requires you to get your hands dirty, and I mean very dirty! Still, it is one of the easier ways to build a chatbot without having to start from scratch.

Chetana Didugu

Written by

Data Scientist, Polyglot and a Tabibito | https://www.linkedin.com/in/kavitha-chetana-didugu/ | https://github.com/kavithacd

Analytics Vidhya

Analytics Vidhya is a community of Analytics and Data Science professionals. We are building the next-gen data science ecosystem https://www.analyticsvidhya.com
