Making A Smarter Bot

Sebastian Gambolati
Published in Melontech · Mar 25, 2019 · 8 min read
Image from Nicolás Prior.

Many people consider chatbots to be the fourth application platform. In the beginning there were Desktop Applications, where people interacted with heavy mainframes or on-premises databases. Web 2.0 changed the development paradigm: everything turned into a Web Application, so users could access applications without installing anything more than a browser. Then, with the evolution of cell phones and tablets, came the Mobile Application paradigm, letting people reach their applications from anywhere. Today we are developing bots that can understand people on their own terms and learn along the way.

Bots can automate tasks, perform complex calculations, or suggest which movie to watch based on our viewing history or on previous interactions with similar users. Bots can do more than chat: they can expand their capabilities to understand our voice and feelings and act on that.

The Microsoft Bot Framework provides just what you need to build and connect intelligent bots that naturally interact with your users on a website, app, Cortana, Microsoft Teams, Skype, Slack, Facebook Messenger, and more.

Some common classes Bot Framework provides are:

  • Dialog: A dialog is a concept similar to a form. We can reuse dialogs across our project or across bots. For example, a dialog can ask the user for their full name, date of birth, and place of residence, validate the entered data, and return an object with the result when it finishes.
  • WaterfallDialog: A dialog implemented as a sequence of steps. The first step prompts the user for one piece of information, the second step keeps it and asks for the next one, and so on. When the user finishes all the steps, the dialog returns to the calling point (see the sketch after this list).
  • RichCards: As the name suggests, we can use these classes to send rich media to the user, for example Thumbnail, Hero, Carousel, Video, and Audio cards.
  • Adaptive Cards: An external library used to represent complex information cards, for example a receipt or a multi-column card.
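
To make the Dialog and WaterfallDialog ideas concrete, here is a minimal sketch using the Bot Framework SDK v4 Node.js packages (botbuilder-dialogs). The dialog ids and the profile fields are our own illustrative choices, not something the framework dictates.

```typescript
// A minimal profile-collection waterfall dialog (illustrative names).
import {
  ComponentDialog, DialogTurnResult, TextPrompt,
  WaterfallDialog, WaterfallStepContext,
} from 'botbuilder-dialogs';

export class ProfileDialog extends ComponentDialog {
  constructor() {
    super('profileDialog');
    this.addDialog(new TextPrompt('textPrompt'));
    this.addDialog(new WaterfallDialog('profileWaterfall', [
      this.nameStep.bind(this),
      this.birthDateStep.bind(this),
      this.cityStep.bind(this),
      this.finalStep.bind(this),
    ]));
    this.initialDialogId = 'profileWaterfall';
  }

  private async nameStep(step: WaterfallStepContext<any>): Promise<DialogTurnResult> {
    return step.prompt('textPrompt', 'What is your full name?');
  }

  private async birthDateStep(step: WaterfallStepContext<any>): Promise<DialogTurnResult> {
    step.values['name'] = step.result;        // keep the previous answer
    return step.prompt('textPrompt', 'What is your date of birth?');
  }

  private async cityStep(step: WaterfallStepContext<any>): Promise<DialogTurnResult> {
    step.values['birthDate'] = step.result;
    return step.prompt('textPrompt', 'Where do you live?');
  }

  private async finalStep(step: WaterfallStepContext<any>): Promise<DialogTurnResult> {
    step.values['city'] = step.result;
    // Hand the collected object back to whoever started this dialog.
    return step.endDialog(step.values);
  }
}
```

The object passed to endDialog is what the calling dialog receives as its step result, which is how a reusable dialog hands its answer back to the caller.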

Build Our Bot

We develop bots to help people before and after a sale, for example by providing more accurate suggestions based on that person's previous purchases or interests. Let's suppose a pet shop that already has a website selling items such as pet supplies, food, toys, and treats. It also has a section with news and advice articles, as well as a section where a user can make an appointment with the veterinarian and check the status of their pet's lab studies.

Planning

To get started on the right path, we might map our bot to the same sections as the website and diagram how the user will navigate it. Let's say we start with a welcome message showing the user a menu prompt to choose what they want to do:

  • Search products and categories, place an order, and pay for it.
  • Search for an article.
  • Check pet studies status (3rd-party web app).
  • Check veterinarian appointments (3rd-party web service that requires authentication).
Photo by rawpixel on Unsplash

Developing

Each bullet item above becomes a Dialog in our bot. We can use waterfall dialogs to keep our code reusable and testable, and to guide users through a series of tasks, prompting them to enter information or make decisions.

Product Dialog

This dialog might ask the user whether they want to search for a product or browse categories. To keep the experience easy to operate, we should present results in pages of no more than 10 items, shown as a carousel. Each item can be an Adaptive Card showing product information, photos, the price, and a button to add it to the cart.
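
Here is a hedged sketch of what that carousel step might look like, assuming the botbuilder CardFactory and MessageFactory helpers; the Product shape and the addToCart action payload are illustrative, not part of the SDK.

```typescript
import { Attachment, CardFactory, MessageFactory, TurnContext } from 'botbuilder';

// Illustrative product shape; in practice this comes from the commerce backend.
interface Product { id: string; name: string; price: number; imageUrl: string; }

function productCard(product: Product): Attachment {
  return CardFactory.adaptiveCard({
    $schema: 'http://adaptivecards.io/schemas/adaptive-card.json',
    type: 'AdaptiveCard',
    version: '1.0',
    body: [
      { type: 'TextBlock', text: product.name, weight: 'Bolder' },
      { type: 'Image', url: product.imageUrl },
      { type: 'TextBlock', text: `$${product.price.toFixed(2)}` },
    ],
    actions: [
      // The submit payload comes back to the bot on a later turn.
      { type: 'Action.Submit', title: 'Add to cart', data: { action: 'addToCart', productId: product.id } },
    ],
  });
}

// Send at most 10 products per page as a carousel.
export async function sendProductPage(context: TurnContext, products: Product[]): Promise<void> {
  const cards = products.slice(0, 10).map(productCard);
  await context.sendActivity(MessageFactory.carousel(cards, 'Here is what I found:'));
}
```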

Once the user wants to place an order, we can ask them to log in or create an account using a SignInCard. The user is redirected to a web page in their browser to enter their credentials for our commerce site, then comes back to the bot with an authentication token we can use to interact with the commerce site. We then store that token in the user state so the user doesn't have to enter their credentials every time we need to interact with the external service.
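
A minimal sketch of that flow, assuming a hypothetical login URL on our commerce site and a token handed back to the bot afterwards; for production bots the SDK also offers an OAuthPrompt, which is the more common route.

```typescript
import { CardFactory, MessageFactory, StatePropertyAccessor, TurnContext, UserState } from 'botbuilder';

// Ask the user to sign in via a SignInCard pointing at our (hypothetical) login page.
export async function promptSignIn(context: TurnContext): Promise<void> {
  const card = CardFactory.signinCard(
    'Sign in to place your order',
    'https://shop.example.com/bot-login',   // hypothetical commerce-site login URL
    'We need your account to create the order.',
  );
  await context.sendActivity(MessageFactory.attachment(card));
}

// Persist the returned token in user state so we do not ask again on every turn.
export async function saveToken(
  tokenAccessor: StatePropertyAccessor<string>,
  userState: UserState,
  context: TurnContext,
  token: string,
): Promise<void> {
  await tokenAccessor.set(context, token);
  await userState.saveChanges(context);
}
```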

Articles Dialog

Similar to the Product dialog, we might ask the user to search for a specific article or browse a category. We can present the articles using a HeroCard with a link to the full article on our site.
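
For instance, a sketch of building one of those article cards with CardFactory.heroCard; the article object's fields are assumptions for the example.

```typescript
import { CardFactory, MessageFactory, TurnContext } from 'botbuilder';

// Send a single article as a HeroCard with an "open URL" button back to our site.
export async function sendArticle(
  context: TurnContext,
  article: { title: string; summary: string; imageUrl: string; url: string },
): Promise<void> {
  const card = CardFactory.heroCard(
    article.title,
    article.summary,
    [article.imageUrl],
    [{ type: 'openUrl', title: 'Read the full article', value: article.url }],
  );
  await context.sendActivity(MessageFactory.attachment(card));
}
```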

Check Pet Studies Status

Once again, we may redirect our user to a web app, but this time we can't control when they come back to our bot. We can assume they won't continue using the bot and end our dialog stack; when the user does return, the bot will simply start again from the main menu.

Appointment With The Veterinarian

Let’s suppose we already have an external service that lets us interact with the veterinarians’ schedules. We can query that 3rd-party service to find out when our user has an appointment and ask them to confirm or decline it, to avoid downtime for our doctors. To do that, we must ask our users to log in to that service. Whether the external site uses a Single Sign-On solution or a custom login mechanism, we have to send the user there to authenticate. Once authenticated, we can retrieve all of their appointments and store them locally to remind them when the next one is.
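
A sketch of that query, assuming a hypothetical scheduler endpoint and response shape; the token is the one we stored in user state after authentication, and global fetch (Node 18+) is assumed.

```typescript
// Illustrative appointment shape returned by the hypothetical scheduler service.
interface Appointment { id: string; doctor: string; when: string; }

export async function fetchAppointments(token: string): Promise<Appointment[]> {
  // Hypothetical 3rd-party scheduler endpoint; authenticated with the stored token.
  const response = await fetch('https://vet-scheduler.example.com/api/appointments', {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`Scheduler service returned ${response.status}`);
  }
  return (await response.json()) as Appointment[];
}
```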

Testing

To test our bot while we are developing it, we don’t need to deploy it to an external server. We can use the Bot Framework Emulator to test and debug locally whenever necessary. The emulator can connect to bots running locally on our machine or to bots running remotely through a tunnel.
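
For the emulator to connect, the bot just needs to be listening for activities over HTTP. This is roughly the standard hosting boilerplate from the SDK's Node.js samples (an echo bot stands in for our pet-shop bot here); with no app id or password configured, the emulator can talk to it at http://localhost:3978/api/messages.

```typescript
import * as restify from 'restify';
import { ActivityHandler, BotFrameworkAdapter } from 'botbuilder';

// A stand-in bot that simply echoes the user's message.
class EchoBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context, next) => {
      await context.sendActivity(`You said: ${context.activity.text}`);
      await next();
    });
  }
}

// Leaving appId/appPassword empty lets the local emulator connect without credentials.
const adapter = new BotFrameworkAdapter({
  appId: process.env.MicrosoftAppId,
  appPassword: process.env.MicrosoftAppPassword,
});
const bot = new EchoBot();

const port = Number(process.env.PORT) || 3978;
const server = restify.createServer();
server.listen(port, () => console.log(`Bot listening on ${server.url}`));

// Route incoming activities from the channel (or the emulator) to the bot.
server.post('/api/messages', (req, res) => {
  adapter.processActivity(req, res, async (context) => bot.run(context));
});
```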

Publishing

When our bot is done, we can deploy it to our favorite web hosting environment. After that, we should connect it to each channel where we want to be present for our users. We can register it with Facebook Messenger, Cortana, and Skype to start with a wide audience and keep adding any channel we feel we need to cover.

Johnny 5 by Lauren Manuel Garcia Carro

Make Our Bot Smarter

We can improve our users’ experience by interpreting their needs. One way to do this is to replace the old menu flow with a single open question that prompts the user.

Natural Language Processing

Knowing what a user wants from a single sentence is not easy; there is an infinite variety of possible sentences and structures. Microsoft makes our life much easier with a rich language service called Language Understanding, or LUIS. We can use LUIS.AI to add powerful natural language processing without major changes to our bot’s code.

We can configure our bot’s dialog to send user input (referred to as an “utterance”) to a LUIS endpoint. LUIS analyzes the text and returns the user’s intent. For example, if a user enters “I want to see dog leashes.”, LUIS will return “Browse Products” as the user’s intent. It will also tell us that the product category entity the user wants to see is “leash” and the animal entity is “dog”. The same will occur if the user enters any of the following utterances (a sketch of calling LUIS from the bot follows the list):

  • Want to see a leash for my little dog.
  • Do you have any leash? For dogs.
  • Show me some doggy leash.
  • What are the prices for dog leashes?
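
Here is a hedged sketch of that wiring with the botbuilder-ai LuisRecognizer; the intent name "BrowseProducts" and the entity names are whatever we defined in our own LUIS app, and the placeholder credentials come from the LUIS portal.

```typescript
import { TurnContext } from 'botbuilder';
import { LuisRecognizer } from 'botbuilder-ai';

// Placeholders copied from the LUIS portal for our (hypothetical) pet-shop app.
const recognizer = new LuisRecognizer({
  applicationId: '<luis-app-id>',
  endpointKey: '<luis-endpoint-key>',
  endpoint: 'https://<your-region>.api.cognitive.microsoft.com',
});

export async function routeUtterance(context: TurnContext): Promise<void> {
  const result = await recognizer.recognize(context);
  const intent = LuisRecognizer.topIntent(result);

  if (intent === 'BrowseProducts') {
    // Entities arrive keyed by the names we defined in the LUIS app.
    const category = result.entities['productCategory']?.[0];
    const animal = result.entities['animal']?.[0];
    await context.sendActivity(`Looking for ${category ?? 'products'} for your ${animal ?? 'pet'}...`);
  } else {
    await context.sendActivity("Sorry, I didn't understand that.");
  }
}
```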

LUIS keeps learning from what users enter. If a new sentence arrives and it doesn’t know how to map the utterance to an intent or an entity value, we can label it ourselves and retrain, improving the model as our bot keeps learning.

In the image below you can see how LUIS interprets our sample sentences, extracting the intent and the entity values.

Furthermore, we might have an intent for when a user wants to make an appointment with the doctor. LUIS has excellent prebuilt recognizers for many common data types such as dates and times, places, and numbers. In this example it can use those recognizers to extract date and time values from the following sample sentences and return the user’s intent and entity values to our bot (a local sketch of the same date/time recognizers follows the list):

  • Do you have any availability next Friday?
  • I need to see a veterinarian next week after lunch.
  • Is Dr. Stephens free tomorrow at 3.15?
  • I want to cancel my appointment on 3/15 at 4.30.
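
The text recognizers that power LUIS’s prebuilt entities are also published as a standalone library, so as an alternative (or a fallback when the LUIS call is unavailable) we can resolve dates and times locally. A sketch, assuming the @microsoft/recognizers-text-suite package:

```typescript
import { Culture, recognizeDateTime } from '@microsoft/recognizers-text-suite';

// Extract the TIMEX values the recognizer resolves from a user utterance.
export function extractDateTimes(utterance: string): string[] {
  const results = recognizeDateTime(utterance, Culture.English);
  return results.flatMap((r) =>
    ((r.resolution['values'] ?? []) as Array<{ timex: string }>).map((v) => v.timex),
  );
}

// extractDateTimes('Is Dr. Stephens free tomorrow at 3:15?')
// might resolve to morning and afternoon candidates such as
// ['XXXX-XX-XXT03:15', 'XXXX-XX-XXT15:15'] anchored to tomorrow's date.
```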

Image Recognition

We can integrate different types of image recognition to learn more about what is happening with our users’ pets. We can integrate our bot with the Microsoft Custom Vision service, which lets us create a project, train a model on a set of images, and publish it so the bot can call it.

Given that veterinarians’ schedules are full, a user may not be able to get a consultation as quickly as their pet needs one. They can take a picture of the pet’s condition and we can try to identify it, speeding up the right treatment. To do this, we ask the user to attach an image in the bot, send it internally to Custom Vision, and try to identify an illness. Based on what is detected, we can suggest articles about special care or, in critical cases, make an urgent appointment with the veterinarian.
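
A hedged sketch of forwarding that attachment to a published Custom Vision classifier over its prediction REST endpoint; the region, project id, iteration name, and prediction key are placeholders you would copy from your own Custom Vision project, and some channels additionally require an auth token to download the attachment.

```typescript
import { TurnContext } from 'botbuilder';

// Shape of each prediction returned by the Custom Vision prediction endpoint.
interface Prediction { tagName: string; probability: number; }

export async function classifyPetPhoto(context: TurnContext): Promise<Prediction[]> {
  const attachment = context.activity.attachments?.[0];
  if (!attachment?.contentUrl) {
    await context.sendActivity('Please attach a photo of your pet.');
    return [];
  }

  // Download the image the channel stored for us, then forward the raw bytes.
  const image = await (await fetch(attachment.contentUrl)).arrayBuffer();

  const response = await fetch(
    // Placeholder prediction URL; copy the real one from the Custom Vision portal.
    'https://<your-region>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<project-id>/classify/iterations/<published-name>/image',
    {
      method: 'POST',
      headers: {
        'Prediction-Key': '<your-prediction-key>',
        'Content-Type': 'application/octet-stream',
      },
      body: image,
    },
  );

  const result = await response.json();
  return (result.predictions ?? []) as Prediction[];
}
```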

Image from Nicolás Prior.

Conclusion

As you can see in this article, bots can add value to our existing commerce and carry our products and services everywhere in a user-friendly manner.

We can think of simple solutions, like a level-1 call-center attendant that walks the user through standard procedures to solve their issue and, when needed, transfers the call to a human being to resolve more complex problems. Another example is appointment registration and reminders, where we speak to our bot assistant and it processes our speech and transforms it into text. Moreover, we can design a bot that talks to the user when they abandon the shopping cart at checkout, asking whether they need help or have any feedback, to try to turn that visit into a sale.

From a simple bot asking random trivia questions to a really complex one suggesting new articles to read based on what you have already read, passing through commerce bots along the way, bots can tackle almost every task.

Bots can do almost everything and can integrate smoothly with other services, joining efforts to make life easier and help our commerce grow. No matter what technology or platform they use, they are gaining ground in commerce spaces where other applications were present in the past, and they provide a rich user experience that users can afford.
