How to optimize your operations with AI & NLP?

Dominika Basaj
Tooploox AI

--

Computer vision is a darling of the machine learning community thanks to its many exciting applications. Autonomous cars that recognize what is ahead of them, checkout counters that immediately see what is in your shopping cart, or even deep fakes that swap faces in images and videos: these are the applications that capture people’s imagination today.

Text, by contrast, seems much less impressive because it lacks the visual layer. However, thanks to recent advances in the NLP field, it can be very useful, especially if you want to reduce the operational costs of your enterprise.

So, do you want to take advantage of AI & NLP to speed up your day-to-day operations? Let me show you how!

A chatbot is not always the ultimate solution.

No doubt, going through piles of documents manually, even in digital form, is tedious and time-consuming. It has happened to me many times that I was looking for a particular piece of information in an online document and ctrl+F was simply not enough. In heavily regulated industries like banking or insurance, clients as well as employees must get familiar with tons of rules and regulations. Sure, FAQs alleviate the issue somewhat, but they are not an ultimate solution. Some companies invest in chatbots that are meant to be at the forefront and serve customers with advice. Still, developing a well-working chatbot is a huge investment, and at this point you cannot expect a chatbot to actually… chat with people. Their limited usability makes chatbots an expensive and often disappointing investment that leads to an unsatisfactory user experience.

However, if you need a tool that interacts with humans, a golden mean may be a system that searches for the answer within a span of a longer text. The so-called ‘question answering’ (Q&A) systems are very powerful, and some of them have even surpassed human performance at answering questions on benchmark datasets.

Source: https://rajpurkar.github.io/SQuAD-explorer/

Access to information can be facilitated thanks to AI-powered solutions.

Q&A systems are a hot AI research topic, but they are also very useful in a business-oriented environment, especially if:

  1. you have pages of rules and regulations that are impossible to learn by heart, but you must often reference them to solve your clients’ issues.
  2. you have compiled a comprehensive knowledge base, but it is cumbersome to use because in reality it consists of various text files.
  3. you want a comprehensive client support center that answers questions without investing in a time- and money-consuming chatbot.

Let’s imagine a case where a certain company wants to make it easier for its clients to retrieve information.

The company’s main pain point is that its knowledge is scattered across various sources. The on-site search is not effective, the company’s webpage holds a lot of very detailed information, and some of it is aggregated in uploaded PDFs. Clients do not bother to look for an answer that is not easily accessible, and as a result the customer help center is overwhelmed with fairly easy questions. We may even hypothesize that restricted access to information hampers the conversion rate of new clients and that satisfaction drops drastically due to long helpdesk queues. This is where a question answering system comes into play.

Firstly, we know that the real bottleneck in developing most machine learning applications is a lack of data. Fortunately, in this case there are plenty of open-source datasets that can serve as a basis for pretraining a question answering model, meaning that we can prepare the first version of the model on more general data and retrain it on a domain-specific dataset once we collect a dedicated sample. Although this still requires us to collect some data, it is much less demanding and very effective. Most recent advances in NLP are built on the success of pretrained language models like BERT or ELMo. This kind of ‘transfer learning’, as we call it, has proven extremely successful: the model learns general rules about language that are encoded in the embedding space. The embedding space is simply a set of weights learned by the network that represents words.
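To make the idea of reusing a pretrained model more concrete, here is a minimal sketch. It assumes the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint, neither of which is prescribed here: we simply load weights pretrained on general text and read out the contextual embeddings that encode those general rules about language.

```python
# Minimal sketch of reusing a pretrained language model (assumes the Hugging Face
# `transformers` library and the public "bert-base-uncased" checkpoint).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")  # weights learned on general-domain text

inputs = tokenizer("How much does the debit card cost?", return_tensors="pt")
outputs = model(**inputs)

# One vector per token: this is the embedding space mentioned above, i.e. the set
# of weights learned by the network that represents words. Fine-tuning on your
# domain data starts from exactly these weights instead of from scratch.
print(outputs.last_hidden_state.shape)  # torch.Size([1, num_tokens, 768])
```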

Below you can see the result of such a basic Q&A system where we imitate a real-life situation of asking about the price of a specific bank product — in this case, a debit card.

Source: https://demo.allennlp.org/reading-comprehension/
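If you want to reproduce this kind of interaction yourself, a few lines of code are enough. The sketch below assumes the Hugging Face transformers question-answering pipeline and a model fine-tuned on SQuAD (the demo above runs on AllenNLP, not on this code); the context about card pricing is invented for illustration.

```python
# A sketch of the same kind of extractive Q&A query as in the demo above,
# using the `transformers` question-answering pipeline with an invented context.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "The standard debit card is free for the first year. "
    "From the second year on, the card costs 5 EUR per month unless the account "
    "receives at least 1,000 EUR in deposits each month."
)

result = qa(question="How much does the debit card cost?", context=context)
print(result["answer"])  # the extracted span, e.g. "5 EUR per month"
```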

Actually, you could deploy exactly this model as an MVP, but if you want to improve performance, you should retrain it on your own data. The data sample collected from your domain should consist of three elements:

  1. Question — questions that your users would ask, preferably reflecting the wording specific to your domain.
  2. Context — the passage of text that contains the answer to the question.
  3. Answer — preferably a span of text taken straight from the context.

These triplets serve as the input (question and context) and output (answer) of the model. In a real-world scenario, the model itself would be preceded by a search engine that narrows down the contexts in which the answer can be found.
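What such a triplet looks like in practice is easiest to show on a toy example. The sketch below follows the spirit of the SQuAD data format; the exact field names are an assumption and should be adapted to whatever training code you use.

```python
# One training triplet in the spirit of the SQuAD format (field names are an
# assumption; the question, context, and answer are invented for illustration).
context = (
    "The standard debit card is free for the first year. "
    "From the second year on, the card costs 5 EUR per month."
)
answer_text = "5 EUR per month"

example = {
    "question": "How much does the debit card cost?",
    "context": context,
    "answer": {
        "text": answer_text,
        "answer_start": context.index(answer_text),  # character offset of the span
    },
}

# Sanity check: the answer really is a span copied straight from the context.
start = example["answer"]["answer_start"]
assert context[start:start + len(answer_text)] == answer_text
```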

Question answering systems are powered by neural networks.

The architecture that underlies the presented Q&A models has the following structure:

Source: https://arxiv.org/abs/1611.01603

Generally speaking, this particular model, called BiDAF, uses word and character embeddings, meaning that it encodes both words and individual characters. These embeddings are then passed through layers of LSTMs, a type of neural network suited to sequential data such as language. In between the layers, attention is applied, which allows the model to focus on specific words and combine the query with the context. The output is a classification module that predicts, for each word, the probability of being the beginning and the end of the answer. These models are complex and require many resources to train, but the results are highly impressive.
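To make that last step less abstract, here is a simplified sketch of the span-prediction output layer, with random tensors standing in for the query-aware context representations produced by the embedding, LSTM, and attention layers. It is not the actual BiDAF implementation, just an illustration of how start and end probabilities are obtained; PyTorch is assumed.

```python
# Simplified sketch of the span-prediction output (not the full BiDAF model):
# every token in the context gets a probability of being the start and the end
# of the answer.
import torch
import torch.nn as nn

batch_size, context_len, hidden_dim = 2, 50, 128

# Stand-in for the query-aware context representations that the embedding,
# LSTM, and attention layers would normally produce.
contextual_repr = torch.randn(batch_size, context_len, hidden_dim)

start_head = nn.Linear(hidden_dim, 1)  # one logit per token: "is this the start?"
end_head = nn.Linear(hidden_dim, 1)    # one logit per token: "is this the end?"

start_probs = torch.softmax(start_head(contextual_repr).squeeze(-1), dim=-1)
end_probs = torch.softmax(end_head(contextual_repr).squeeze(-1), dim=-1)

# A naive decoding strategy: pick the most probable start and end independently.
# Real systems search over valid spans (start <= end, limited length) instead.
print(start_probs.argmax(dim=-1), end_probs.argmax(dim=-1))
```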

Thanks to the success of these extractive Q&A models, new architectures have been developed that can reason over several paragraphs or answer based on common sense. The whole idea might look complex, but it is actually very easy to take such an architecture off the shelf and adjust it to your problem.

If you feel that a question answering system might serve your needs and you would like to validate your idea, feel free to get in touch or reach out on LinkedIn!

--

Dominika Basaj
Tooploox AI

ML researcher with a focus on NLP & AI, especially explainable and robust neural network models.