
We have seen the potential of transfer learning since early 2015. There was little doubt that a model trained on one task and then trained again on a similar task would perform better than the same model trained from scratch. NLP is one of the biggest beneficiaries of the transfer learning approach, and the recent achievements of BERT are proof.

As part of our set of transfer learning experiments, we attempted to fine-tune a model that had already been fine-tuned on a similar dataset (for intent classification).
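To make the idea concrete, here is a minimal sketch using the Hugging Face transformers library; the local checkpoint path is hypothetical and the label count matches the datasets described below, but this is not our exact configuration.

# Minimal sketch of sequential fine-tuning with Hugging Face transformers.
# "./bert-finetuned-db1" is a hypothetical checkpoint that was already
# fine-tuned on the first intent dataset; both of our datasets had 630 intents.
from transformers import AutoModelForSequenceClassification

NUM_INTENTS = 630

# Baseline: start from the generic pre-trained checkpoint.
baseline = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_INTENTS)

# Transfer setup: start from the checkpoint fine-tuned on the similar
# dataset (db1), then continue fine-tuning it on the target dataset (db2).
transfer = AutoModelForSequenceClassification.from_pretrained(
    "./bert-finetuned-db1", num_labels=NUM_INTENTS)

Both models would then be trained on db2 with identical hyperparameters, so the only difference between the runs is the starting weights.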

Dataset

We were given two domain-based datasets for intent classification: db1 and db2. Each of them had 630 intents. db2 contained 2,136 samples, 40% of which were held out for testing. …
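As a reference point, a hold-out split like that can be reproduced with scikit-learn’s train_test_split; the two utterances below are toy stand-ins for the real samples.

from sklearn.model_selection import train_test_split

# Toy stand-ins for the real db2 utterances and their intent ids.
texts = ["where is my order", "cancel my subscription"]
labels = [0, 1]

# Hold out 40% of the samples for testing, as in our setup.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.4, random_state=42)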



It’s been a while since our last classification model comparison post. About a year ago, Google published the article “Understanding searches better than ever before” and positioned BERT as one of the most important updates to its search algorithm in recent years. BERT is a language representation model with impressive accuracy on many NLP tasks: if you understand better what people are asking, you can give better answers, and Google says 15% of its queries have never been seen before. Since then, BERT and BERT-like models (Transformers) have gained huge popularity in the world of NLP research.

And of course, our team could not pass by this novelty. Since intent classification has been the most important AI model in our domain-based customer support automation project, we keep improving it through hundreds if not thousands of experiments. Here are some of them we would like to…



“AI”, “machine learning”, and “bots” have been among the most fashionable words in the technology world for the last five years. Before, when people heard the word “bot”, they usually thought of a robot or an automated game player. Today, people think of built-in extensions for Facebook, Telegram, Skype, or Slack messengers that can perform a limited set of actions, like retrieving results on a button click. This way of thinking makes bots seem simplistic, or at least not very sophisticated. Maybe this is why big companies like Interactions prefer to call their automated programs IVAs (Intelligent Virtual Assistants) instead of just bots. Personally, I don’t see any difference in what we call them, bots or IVAs. What matters more is that their capabilities are limited only by our imagination. …

About

AILabs

Conversational Commerce and Business Intelligence Company