TensorFlow helps NAVER communicate with their customers
Posted by Seongjin Shin, NAVER Clova AI Business
Chatting with our Customers:
At NAVER, we needed to provide our customers with digital services to improve their experiences. Customers often have standard questions and want to speak to a representative to find solutions quickly and accurately. The best way to handle the variety and scale of inquiries we received was through chatbots; however, we needed to ensure they could properly respond to the specific needs of each customer. Since rule-based models might not accurately reflect a customer’s needs or questions, nor evolve as inquiries change, we relied on machine learning to provide the best service. We turned to TensorFlow to create a dialogue model that would understand the intent behind customers’ needs and allow us to properly service their requests, and we launched the chatbot service on NAVER Cloud Platform (NCP). Our chatbot service provides a fast and accurate dialogue model linked with various messaging channels.
TensorFlow and the flow of dialogue
We chose TensorFlow because of its broad ecosystem support and ease of building models, which allowed us to provide a breadth of services for our customers. Generally, it takes quite a bit of time to build a research model, test it, and tune its latency for production services. However, our team built a baseline chatbot model (seq2seq) using TensorFlow in Python, then used the trained results with TensorFlow’s Java/Scala bindings to build a model that could handle multiple requests from users. To put our model into a production service, the library had to support various languages and environments, and few machine learning libraries do. In addition, because TensorFlow is widely used by researchers and developers, our team can easily develop and customize state-of-the-art models.
The builds were based on specific tasks such as NER, intent classification, multi-turn conversation modeling, and AutoML. We also drew on model architectures such as Seq2Seq, the Transformer, and pre-trained models such as BERT, and optimized the models to handle massive request volumes without degrading the user experience. Through extensive research and internal benchmarks, our team launched the chatbot service with a dialogue model that is competitive with other chatbot builders.
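To give a sense of what intent classification does, here is a deliberately simplified sketch: it maps a user utterance to the intent whose example questions it most resembles, using bag-of-words cosine similarity in plain Python. The intents and example phrases are hypothetical, and a production system like the one described here would use a trained TensorFlow model rather than this toy matcher.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Tokenize on whitespace and count word frequencies."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def classify_intent(utterance, intents):
    """Return the intent whose example questions best match the utterance."""
    vec = bag_of_words(utterance)
    best_intent, best_score = None, 0.0
    for intent, examples in intents.items():
        for example in examples:
            score = cosine_similarity(vec, bag_of_words(example))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

# Hypothetical intents for a customer-service chatbot
intents = {
    "check_order": ["where is my order", "track my order status"],
    "refund": ["I want a refund", "how do I return an item"],
}
print(classify_intent("can you track my order", intents))  # → check_order
```

A learned model generalizes far beyond shared words, but the shape of the task is the same: score the utterance against known intents and pick the best match.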
Summary of Build Procedure
To build the dialogue model, we first used conversation data (text) to train our model via our chatbot builder tools. This is a web console that lets you create and test chatbot conversation data and provides the various features required to develop chatbots. You can also upload data as JSON or Excel files.
Once you have input your dataset, click the “Model Learning” button at the top of the chatbot builder, and the model trains to understand the user’s intent and context.
- Natural language processing (NLP) analyzes the morphemes of the sentences added as questions and answers.
- Natural language understanding (NLU) determines what a sentence means and which answer is most closely associated with it. In addition, model learning proceeds with general entity mapping using NAVER’s data dictionaries. If only the entities that need to be learned are tagged in a specific domain, the chatbot engine learns just those tagged entities.
- Learning may take several minutes or even hours depending on the data size. NAVER Cloud Platform (NCP) uses GPUs for quick learning.
- Once every learning procedure is finished, TensorFlow’s custom Estimator features convert the models into frozen-graph PB (protocol buffer) files, which are deployed to the server to interact with customers. Users can connect through various channels such as LINE, Facebook, and even custom API gateways.
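The entity-mapping step above can be sketched as a dictionary lookup: known surface forms from a domain dictionary are tagged with their entity type wherever they appear in a sentence. The dictionaries below are hypothetical, and the real engine combines NAVER’s data dictionaries with learned models, but the sketch shows the basic idea.

```python
def tag_entities(sentence, dictionaries):
    """Tag dictionary entries found in the sentence with their entity type.

    dictionaries maps an entity type (e.g. "PLATFORM") to a set of known
    surface forms. Longer entries are matched first so that
    "NAVER Cloud Platform" wins over a shorter overlapping entry.
    """
    tags = []
    lowered = sentence.lower()
    entries = [(form, etype)
               for etype, forms in dictionaries.items()
               for form in forms]
    entries.sort(key=lambda e: len(e[0]), reverse=True)
    used = [False] * len(sentence)  # characters already claimed by a tag
    for form, etype in entries:
        start = lowered.find(form.lower())
        while start != -1:
            end = start + len(form)
            if not any(used[start:end]):
                tags.append((sentence[start:end], etype, start))
                for i in range(start, end):
                    used[i] = True
            start = lowered.find(form.lower(), end)
    return sorted(tags, key=lambda t: t[2])

# Hypothetical domain dictionaries
dictionaries = {
    "PLATFORM": {"NAVER Cloud Platform"},
    "CHANNEL": {"LINE", "Facebook"},
}
print(tag_entities(
    "Deploy the bot on NAVER Cloud Platform and connect it to LINE",
    dictionaries,
))
```

In practice the tagged spans become training signal: the engine learns to recognize those entity types in new sentences rather than relying on exact string matches.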
The chatbot quickstart guide is available here: http://dev-docs.ncloud.com/en/chatbot/chatbot-1-1.html
TensorFlow changed the way we interact with our customers. Through its ease of use, we were able to build models that address our customers’ needs. The framework makes it possible for our team to easily build models, support multiple languages, and rely on an active community. All of this improves our interactions with our customers, which is important to us. Given our success with TensorFlow, we will continue using the platform and improving our service by adding more features and enhancing our conversation models through research. We also plan to open-source our models so that others can benefit.