Fast Customer Care Response with Deep Learning (NLP) and a Twitter Bot

Stephen Oni · Published in Analytics Vidhya · 7 min read · Mar 15, 2020

Photo by Clem Onojeghuo on Unsplash

The frustration that arises from using some institutional services often leads to an urgent need for the attention of those who specialize in such services. Some of the most frustrating experiences come from financial institutions like banks, and most of these banks have poor customer service that cannot provide the immediate response a customer needs. This leads to an even more frustrated customer who might end up abandoning the service altogether.

But sometimes we can't blame customer service for not responding fast; often a lot of customers demand attention at the same time, and not all of them require an urgent response. Because of this lag, customer care might be responding to the wrong person while the right person at the right time is neglected, and this can cause a major loss.

This points to the need for a system that learns to filter messages, reducing noise and thereby helping to improve customer service.

In this article, we will be pulling data from Twitter and also creating a Twitter bot. This follows the previous post on how to create a Twitter bot for data scientists.

Why use Twitter? Twitter has to be one of the major platforms where people speak their minds and share information. And since most of these organizations have a Twitter account for responding to customers, it makes it easy to pull data about bank customers venting their issues.

What is the article about? It is about infusing deep learning into a Twitter bot to make each organization's customer service better, by reducing noise and thereby increasing the response rate. The idea here can be extended to other domains.

What we will try to achieve are:

  1. How to pull data from Twitter
  2. How to clean our data
  3. How to label our data
  4. How to choose the right model
  5. How to infuse the model with a Twitter bot
  6. How to deploy the bot to AWS

How to pull data from Twitter

Since our data will be based on bank customers' tweets, we will gather the usernames of almost all the banks in Nigeria (since that's where I'm based). These usernames will be stored in a list, which will then be iterated through to get the complaints their customers post on Twitter.

We use api.Cursor to iterate through every page returned by api.search, which fetches every tweet mentioning the banks included in the list. The -filter:retweets clause filters out all retweets and only includes direct tweets from customers to the bank's Twitter account. Afterwards, a csv file is created for each bank's pulled tweets.
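A minimal sketch of this step, assuming tweepy 3.x; the bank handles, file names, and credentials are illustrative placeholders, not the exact ones used in the project:

import csv
import tweepy

# Hypothetical credentials; replace with your own Twitter API keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Illustrative list of bank handles to iterate through.
bank_handles = ["gtbank_help", "UBACares", "ZenithBank"]

for handle in bank_handles:
    # -filter:retweets keeps only direct tweets, not retweets.
    query = f"to:{handle} -filter:retweets"
    with open(f"{handle}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["created_at", "text"])
        # Cursor pages through every result returned by api.search.
        for tweet in tweepy.Cursor(api.search, q=query,
                                   tweet_mode="extended", lang="en").items(1000):
            writer.writerow([tweet.created_at, tweet.full_text])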

How to clean our data

Some of the tweets may contain emojis, embedded URLs, @ mentions and the like; it is best to remove all of these from the tweet text so that we train our model on clean text only. This process is done for each bank's csv file, after which the csv files are concatenated into a single file.
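A rough sketch of the cleaning step, assuming pandas and simple regular expressions; the column names and file paths are illustrative:

import glob
import re
import pandas as pd

def clean_tweet(text):
    text = re.sub(r"http\S+", "", text)        # strip embedded URLs
    text = re.sub(r"@\w+", "", text)           # strip @ mentions
    text = re.sub(r"[^\x00-\x7F]+", "", text)  # strip emojis / non-ASCII
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

frames = []
for path in glob.glob("data/*.csv"):  # one csv per bank
    df = pd.read_csv(path)
    df["text"] = df["text"].astype(str).apply(clean_tweet)
    frames.append(df)

# Concatenate every bank's cleaned tweets into a single file.
pd.concat(frames, ignore_index=True).to_csv("all_tweets.csv", index=False)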

How to label the data

Labeling the data is the most crucial and tedious part because it determines the fate of our model, and it takes a lot of tiresome hours. This is the point at which we classify each tweet as negative or positive. But note: our sense of negative or positive depends on what we consider a tweet most in need of attention (an emergency), so labeling a tweet negative means it needs urgent attention. Since the definition of what needs attention may differ from person to person, a domain authority should be the one to define which is which.

To make the labeling fast, a UI was built with JavaScript; it provides easy navigation and temporary storage, which helps keep track of where you stopped labeling when you get tired.

How to choose the right model

The model used for this project could have been any NLP text classification algorithm provided by libraries like TextBlob. But for this project, I made use of ULMFiT, a state-of-the-art transfer learning method for NLP proposed by Jeremy Howard and Sebastian Ruder.

The major reasons for choosing it are:

  1. Some of the text in the data is truncated, and we want to be able to build a language model that can rebuild some of this truncated text by predicting the word that might come next.
  2. Some of the customer complaints that require urgent attention contain a pleasant statement first (a positive statement) followed by the complaint. Some text classification algorithms will classify these as neutral.

How does ULMFiT work?

[Diagram: the three stages of the ULMFiT training pipeline. Source: Novetta]

First, we train a language model on WikiText-103; then this language model is fine-tuned on our banks' tweet data to create a domain language model, after which a classifier is trained using the pre-trained weights of the language model to increase performance.
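A condensed sketch of that pipeline in fastai v1 (the version used in course v3); the file and column names are illustrative, and the hyperparameters are typical course defaults rather than the exact values used here:

from fastai.text import *

path = Path("data")

# Stage 1 (WikiText-103 pretraining) comes for free with fastai's
# pretrained AWD_LSTM weights. Stage 2: fine-tune it on our tweets.
data_lm = TextLMDataBunch.from_csv(path, "all_tweets.csv", text_cols="text")
learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn_lm.fit_one_cycle(1, 1e-2)
learn_lm.unfreeze()
learn_lm.fit_one_cycle(1, 1e-3)
learn_lm.save_encoder("ft_enc")  # keep the fine-tuned encoder

# Stage 3: train the classifier on top of the fine-tuned encoder.
data_clas = TextClasDataBunch.from_csv(path, "all_tweets.csv",
                                       vocab=data_lm.train_ds.vocab,
                                       text_cols="text", label_cols="label")
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder("ft_enc")
learn.fit_one_cycle(1, 1e-2)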

The code is fully based on the lessons of course v3 of fastai; you can watch the video here.

How to infuse the model with a Twitter bot

After training the model, we save the weights so they can be imported at inference time. fastai provides a way to make inference simple without downloading the pretrained AWD_LSTM and without importing the data pickle file with load_data, as shown in the code above. Using learner.export(), which creates an export.pkl file, gives us this flexibility: just by calling load_learner(path), all the necessary data and weights are imported.

To create the Twitter bot, check out my previous post.

The make_model() function helps create the model for inference.
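A minimal sketch of what such a make_model() could look like, assuming the export.pkl produced by learner.export() sits in the working directory (the function body is an assumption, not the project's exact code):

from fastai.text import load_learner

def make_model(path="."):
    # load_learner picks up export.pkl and restores data + weights.
    return load_learner(path)

learn = make_model()
# predict() returns (category, class index, probabilities).
pred_class, pred_idx, probs = learn.predict("My card was debited but no cash!")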

The main objective of the bot is to check whether any incoming tweet is classified as negative; if so, it sends an emergency email to the customer service address provided, including the tweet URL for a proper view of the tweet.
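A sketch of that loop with tweepy 3.x's streaming API, reusing the api object and learner from the earlier sketches; send_alert() is a hypothetical helper (a possible implementation follows in the SendGrid section below):

import tweepy

class ComplaintListener(tweepy.StreamListener):
    def __init__(self, api, learner):
        super().__init__(api)
        self.learner = learner

    def on_status(self, status):
        # Classify the incoming tweet with the exported model.
        pred_class, _, _ = self.learner.predict(status.text)
        if str(pred_class) == "negative":
            # Rebuild the tweet URL so support can view it directly.
            url = (f"https://twitter.com/{status.user.screen_name}"
                   f"/status/{status.id}")
            send_alert(url)  # hypothetical helper, sketched below

listener = ComplaintListener(api, learn)
stream = tweepy.Stream(auth=api.auth, listener=listener)
stream.filter(track=["@your_bank_handle"])  # illustrative handle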

To send an email, it is better to use a SendGrid account than the Gmail API; to create a SendGrid account and obtain an API token, check here.

This sends the mail with the subject Emergency status, with the content including the link to the tweet status.
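A possible send_alert() using SendGrid's v3 Python library; the sender and recipient addresses are placeholders:

import os
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

def send_alert(tweet_url):
    message = Mail(
        from_email="bot@example.com",     # placeholder sender
        to_emails="support@example.com",  # placeholder support inbox
        subject="Emergency status",
        html_content=f"Urgent complaint: <a href='{tweet_url}'>{tweet_url}</a>")
    sg = SendGridAPIClient(os.environ["SENDGRID_API_KEY"])
    sg.send(message)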

How to deploy the bot to AWS

First, we create a Dockerfile to containerize the bot, which makes it easier to deploy on AWS without codebase environment issues.
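A minimal Dockerfile along these lines might look like the following; the file names and project layout are assumptions:

FROM python:3.7-slim

WORKDIR /app

# Install the bot's dependencies first so this layer is cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the bot code and the exported model (export.pkl).
COPY . .

CMD ["python", "bot.py"]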

Then, to build the image, we run this in the terminal:

$ docker build . -t cus-bot

After that, we test the docker image we created:

$ docker run -it -e CONSUMER_KEY="AFEFFEF2103Y80Y8" \
-e CONSUMER_SECRET="ASW34134FEFEFMNLN8YH8Y8" \
-e ACCESS_TOKEN="qqww23d43od94djdi3dj" \
-e ACCESS_TOKEN_SECRET="AASWD03993322" \
cus-bot

This will start the Twitter bot. If it works, we are good to go; next, we convert the image to a .tar file to make it easy to move onto AWS.

$ docker image save cus-bot -o cus-bot.tar
$ gzip cus-bot.tar

After the image has been zipped, we can go ahead and create an instance on AWS EC2; we generate the EC2 key in the form of yourkey.pem, and Docker is then installed on the instance. For the full gist and a quick explanation, check out realpython.com, which walks you step by step through the deployment on AWS.
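The copy-and-load step would look roughly like this (the instance address is a placeholder):

$ scp -i yourkey.pem cus-bot.tar.gz ubuntu@<ec2-public-ip>:/home/ubuntu
$ ssh -i yourkey.pem ubuntu@<ec2-public-ip>
$ docker load -i cus-bot.tar.gz

The bot can then be started on the instance with the same docker run command, and the same -e credential flags, used in the local test.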

Challenges

The only challenge encountered during this process was that the docker image generated for the bot was around 2 GB in size, while my AWS EC2 instance is on the free tier, which only gives 1 GB for the Ubuntu 18.04 image I chose. The docker image itself has been tested and works properly; for example, I tweeted this:

This is like a customer complaining and referencing my name in the tweet. The bot captures this tweet and runs it through our model to see whether it is negative. If it is, I receive an emergency email with the link to the tweet status, which I did:

To use the same model I used and create exactly the same bot, check the code here and download the weights here; the code for training the model, including the UI for labeling, can be obtained from GitHub.

Note: the dataset used for this training comes from research work at Data Science Nigeria.
