Real-Time Text Classification Using Kafka and Scikit-learn

Velotio Technologies
Velotio Perspectives
5 min read · Feb 7, 2019

Introduction:

Text classification is one of the essential tasks in supervised machine learning (ML). Assigning categories to text, which can be tweets, Facebook posts, web pages, library books, media articles, etc., has many applications like spam filtering and sentiment analysis. In this blog, we build a text classification engine to classify topics in an incoming Twitter stream using Apache Kafka and scikit-learn, a Python-based machine learning library.

Let’s dive into the details. Here is how the components and data flow fit together: the Kafka producer ingests data from Twitter and sends it to the Kafka broker, and the Kafka consumer asks the Kafka broker for the tweets. We convert the binary tweet stream from Kafka into human-readable strings and perform predictions using the saved model. We train the model on Twenty Newsgroups, a standard dataset available through scikit-learn that is commonly used for training classification algorithms.

In this blog, we will use the Multinomial Naive Bayes classifier as our machine learning model.

We have used the following libraries/tools:

  • tweepy — Twitter library for Python
  • Apache Kafka
  • scikit-learn
  • pickle — Python Object serialization library

Let’s first understand the following key concepts:

  • Word to Vector Methodology (Word2Vec)
  • Bag-of-Words
  • tf-idf
  • Multinomial Naive Bayes classifier

Word2Vec methodology

One of the key ideas in Natural Language Processing (NLP) is how to efficiently convert words into numeric vectors, which can then be given as input to machine learning models to perform predictions.

Neural networks, and most other machine learning models, are nothing but mathematical functions which need numbers or vectors to churn out an output. The exception is tree-based methods, which can work on words directly.

One approach to this is known as Word2Vec. Before we get there, a very trivial solution would be the “one-hot” method: converting each word into a sparse vector with only one element set to 1 and the rest zero.

For example, “the apple a day the good” would have the following representation:
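A minimal sketch in Python of how that one-hot representation can be built (the alphabetical vocabulary ordering here is an arbitrary choice):

```python
import numpy as np

sentence = "the apple a day the good".split()
vocab = sorted(set(sentence))  # 5 unique words: ['a', 'apple', 'day', 'good', 'the']
index = {word: i for i, word in enumerate(vocab)}

# One row per token, one column per vocabulary word.
one_hot = np.zeros((len(sentence), len(vocab)), dtype=int)
for row, word in enumerate(sentence):
    one_hot[row, index[word]] = 1

print(one_hot)  # a 6x5 matrix with a single 1 in each row
```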

Here we have transformed the above sentence into a 6×5 matrix, with 5 being the size of the vocabulary since “the” is repeated. But what are we supposed to do when we have a gigantic dictionary to learn from, say more than 100,000 words? This is where one-hot encoding fails. It also loses the relationship between words; for example, nothing captures that “Lanka” should come after “Sri”.

Here is where Word2Vec comes in. Our goal is to vectorize the words while maintaining the context. Word2vec can utilize either of two model architectures to produce a distributed representation of words: continuous bag-of-words (CBOW) or continuous skip-gram. In the continuous bag-of-words architecture, the model predicts the current word from a window of surrounding context words. The order of context words does not influence prediction (bag-of-words assumption). In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words.
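As a reference point, the gensim library (not used in the rest of this post) exposes both architectures through a single flag; a minimal sketch, assuming gensim 4.x:

```python
from gensim.models import Word2Vec

sentences = [
    ["kafka", "streams", "tweets"],
    ["naive", "bayes", "classifies", "tweets"],
]

# sg=0 selects CBOW, sg=1 selects skip-gram.
cbow = Word2Vec(sentences, vector_size=10, window=2, min_count=1, sg=0)
skip_gram = Word2Vec(sentences, vector_size=10, window=2, min_count=1, sg=1)

print(cbow.wv["tweets"])  # a 10-dimensional dense vector for "tweets"
```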

Tf-idf (term frequency–inverse document frequency)

TF-IDF is a statistic which measures how important a word is to a document in a given corpus. Variations of tf-idf are used by search engines, for text summarization, etc.
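In scikit-learn, TfidfVectorizer computes these scores directly; a small illustrative sketch, assuming scikit-learn 1.0+:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "kafka streams tweets",
    "kafka consumes tweets from the broker",
    "naive bayes classifies tweets",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)  # sparse matrix: 3 documents x vocabulary size

# A word appearing in every document (like "tweets") gets a lower weight
# than a word unique to one document (like "broker").
print(vectorizer.get_feature_names_out())
print(tfidf.toarray().round(2))
```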

Multinomial Naive Bayes classifier

The Naive Bayes classifier comes from a family of probabilistic classifiers based on Bayes’ theorem. It is commonly used to classify messages as spam or not spam, articles as sports or politics, etc. We are going to use it to classify the incoming stream of tweets.
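Concretely, the classifier scores each class c for a document d as P(c|d) ∝ P(c) · Π P(w|c) over the document’s words; scikit-learn implements this as MultinomialNB. A tiny spam/not-spam sketch with made-up data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["win money now", "meeting at noon", "cheap money win", "noon lunch meeting"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vect = CountVectorizer()
clf = MultinomialNB().fit(vect.fit_transform(docs), labels)

# "win" and "cheap" only occur in spam documents, so this leans towards class 1.
print(clf.predict(vect.transform(["win cheap lunch"])))
```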

Let’s see how they fit in together.

The data from the “20 newsgroups dataset” is entirely in text format. We cannot feed it directly to any model to do mathematical calculations. We have to extract features from the dataset and convert them to numbers which a model can ingest and then produce an output.
So, we use bag-of-words counts and tf-idf for extracting features from the dataset and then feed them to a Multinomial Naive Bayes classifier to get predictions, as sketched below.
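In scikit-learn terms, that chain is naturally expressed as a Pipeline (a sketch of the composition; the exact code in the original gist may differ):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

text_clf = Pipeline([
    ("vect", CountVectorizer()),    # text -> bag-of-words counts
    ("tfidf", TfidfTransformer()),  # counts -> tf-idf weights
    ("clf", MultinomialNB()),       # tf-idf features -> class predictions
])
```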

1. Train Your Model

We are going to use the 20 newsgroups dataset described above. We create a new file and import the needed libraries: sklearn for ML and pickle to save the trained model. Then we define and train the model.
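Putting the pieces together, a sketch of the full training script under the assumptions above (the model.pkl filename is an assumption reused in the later snippets):

```python
import pickle

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Download the training split of the 20 newsgroups dataset.
train = fetch_20newsgroups(subset="train", shuffle=True, random_state=42)

model = Pipeline([
    ("vect", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
    ("clf", MultinomialNB()),
])
model.fit(train.data, train.target)

# Sanity-check accuracy on the held-out test split.
test = fetch_20newsgroups(subset="test", shuffle=True, random_state=42)
print("test accuracy:", model.score(test.data, test.target))

# Persist the fitted pipeline for the Kafka consumer to load later.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```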

2. The Kafka Tweet Producer

We have the trained model in place. Now let's get the real-time stream of tweets from Twitter into Kafka. First, we define the producer.

Now we define the Kafka settings and create the KafkaPusher class. This is necessary because we need to forward the data coming from the tweepy stream to the Kafka producer.
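A sketch of the producer with tweepy 3.x (the StreamListener API was removed in tweepy 4) and kafka-python; the credential placeholders, topic name, and tracked keywords are assumptions, not values from the original post:

```python
import tweepy
from kafka import KafkaProducer

# Twitter API credentials (placeholders).
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

KAFKA_TOPIC = "twitter-stream"  # assumed topic name

producer = KafkaProducer(bootstrap_servers="localhost:9092")

class KafkaPusher(tweepy.StreamListener):
    """Forwards the text of every incoming tweet to the Kafka topic."""

    def on_status(self, status):
        producer.send(KAFKA_TOPIC, status.text.encode("utf-8"))
        return True

    def on_error(self, status_code):
        # Returning False on HTTP 420 disconnects the stream (rate limited).
        return status_code != 420

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

stream = tweepy.Stream(auth=auth, listener=KafkaPusher())
stream.filter(track=["politics", "medicine", "apple"])  # assumed keywords
```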

Note — You need to start the Kafka server before running this script.

3. Loading your model for predictions

Now we have the trained model from step 1 and a Twitter stream from step 2. Let's use the model to make actual predictions. The first step is to load the model:
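Assuming the model.pkl file written in step 1:

```python
import pickle

with open("model.pkl", "rb") as f:
    model = pickle.load(f)
```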

Then we start the Kafka consumer and begin predictions:
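A sketch that continues the script above, reusing the topic name assumed in the producer; the category names come from the same 20 newsgroups dataset used for training:

```python
from kafka import KafkaConsumer
from sklearn.datasets import fetch_20newsgroups

# Category names, in the same order as the numeric training labels.
target_names = fetch_20newsgroups(subset="train").target_names

consumer = KafkaConsumer("twitter-stream", bootstrap_servers="localhost:9092")

for message in consumer:
    tweet = message.value.decode("utf-8")  # Kafka delivers raw bytes
    predicted = model.predict([tweet])[0]  # model loaded in the snippet above
    print(tweet, "=>", target_names[predicted])
```

Running the consumer against the live stream produces predictions like these: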

  • RT @amazingatheist: Making fun of kids who survived a school shooting just days after the event because you disagree with their politics is… => talk.politics.misc
  • https://twitter.com/i/web/status/966225658983587841 => sci.med
  • RT @DavidKlion: Apropos of that D’Souza tweet; I think in order to make sense of our politics, you need to understand that there are some t… => talk.politics.misc
  • RT @BeauWillimon: These students have already cemented a place in history with their activism, and they’re just getting started. No one will… => talk.politics.misc
  • RT @byedavo: Cause we ain’t got no president https://t.co/yPVosCKPFm => talk.politics.misc
  • RT @appleinsider: .@Apple reportedly in talks to buy cobalt, key Li-ion battery ingredient, directly from miners https://t.co/onkaB3zRO0 ht… => comp.sys.mac.hardware

Here is the link to the complete git repository: https://github.com/velotio-tech/kafka-ml

Conclusion:

In this blog, we built a data pipeline that uses a Naive Bayes model to classify streaming Twitter data. The same approach can be applied to other sources of data like news articles, blog posts, etc. Do let us know if you have any questions, queries, or additional thoughts in the comments section below. Happy coding!

**************************************************************

This post was originally published on Velotio Blog.

Velotio Technologies is a software engineering firm, with core expertise in Data Science, Machine Learning and DevOps. Our modus operandi is working with the latest transformative tech to turbocharge customer success.

Interested in learning more about us? We would love to connect with you on our Website, LinkedIn or Twitter.

*******************************************************************
