Life’s Pretty Easy with scikit-learn

In this article we’ll implement a TF-IDF vectorizer from scratch and cross-check its output against the standard scikit-learn implementation. scikit-learn makes this kind of computation very easy: a couple of lines of code produce the desired output.

Akshay J1n
Analytics Vidhya
5 min read · Jul 11, 2020


What does TF-IDF mean?

TF-IDF stands for term frequency-inverse document frequency. The TF-IDF weight is a weight often used in information retrieval and text mining: a statistical measure that evaluates how important a word is to a document in a collection or corpus. The importance of a word increases proportionally to the number of times it appears in the document, but is offset by how frequently the word appears across the corpus. Variations of the TF-IDF weighting scheme are often used by search engines as a central tool in scoring and ranking a document’s relevance given a user query.

One of the simplest ranking functions is computed by summing the TF-IDF weight for each query term; more sophisticated ranking functions are variants of this simple model.
TF-IDF can also be used successfully for stop-word filtering in various subject fields, including text summarization and classification.

The standard definition above is taken from the website linked below.

How is TF-IDF computed?

Typically, the TF-IDF weight is composed of two terms: the first computes the normalized term frequency (TF), i.e., the number of times a word appears in a document divided by the total number of words in that document; the second term is the inverse document frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents in which the particular term appears.

TF: Term Frequency, which measures how frequently a term occurs in a document. Since documents differ in length, a term is likely to appear many more times in a long document than in a short one. The term frequency is therefore often divided by the document length (i.e., the total number of terms in the document) as a simple way of normalizing:
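In symbols, this is the standard definition:

$$\mathrm{tf}(t, d) = \frac{\text{number of times } t \text{ appears in } d}{\text{total number of terms in } d}$$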

IDF: Inverse Document Frequency, which measures how important a term is. While computing term frequency, all terms are considered equally important. However, certain terms, such as “but”, “a”, and “these”, may appear many times yet have little importance. We therefore need to weigh down these frequent terms while scaling up the rare ones, by computing the following:
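The standard textbook form is:

$$\mathrm{idf}(t) = \log\frac{N}{\mathrm{df}(t)}$$

where N is the total number of documents in the corpus and df(t) is the number of documents containing the term t. The TF-IDF weight of a term in a document is then the product tf(t, d) × idf(t).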

Example:

Let’s take a document in a corpus containing 100 words, in which the word “book” appears 5 times. The term frequency for “book” is then 5 / 100 = 0.05. Now, assume we have 10 million documents in the corpus and the word “book” appears in one thousand of them. The inverse document frequency is then calculated as log(10,000,000 / 1,000) = 4. The TF-IDF weight is the product of these quantities: 0.05 × 4 = 0.20.
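As a quick sanity check, here is the same arithmetic in Python (a minimal sketch; note that the example implicitly uses a base-10 logarithm):

```python
import math

tf = 5 / 100                          # "book" appears 5 times in a 100-word document
idf = math.log10(10_000_000 / 1_000)  # log10(total documents / documents containing "book")
print(tf * idf)                       # 0.05 * 4.0 = 0.2
```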

Let’s build a TF-IDF vectorizer and compare its results with sklearn:

The task: compare the results of your own implementation of the TF-IDF vectorizer with those of sklearn’s TF-IDF vectorizer.

Sklearn makes a few more tweaks in its implementation of the TF-IDF vectorizer, so to replicate its exact results you would need to add the following to your custom implementation:

1. Sklearn builds its vocabulary in alphabetical order, and its IDF vector follows this ordering.

2. Sklearn’s IDF formula differs from the standard textbook formula. The constant “1” is added to the numerator and denominator of the IDF, as if an extra document were seen containing every term in the collection exactly once, which prevents zero divisions; a “1” is also added outside the logarithm (see the formula just after this list).

3. Sklearn applies L2 normalization on its output matrix.

4. The final output of the sklearn TF-IDF vectorizer is a sparse matrix.
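Concretely, with TfidfVectorizer’s default smooth_idf=True, the IDF used by sklearn is documented as:

$$\mathrm{idf}(t) = \ln\frac{1 + n}{1 + \mathrm{df}(t)} + 1$$

where n is the number of documents in the corpus and df(t) is the number of documents containing term t. Note that sklearn uses the natural logarithm here.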

Steps to approach this task:

Let’s take a sample corpus:
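For illustration, assume a small corpus such as the one below. This exact corpus is a hypothetical stand-in (lowercase words of two or more characters, so that simple whitespace tokenization agrees with sklearn’s default tokenizer):

```python
# hypothetical sample corpus for the walkthrough below
corpus = [
    "this is the first document",
    "this document is the second document",
    "and this is the third one",
    "is this the first document",
]
```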

1. Write both fit and transform methods for your custom implementation of the TF-IDF vectorizer. Print the alphabetically sorted vocabulary after you fit your data and check whether it is the same as the feature names from the sklearn TF-IDF vectorizer.

2. Print the IDF values from your implementation and check whether they are the same as the IDF values of sklearn’s TF-IDF vectorizer.

3. Once your vocabulary and IDF values match sklearn’s implementation, make sure the output of your implementation is a sparse matrix. Before generating the final output, normalize your sparse matrix using L2 normalization; you can refer to https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html. A sketch of such an implementation follows below.
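Here is a minimal sketch of such a fit/transform pair, using the sample corpus defined above. It assumes simple whitespace tokenization on lowercase text; the function names fit and transform are illustrative, not sklearn’s API:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.preprocessing import normalize

def fit(corpus):
    """Build an alphabetically sorted vocabulary and sklearn-style smoothed IDFs."""
    vocab = sorted({word for doc in corpus for word in doc.split()})
    word_index = {word: i for i, word in enumerate(vocab)}
    n = len(corpus)
    df = np.zeros(len(vocab))  # document frequency of each term
    for doc in corpus:
        for word in set(doc.split()):
            df[word_index[word]] += 1
    idf = np.log((1 + n) / (1 + df)) + 1  # sklearn's smoothed IDF formula
    return word_index, idf

def transform(corpus, word_index, idf):
    """Return the L2-normalized sparse TF-IDF matrix for the corpus."""
    rows, cols, vals = [], [], []
    for row, doc in enumerate(corpus):
        words = doc.split()
        counts = {}
        for word in words:
            counts[word] = counts.get(word, 0) + 1
        for word, count in counts.items():
            if word in word_index:
                col = word_index[word]
                rows.append(row)
                cols.append(col)
                vals.append((count / len(words)) * idf[col])  # tf * idf
    matrix = csr_matrix((vals, (rows, cols)), shape=(len(corpus), len(word_index)))
    return normalize(matrix, norm="l2")  # L2 normalization, as sklearn does

word_index, idf = fit(corpus)
our_matrix = transform(corpus, word_index, idf)
print(list(word_index))         # alphabetically sorted vocabulary
print(idf)                      # IDF values, to compare with vectorizer.idf_
print(our_matrix[0].toarray())  # first row of the L2-normalized sparse output
```

Note that dividing the counts by document length does not change the final L2-normalized rows, so this should match sklearn’s output, which uses raw counts before normalization.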

Now let’s compare our results with scikit-learn’s standard TF-IDF vectorizer.

scikit-learn really does make our lives easy: with just a couple of lines of code we get the same output as we computed above.

Sklearn Implementation
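A sketch of the equivalent sklearn call, on the same sample corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)   # sparse, L2-normalized output

print(vectorizer.get_feature_names_out())  # get_feature_names() on sklearn < 1.0
print(vectorizer.idf_)                     # smoothed IDF values
print(tfidf[0].toarray())                  # should match our first row above
```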
