Maximal Marginal Relevance to Re-rank results in Unsupervised KeyPhrase Extraction

Aditya Kumar · Published in tech-that-works
Oct 24, 2019 · 4 min read

Maximal Marginal Relevance, a.k.a. MMR, was introduced in the paper The Use of MMR, Diversity-Based Reranking for Reordering Documents and Producing Summaries. MMR tries to reduce the redundancy of results while maintaining their relevance to the query when re-ranking already ranked documents, phrases, and similar items.

We will first walk through an example scenario and then see how MMR helps solve the issue.

Recently I was trying to extract keyphrases from a set of documents that belong to one category. I used different approaches (TextRank, RAKE, POS tagging, to name a few) to extract keywords from the documents; each provides phrases along with a score, and this score is used to rank the phrases for that document.

Let’s say your final keyphrases are ranked like Good Product, Great Product, Nice Product, Excellent Product, Easy Install, Nice UI, Light Weight, etc. There is an issue with this ranking: phrases like good product, nice product, and excellent product are similar, describe the same property of the product, and are all ranked highly. Suppose we have space to show just 5 keyphrases; in that case, we don't want to fill it with these near-duplicate phrases.

You want to utilize this limited space properly, so that the information the keyphrases convey about the document is diverse. Similar types of phrases should not dominate the whole space, and users should see a variety of information about the document.

Ranking of Keyphrase Extraction

We are going to address this problem in this blog post. There may be several ways to solve it; for the sake of simplicity and completeness, I am going to discuss two approaches:

  1. Remove redundant phrases using cosine similarity:

Using cosine similarity is the naive approach that comes to mind for dealing with terms that have the same meaning. Use word embeddings to embed the phrases, compute the cosine similarity between the embeddings, and set a threshold above which two terms are considered similar. From each group of clubbed phrases, keep only the keyphrase with the higher score in the result, as sketched below.

Remove Redundant Phrases
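The actual code is embedded as a gist in the post; below is a minimal sketch of the idea. The `dedupe_phrases` helper and its inputs are illustrative assumptions, with phrase vectors supplied as a dict (e.g., averaged word embeddings):

```python
import numpy as np

def dedupe_phrases(phrases, embeddings, threshold=0.9):
    """Keep only the highest-scoring phrase from each group of near-duplicates.

    phrases    : list of (phrase, score) pairs
    embeddings : dict mapping each phrase to its vector
                 (e.g., the average of its word embeddings)
    threshold  : cosine similarity above which two phrases are clubbed
    """
    def unit(v):
        return v / np.linalg.norm(v)

    kept = []
    # Visit phrases from the highest score down, so the best phrase in
    # each cluster of near-duplicates is the one that survives.
    for phrase, score in sorted(phrases, key=lambda p: p[1], reverse=True):
        v = unit(embeddings[phrase])
        if all(float(v @ unit(embeddings[k])) < threshold for k, _ in kept):
            kept.append((phrase, score))
    return kept

# Toy usage; real vectors would come from pre-trained embeddings
# such as GloVe or fastText rather than random numbers.
rng = np.random.default_rng(0)
phrases = [("good product", 0.91), ("nice product", 0.88), ("easy install", 0.75)]
embeddings = {p: rng.normal(size=50) for p, _ in phrases}
print(dedupe_phrases(phrases, embeddings))
```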

An issue with this approach is that you need to set the threshold (0.9 in the code) above which terms will be clubbed together, and sometimes very close keywords have a cosine similarity below that threshold. Here, word embeddings are used to convert a phrase to a vector by averaging its word vectors. Keeping the threshold low just brings back the original issue, and I found it difficult to manually tweak the threshold to cover all edge cases.

  2. Re-rank the keyphrases using MMR:

The idea behind using MMR, which is also used in text summarization, is to reduce redundancy and increase diversity in the result. MMR selects each phrase for the final keyphrase list according to a combined criterion of query relevance and novelty of information.

The latter measures the degree of dissimilarity between the document being considered and previously selected ones already in the ranked list. [1]

MMR ranking provides a useful way to present information to the user without redundancy. It considers the similarity of a keyphrase to the document, along with its similarity to the phrases already selected.

Maximal Marginal Relevance is defined as:

MMR := \arg\max_{D_i \in R \setminus S} \Big[ \lambda \,\mathrm{Sim}_1(D_i, Q) - (1 - \lambda) \max_{D_j \in S} \mathrm{Sim}_2(D_i, D_j) \Big]

where, Q = query (a description of the document category)
R = set of documents related to the query Q
S = subset of documents in R already selected
R\S = set of unselected documents in R
λ = constant in the range [0, 1], for diversification of results

In the implementation of MMR below, cosine similarity is used for both Sim_1 and Sim_2. Any other similarity measure can be used, and the function can be modified accordingly.

Maximal Marginal Relevance
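The implementation is embedded as a gist in the post; the sketch below shows the same greedy MMR selection over phrase embeddings. The function name `mmr` and its inputs (a document vector and a matrix of phrase vectors) are assumptions for illustration:

```python
import numpy as np

def mmr(doc_embedding, phrase_embeddings, phrases, lambda_=0.5, top_n=5):
    """Greedy MMR re-ranking of candidate phrases.

    doc_embedding     : vector representing the document (the query Q)
    phrase_embeddings : array of shape (n, d), one vector per phrase
    phrases           : list of n candidate phrases
    lambda_           : relevance/diversity trade-off, in [0, 1]
    """
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Sim_1: relevance of every candidate phrase to the document.
    relevance = [cosine(v, doc_embedding) for v in phrase_embeddings]

    selected = [int(np.argmax(relevance))]  # start with the most relevant phrase
    candidates = [i for i in range(len(phrases)) if i != selected[0]]

    while candidates and len(selected) < top_n:
        def mmr_score(i):
            # Sim_2: similarity to the closest already-selected phrase.
            redundancy = max(cosine(phrase_embeddings[i], phrase_embeddings[j])
                             for j in selected)
            return lambda_ * relevance[i] - (1 - lambda_) * redundancy

        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)

    return [phrases[i] for i in selected]
```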

Setting λ to 0.5 gives an even mix of diversity and relevance in the result set; the value of λ can be tuned based on your use case and dataset.
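For illustration, here is how λ changes the behavior of the `mmr` sketch above (random vectors stand in for real embeddings):

```python
# lambda_=1.0 ranks purely by relevance to the document,
# lambda_=0.0 purely by diversity; 0.5 balances the two.
rng = np.random.default_rng(0)
doc = rng.normal(size=50)
cands = ["good product", "nice product", "easy install", "light weight"]
vecs = rng.normal(size=(len(cands), 50))
print(mmr(doc, vecs, cands, lambda_=0.5, top_n=3))
```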

MMR addresses the issue by pushing similar phrases apart in the ranking. The problem of selecting the top N keyphrases is thus resolved: near-duplicate terms are no longer grouped together at the top, so they don't all appear in the final result.

Please let me know if you liked the post or have suggestions or concerns, and feel free to reach out to me on LinkedIn.

References:

  1. Carbonell, J. and Goldstein, J. The Use of MMR, Diversity-Based Reranking for Reordering Documents and Producing Summaries. SIGIR 1998.
  2. Bennani-Smires, K. et al. Simple Unsupervised Keyphrase Extraction using Sentence Embeddings. CoNLL 2018.
