Mean Average Precision at K (MAP@K) clearly explained
One of the most popular evaluation metrics for recommender or ranking problems step by step explained
Mean Average Precision at K (MAP@K) is one of the most commonly used evaluation metrics for recommender systems and other ranking tasks. Because this metric is built up from several simpler metrics, or layers, it may not be easy to understand at first glance.
This article explains MAP@K and its components step by step. At the end of the article you will also find code snippets showing how to calculate the metric. But before diving into each part of the metric, let’s talk about the WHY first.
WHY use MAP@K?
MAP@K is an evaluation metric that can be used when the order or ranking of your recommended items plays an important role or is the objective of your task. By using it, you get answers to the following questions:
- Are my generated or predicted recommendations relevant?
- Are the most relevant recommendations on the first ranks?
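To make these two questions concrete before the step-by-step breakdown, here is a minimal sketch of how MAP@K could be computed for a set of users. The function names and the toy data are illustrative assumptions, not code from this article's later snippets:

```python
from typing import List


def average_precision_at_k(actual: List[str], predicted: List[str], k: int) -> float:
    """AP@K for a single user: rewards relevant items that appear
    early in the ranked recommendation list."""
    if not actual:
        return 0.0
    hits = 0
    score = 0.0
    for i, item in enumerate(predicted[:k]):
        # count an item only if it is relevant and not a duplicate
        if item in actual and item not in predicted[:i]:
            hits += 1
            score += hits / (i + 1)  # precision at this rank
    return score / min(len(actual), k)


def map_at_k(actual_lists: List[List[str]], predicted_lists: List[List[str]], k: int) -> float:
    """MAP@K: the mean of AP@K over all users."""
    ap_scores = [
        average_precision_at_k(a, p, k)
        for a, p in zip(actual_lists, predicted_lists)
    ]
    return sum(ap_scores) / len(ap_scores)


# Toy example (hypothetical data): two users, top-3 recommendations each
actual = [["a", "b"], ["x"]]
predicted = [["a", "c", "b"], ["y", "x", "z"]]
print(map_at_k(actual, predicted, k=3))
```

Note how the first user's list scores higher than the second's: both contain all relevant items, but the first user's relevant items sit closer to the top of the ranking, which is exactly the behavior the two questions above describe.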