TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.

Mean Average Precision at K (MAP@K) clearly explained

7 min read · Jan 18, 2023


Photo by Joshua Burdick on Unsplash.

Mean Average Precision at K (MAP@K) is one of the most commonly used evaluation metrics for recommender systems and other ranking-related tasks. Since this metric is a composition of several sub-metrics, or layers, it may not be easy to understand at first glance.

This article explains MAP@K and its components step by step. At the end of this article you will also find code snippets showing how to calculate the metric. But before diving into each part of the metric, let’s talk about the WHY first.

WHY use MAP@K?

MAP@K is an evaluation metric you can use when the order or ranking of your recommended items plays an important role or is the objective of your task. It answers the following questions:

  • Are my generated or predicted recommendations relevant?
  • Are the most relevant recommendations on the first ranks?
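To make these two questions concrete, here is a minimal sketch of how MAP@K could be computed. The function names and the toy data are illustrative, not from the article; the structure (precision at each relevant rank, averaged per user, then averaged over users) is the standard definition of the metric.

```python
def average_precision_at_k(relevant, recommended, k):
    """AP@K for one user: average precision@i over ranks i (1-based)
    where the i-th recommended item is relevant."""
    hits = 0
    score = 0.0
    for i, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i  # precision at rank i
    if not relevant:
        return 0.0
    return score / min(len(relevant), k)


def map_at_k(relevant_per_user, recommended_per_user, k):
    """MAP@K: mean of the per-user AP@K values."""
    aps = [
        average_precision_at_k(rel, rec, k)
        for rel, rec in zip(relevant_per_user, recommended_per_user)
    ]
    return sum(aps) / len(aps)


# Toy example: two users, top-2 recommendations each.
# User 1 gets the relevant item at rank 1 (AP@2 = 1.0),
# user 2 gets it at rank 2 (AP@2 = 0.5), so MAP@2 = 0.75.
relevant_per_user = [{"a"}, {"a"}]
recommended_per_user = [["a", "b"], ["b", "a"]]
print(map_at_k(relevant_per_user, recommended_per_user, k=2))  # 0.75
```

Note how the metric rewards placing relevant items earlier: both users receive exactly one relevant item, but the user who sees it at rank 1 contributes a higher score.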

Making the following steps easier to understand
