Understand Evaluation Metrics of Object Detection: GIoU, Objectness, Classification, Precision, Recall, mAP

Yutian Lei
Mar 15, 2022


I was using YOLOv5 on Supervisely for AI-assisted labeling and came across these metrics in the tutorial, where they are displayed as model results. I looked them up and collected the information here for a better understanding of what they mean.

GIoU

To explain Generalized Intersection over Union (GIoU), we have to start with IoU. The computation of IoU is straightforward: it is the area of overlap between the predicted bounding box and the ground-truth bounding box, divided by the area of their union, so it indicates how much the two boxes overlap. If our prediction is perfect, the two bounding boxes overlap completely and the IoU is 1.

Source: https://github.com/rafaelpadilla/Object-Detection-Metrics
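
To make this concrete, here is a minimal Python sketch of the computation (my own illustration, not code from the Supervisely tutorial), assuming boxes are given in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # 1.0 -- perfect prediction
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0 -- no overlap
```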

However, IoU cannot distinguish better predictions from worse ones once there is no intersection at all; by saying "better", we mean a prediction that lies closer to the ground truth. In the third picture below, the nearby blue bounding box is a better prediction than the ones far away, yet they all share an IoU of 0 because there is no overlap.

IoU

Therefore, GIoU was developed to measure the goodness of a prediction even when there is no intersection between the prediction and the ground truth. Note that in this Supervisely case, the GIoU chart actually shows the GIoU loss (commonly defined as 1 - GIoU), so lower values are better.

Source: https://giou.stanford.edu/
IoU vs GIoU. Source: https://www.youtube.com/watch?v=ENZBhDx0kqM
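
A sketch of GIoU under the same assumptions as the IoU example above: it subtracts from the IoU the fraction of the smallest enclosing box C that is covered by neither box, which is what lets it rank non-overlapping predictions by how close they are:

```python
def giou(box_a, box_b):
    """Generalized IoU for two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    # Smallest enclosing box C covering both boxes
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)

    # Penalize the empty space in C not covered by either box
    return inter / union - (c_area - union) / c_area

# Both pairs have IoU = 0, but the closer box gets the higher GIoU:
print(giou((0, 0, 10, 10), (11, 0, 21, 10)))  # ~ -0.048, a close miss
print(giou((0, 0, 10, 10), (50, 0, 60, 10)))  # ~ -0.667, far away
```

GIoU ranges over (-1, 1], so the corresponding loss 1 - GIoU stays non-negative.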

Objectness

Objectness quantifies how likely it is for an image window to contain an object of any class, such as cars and dogs, as opposed to backgrounds, such as grass and water.
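
As a rough illustration (the function and numbers below are hypothetical, but the combination itself follows YOLO-style detectors, where a box's final confidence is its objectness score multiplied by the per-class probability):

```python
def confidence_scores(objectness, class_probs):
    """Final per-class confidence = objectness x class probability."""
    return [objectness * p for p in class_probs]

# A window that almost certainly contains some object (objectness 0.9),
# most likely a dog; class order here is [car, dog, cat]:
print(confidence_scores(0.9, [0.05, 0.85, 0.10]))   # ~ [0.045, 0.765, 0.09]

# A background window (grass, water, ...) has low objectness, so every
# class confidence is suppressed regardless of the class probabilities:
print(confidence_scores(0.05, [0.05, 0.85, 0.10]))  # ~ [0.0025, 0.0425, 0.005]
```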

Classification

The classification metric reflects how well the model assigns the correct class to each detected object. In training charts it typically appears as a classification loss, which drops as the predicted class probabilities concentrate on the true class.
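
A minimal sketch of such a loss, assuming a YOLO-style one-vs-all binary cross-entropy over the class probabilities (the function name and clamping constant are my own):

```python
import math

def bce_classification_loss(class_probs, target_index):
    """One-vs-all binary cross-entropy over per-class probabilities."""
    loss = 0.0
    for i, p in enumerate(class_probs):
        y = 1.0 if i == target_index else 0.0  # one-hot target
        p = min(max(p, 1e-7), 1.0 - 1e-7)      # clamp for log stability
        loss -= y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return loss / len(class_probs)

# A confident, correct prediction yields a much smaller loss:
print(bce_classification_loss([0.05, 0.90, 0.05], target_index=1))  # ~0.07
print(bce_classification_loss([0.60, 0.20, 0.20], target_index=1))  # ~0.92
```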

Precision + Recall

Precision is the fraction of predicted boxes that actually correspond to a ground-truth object, TP / (TP + FP); recall is the fraction of ground-truth objects that the model manages to find, TP / (TP + FN). In object detection, a prediction usually counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.
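
In code, once detections have been matched to the ground truth (a minimal sketch; the counts below are made up):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from matched detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 8 correct detections, 2 spurious boxes, 4 missed objects:
print(precision_recall(tp=8, fp=2, fn=4))  # (0.8, 0.666...)
```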

mAP

Average precision (AP) summarizes the precision-recall curve of a single class as the area under it; mean average precision (mAP) averages AP over all classes. YOLOv5 reports mAP@0.5 (AP at an IoU threshold of 0.5) and mAP@0.5:0.95 (averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05).
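
Here is a sketch of all-point interpolation, one of the AP computations described in the Object-Detection-Metrics repository cited above (NumPy-based; it assumes the recall/precision arrays are ordered by descending detection confidence, so recall is non-decreasing):

```python
import numpy as np

def average_precision(recalls, precisions):
    """Area under the precision-recall curve (all-point interpolation)."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Make the precision envelope monotonically decreasing
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the rectangles where recall changes
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Two detections, both correct, covering half the ground-truth objects:
print(average_precision([0.25, 0.5], [1.0, 1.0]))  # 0.5
```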


