Understanding the Evaluation Metrics of Object Detection: GIoU, Objectness, Classification, Precision, Recall, mAP
I was using YOLOv5 on Supervisely for AI-assisted labeling and came across these metrics in the tutorial, where they are displayed as model results. I looked them up and collected the information here for a better understanding of them.
GIoU
To explain Generalized Intersection over Union (GIoU), we have to start with IoU. The computation of IoU is straightforward: it measures how much two bounding boxes overlap, as the area of their intersection divided by the area of their union. If our prediction is perfect, the two bounding boxes overlap completely and the IoU is 1.
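As a minimal sketch (not Supervisely's or YOLOv5's actual implementation), IoU for two axis-aligned boxes in `[x1, y1, x2, y2]` corner format can be computed like this:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes in [x1, y1, x2, y2] format."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to 0 so disjoint boxes get zero intersection, not a negative one.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

perfect = iou([0, 0, 2, 2], [0, 0, 2, 2])   # identical boxes: IoU = 1
partial = iou([0, 0, 2, 2], [1, 1, 3, 3])   # partial overlap: IoU = 1/7
```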
However, IoU cannot distinguish between predictions once they no longer intersect the ground truth; by "better" we mean a prediction that lies closer to the ground truth. In the third picture below, the closer blue bounding box is a better prediction than the ones far away, but they all share an IoU of 0 because there is no overlap.
Therefore, GIoU was developed to measure the quality of a prediction even when there is no intersection between the prediction and the ground truth: it subtracts from the IoU the fraction of the smallest enclosing box that neither box covers, so a closer disjoint prediction scores higher than a distant one. In this Supervisely case, the GIoU column actually reports the GIoU loss.
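A sketch of the idea (again not the exact training-code implementation): GIoU equals IoU minus the empty fraction of the smallest box enclosing both boxes, which lets it rank non-overlapping predictions.

```python
def giou(box_a, box_b):
    """Generalized IoU: like IoU, but penalized by the empty area of the
    smallest enclosing box, so disjoint boxes still get a useful signal."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # Smallest axis-aligned box enclosing both boxes.
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    enclose = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (enclose - union) / enclose

# Two disjoint predictions of the same ground truth [0, 0, 1, 1]:
# both have IoU 0, but the closer one gets the higher (less negative) GIoU.
near = giou([0, 0, 1, 1], [1.5, 0, 2.5, 1])  # GIoU = -0.2
far = giou([0, 0, 1, 1], [9, 0, 10, 1])      # GIoU = -0.8
```

GIoU lies in [-1, 1], and the GIoU loss used during training is 1 − GIoU, so a perfect prediction gives loss 0.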
Objectness
Objectness quantifies how likely an image window is to contain an object of any class (such as a car or a dog), as opposed to background (such as grass or water).
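At inference time, detectors typically use this score to discard windows that are probably background before any class is assigned. A toy sketch of that filtering step (the prediction dicts and threshold value here are illustrative, not YOLOv5's internal data structures):

```python
# Hypothetical raw predictions, each with an objectness score in [0, 1].
predictions = [
    {"box": [10, 10, 50, 50], "objectness": 0.92},  # likely a real object
    {"box": [0, 0, 640, 480], "objectness": 0.03},  # likely background
]

CONF_THRESHOLD = 0.25  # an assumed confidence cutoff

# Keep only windows that probably contain an object of some class.
kept = [p for p in predictions if p["objectness"] >= CONF_THRESHOLD]
```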
Classification
The classification metric measures how well the model assigns the correct class label to each detected object: if a box contains a dog, the class scores should concentrate on "dog" rather than on the other classes.
Precision + Recall
Precision is the fraction of predicted boxes that are correct (few false alarms means high precision), while recall is the fraction of ground-truth objects the model actually finds (few misses means high recall). The two trade off against each other as the confidence threshold changes.
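In terms of true positives (TP), false positives (FP), and false negatives (FN), the two definitions can be sketched as:

```python
def precision_recall(tp, fp, fn):
    """Precision: of all predictions, how many were right.
    Recall: of all ground-truth objects, how many were found."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# e.g. 8 correct detections, 2 false alarms, 4 missed objects
p, r = precision_recall(tp=8, fp=2, fn=4)  # p = 0.8, r = 2/3
```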
mAP
mAP (mean Average Precision) summarizes the precision-recall trade-off across confidence thresholds: for each class, the Average Precision (AP) is the area under its precision-recall curve, and mAP is the mean of these values over all classes. YOLOv5 reports mAP@0.5 (a detection counts as correct when its IoU with the ground truth is at least 0.5) and mAP@0.5:0.95 (averaged over IoU thresholds from 0.5 to 0.95).
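As a rough sketch of the computation (evaluation toolkits differ in interpolation details, so this is illustrative rather than the exact COCO/YOLOv5 procedure):

```python
def average_precision(recalls, precisions):
    """Area under the precision-recall curve (all-point interpolation),
    assuming recalls are sorted in increasing order."""
    # Pad the curve at both ends.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically non-increasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas wherever recall increases.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

# One AP per class (toy PR curves); mAP is their mean.
ap_per_class = [average_precision([0.5, 1.0], [1.0, 0.5]),   # AP = 0.75
                average_precision([0.5, 1.0], [1.0, 1.0])]   # AP = 1.0
m_ap = sum(ap_per_class) / len(ap_per_class)                 # mAP = 0.875
```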