I recently uploaded a new version of A Metric Learning Reality Check to arXiv. Here are the highlights:
People were asking for examples to back up my claims about unfair comparisons, so I compiled a list. See the screenshots below.
The typical metric learning paper presents a new loss function or training procedure, and then shows results on a few datasets, like CUB200, Stanford Cars, and Stanford Online Products. Every couple of months, we see the accuracy improve like clockwork.
Great, but there are a few caveats.
In order to claim that a new algorithm outperforms existing methods, it’s important to keep as many parameters constant as possible. That way, we can be certain that it was the new algorithm that…
Have you thought of using a metric learning approach in your deep learning application? If not, this is an approach you may find useful, especially if your deployed model will encounter unseen classes of data.
With the release of pytorch-metric-learning, it’s easier than ever to give metric learning a try!
Metric learning refers to the task of learning distances or dissimilarities over a set of observations. We want to find a function that returns a small distance for similar observations and a large distance for dissimilar ones.
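That objective can be sketched in a few lines of plain Python. This is a minimal illustration, not the library's implementation: the embeddings and the margin value below are made up, and the loss shown is a standard contrastive loss (pull similar pairs together, push dissimilar pairs at least a margin apart).

```python
import math

def euclidean_distance(x, y):
    # Distance between two embedding vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def contrastive_loss(x, y, same_class, margin=1.0):
    # A standard contrastive loss: similar pairs are penalized by
    # their squared distance; dissimilar pairs are penalized only
    # when they are closer than `margin`.
    d = euclidean_distance(x, y)
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Toy embeddings, made up for illustration:
anchor    = [0.1, 0.9]
similar   = [0.2, 0.8]
different = [0.9, 0.1]

# A well-trained embedding model keeps the anchor close to the
# similar example and far from the different one, so the similar
# pair incurs a small loss and the distant dissimilar pair none.
print(euclidean_distance(anchor, similar))
print(euclidean_distance(anchor, different))
```

In practice you would not write this by hand: pytorch-metric-learning provides ready-made loss functions (contrastive, triplet, and many others) that operate on batches of embeddings and labels.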
Ease of use