
Loss Functions in Machine Learning

A short introduction to common loss functions used in machine learning, including cross-entropy loss, L1 loss, L2 loss, and hinge loss, with practical details for PyTorch.

Cross Entropy

In PyTorch, nn.CrossEntropyLoss takes raw (unnormalized) scores for each class together with the index of the correct class:

import torch
from torch import nn

criterion = nn.CrossEntropyLoss()
# One sample with raw scores over 4 classes; the correct class is index 0
input = torch.tensor([[3.2, 1.3, 0.2, 0.8]], dtype=torch.float)
target = torch.tensor([0], dtype=torch.long)
criterion(input, target)
# Out: tensor(0.2547)
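
To see where that number comes from, the same value can be reproduced by hand: softmax turns the raw scores into probabilities, and the cross entropy is the negative log of the probability assigned to the correct class. A minimal sketch using torch.nn.functional:

import torch
import torch.nn.functional as F

input = torch.tensor([[3.2, 1.3, 0.2, 0.8]], dtype=torch.float)
target = torch.tensor([0], dtype=torch.long)

probs = F.softmax(input, dim=1)         # roughly [[0.775, 0.116, 0.039, 0.070]]
loss = -torch.log(probs[0, target[0]])  # tensor(0.2547), matching nn.CrossEntropyLoss

As rough rules of thumb for interpreting the cross entropy of a model's predicted probabilities: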
  • Cross-Entropy = 0.00: Perfect probabilities.
  • Cross-Entropy < 0.02: Great probabilities.
  • Cross-Entropy < 0.05: On the right track.
  • Cross-Entropy < 0.20: Fine.
  • Cross-Entropy > 0.30: Not great.
  • Cross-Entropy > 1.00: Terrible.
  • Cross-Entropy > 2.00: Something is broken.

Root Mean Squared Error and others
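
L2 loss (mean squared error) penalizes the squared difference between prediction and target, root mean squared error (RMSE) is simply its square root, and L1 loss penalizes the absolute difference. A minimal PyTorch sketch, with made-up values purely for illustration:

import torch
from torch import nn

pred = torch.tensor([2.5, 0.0, 2.0, 8.0])
target = torch.tensor([3.0, -0.5, 2.0, 7.0])

mse = nn.MSELoss()(pred, target)   # L2 loss: mean of squared errors
rmse = torch.sqrt(mse)             # RMSE: square root of the MSE
mae = nn.L1Loss()(pred, target)    # L1 loss: mean of absolute errors

RMSE is reported in the same units as the target, which is one reason it is often preferred over plain MSE for reporting, while L1 loss is less sensitive to outliers because errors grow linearly rather than quadratically.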

Hinge loss

  • If an instance is classified correctly and with a sufficient margin (distance > 1), the loss is 0
  • If an instance is classified correctly but lies very close to the margin (0 < distance < 1), the loss is small but positive
  • If an instance is misclassified, the loss is positive and proportional to how far the instance is on the wrong side of the margin
From Wikipedia: for a raw prediction y and a true label t in {-1, +1}, the hinge loss is max(0, 1 - t·y).
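
This definition is a one-liner in PyTorch; a minimal sketch, with made-up scores and labels purely for illustration:

import torch

# Made-up raw classifier scores y and true labels t in {-1, +1}
scores = torch.tensor([2.3, 0.4, -1.7])
labels = torch.tensor([1.0, 1.0, 1.0])

losses = torch.clamp(1 - labels * scores, min=0)  # max(0, 1 - t*y) per instance
# 2.3:  correct with margin > 1   -> loss 0.0
# 0.4:  correct but inside margin -> loss 0.6
# -1.7: misclassified             -> loss 2.7
mean_loss = losses.mean()

The three scores correspond to the three cases in the list above: zero loss beyond the margin, a small loss inside it, and a loss that keeps growing as a misclassified point moves further from the decision boundary.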

Notes

  1. The cross-entropy part of this post draws heavily on: https://towardsdatascience.com/cross-entropy-loss-function-f38c4ec8643e
  2. The hinge-loss part uses https://towardsdatascience.com/a-definitive-explanation-to-hinge-loss-for-support-vector-machines-ab6d8d3178f1 as a reference
