Most Common Loss Function In Deep Learning

Fraidoon Omarzai
2 min read · Jul 23, 2024


In deep learning, loss functions are crucial for training models: they measure the difference between the predicted output and the target values, producing the scalar that gradient descent minimizes.

Regression Loss Functions

1. Mean Absolute Error (MAE)

  • Measures the average absolute difference between actual and predicted values.
  • It’s more robust to outliers than MSE.
  • However, it does not penalize large errors as heavily.
  • Also known as L1 Loss.
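A minimal NumPy sketch of MAE (the function name and sample values are illustrative, not from the original):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: mean of |y_true - y_pred|."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred))

print(mae([3.0, 5.0, 2.0], [2.5, 5.0, 4.0]))  # ≈ 0.8333
```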

2. Mean Squared Error (MSE)

  • Measures the average squared difference between actual and predicted values.
  • It’s widely used for regression tasks.
  • It’s good for minimizing overall error, but squaring the residuals makes it sensitive to outliers.
  • Also known as L2 Loss.
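A corresponding NumPy sketch of MSE (names and sample values are illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: mean of (y_true - y_pred)^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# The squared term makes the single 2.0 error dominate the average.
print(mse([3.0, 5.0, 2.0], [2.5, 5.0, 4.0]))  # ≈ 1.4167
```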

3. Huber Loss

  • Combines the characteristics of MSE and MAE: quadratic for small errors, linear for large ones.
  • It’s less sensitive to outliers than MSE.
  • δ is the delta parameter, the threshold at which the loss switches from its quadratic component to its linear component.
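A NumPy sketch of Huber loss illustrating the switch at δ (names and values are illustrative):

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: 0.5*e^2 when |e| <= delta, else delta*(|e| - 0.5*delta)."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    quadratic = 0.5 * err ** 2                     # MSE-like branch for small errors
    linear = delta * (np.abs(err) - 0.5 * delta)   # MAE-like branch for large errors
    return np.mean(np.where(np.abs(err) <= delta, quadratic, linear))

# The error of 2.0 exceeds delta=1.0, so it is penalized linearly, not quadratically.
print(huber([3.0, 5.0, 2.0], [2.5, 5.0, 4.0], delta=1.0))  # ≈ 0.5417
```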

Classification Loss Functions

1. Binary Cross-Entropy Loss (Log Loss)

  • Used for binary classification tasks.
  • It measures the performance of a classification model whose output is a probability value between 0 and 1.
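A NumPy sketch of binary cross-entropy; the clipping with a small epsilon (an implementation detail added here, not from the original) guards against log(0):

```python
import numpy as np

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """BCE: -mean(y*log(p) + (1-y)*log(1-p)) over all samples."""
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y_true, dtype=float)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Confident, correct predictions give a low loss.
print(binary_cross_entropy([1, 0], [0.9, 0.1]))  # ≈ 0.1054
```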

2. Categorical Cross-Entropy Loss

  • Used for multi-class classification tasks.
  • It generalizes binary cross-entropy to multiple classes.
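A NumPy sketch with one-hot targets, where each row of the prediction matrix is a probability distribution over classes (the shapes and sample values are illustrative):

```python
import numpy as np

def categorical_cross_entropy(y_onehot, y_prob, eps=1e-12):
    """CCE: -mean over samples of sum_k y_k * log(p_k)."""
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1.0)
    y = np.asarray(y_onehot, dtype=float)
    return -np.mean(np.sum(y * np.log(p), axis=1))

y_true = [[1, 0, 0], [0, 1, 0]]                  # one-hot targets
y_pred = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]      # softmax-style outputs
print(categorical_cross_entropy(y_true, y_pred))  # ≈ 0.2899
```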

3. Sparse Categorical Cross-Entropy Loss

  • Computes the same quantity as categorical cross-entropy, but is used when target labels are integer class indices rather than one-hot encoded vectors.
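A NumPy sketch of the sparse variant: instead of multiplying by a one-hot vector, it indexes the predicted probability of each sample's integer label directly (names and values are illustrative):

```python
import numpy as np

def sparse_categorical_cross_entropy(labels, y_prob, eps=1e-12):
    """Sparse CCE: -mean(log p[i, labels[i]]) for integer class labels."""
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1.0)
    labels = np.asarray(labels, dtype=int)
    rows = np.arange(len(labels))
    return -np.mean(np.log(p[rows, labels]))

# Integer labels [0, 1] pick out p=0.7 and p=0.8 for the two samples.
y_pred = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
print(sparse_categorical_cross_entropy([0, 1], y_pred))  # ≈ 0.2899
```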


Fraidoon Omarzai

AI Enthusiast | Pursuing MSc in AI at Aston University, Birmingham