MAE and RMSE — Which Metric is Better?

Mean Absolute Error versus Root Mean Squared Error

Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are two of the most common metrics used to measure accuracy for continuous variables. Not sure if I'm imagining it, but I think there used to be a time when far more published results reported MAE. The publications I come across now mostly use either RMSE or some version of R-squared.

Is RMSE actually better in most cases? When would it be better to use MAE? I wanted to dig into these two questions because I find myself using RMSE often, mostly since it's the default metric in the modeling tools I use.

Definitions

Mean Absolute Error (MAE): MAE measures the average magnitude of the errors in a set of predictions, without considering their direction. It's the average, over the test sample, of the absolute differences between prediction and actual observation, where all individual differences have equal weight.
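
Written out, the definition above corresponds to the following, where y_i is the observed value, ŷ_i the prediction, and n the number of test samples (the notation here is mine, chosen for convenience):

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\,y_i - \hat{y}_i\,\right|$$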

If the absolute value is not taken (the signs of the errors are not removed), the average error becomes the Mean Bias Error (MBE) and is usually intended to measure average model bias. MBE can convey useful information, but should be interpreted cautiously because positive and negative errors will cancel out.
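
Dropping the absolute value gives the following (same notation as above; note the sign convention, prediction minus observation or the reverse, varies between authors):

$$\mathrm{MBE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)$$

A quick example of the cancellation: two errors of +5 and −5 give an MBE of 0, even though the MAE is 5.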

Root Mean Squared Error (RMSE): RMSE is a quadratic scoring rule that also measures the average magnitude of the error. It's the square root of the average of squared differences between prediction and actual observation.
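
In the same notation:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$

All three metrics are a few lines of NumPy. Here's a minimal sketch on a toy example (the data values are invented purely for illustration):

```python
import numpy as np

# Toy data, invented for illustration: actual observations and model predictions.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

errors = y_pred - y_true  # signed errors (here: prediction minus observation)

mae = np.mean(np.abs(errors))         # Mean Absolute Error
mbe = np.mean(errors)                 # Mean Bias Error: signs kept, so they can cancel
rmse = np.sqrt(np.mean(errors ** 2))  # Root Mean Squared Error

print(f"MAE:  {mae:.3f}")   # 0.500
print(f"MBE:  {mbe:.3f}")   # 0.250
print(f"RMSE: {rmse:.3f}")  # 0.612
```

Notice that RMSE comes out larger than MAE here. Because errors are squared before averaging, the single largest error (1.0) carries extra weight, which previews the practical difference between the two metrics.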
