Metrics to Evaluate your Machine Learning Algorithm: Accuracy, Precision, Recall, Specificity, and F1.
This article discusses the most common evaluation metrics for classification models: Accuracy, Precision, Recall, Specificity, and the F1 Score, illustrated with a classification problem from the fintech space.
--
This article is part of a series that walks step by step through solving fintech problems with different Machine Learning techniques, using the “All lending club loan” dataset. Taken together, the series is a complete end-to-end data science project for beginners.
We walked through the confusion matrix in the previous article, and I suggest you start there.
If you are preparing for an interview, this article will help you answer the following questions:
- What are precision and recall?
- What error metric would you use to evaluate how good a binary classifier is?
- How do you find the accuracy of a confusion matrix?
- What is the F1 score, and how is it derived from the confusion matrix?
- What does 1% accuracy mean?
- What is the difference between precision and accuracy?
- In what cases should you not use accuracy as the main metric?
- What is Specificity?
- When should you use precision, and when recall?
- What is negative predictive value?
The confusion matrix helps us visualize where the model is “mistaken” when distinguishing between two classes. It is a 2x2 matrix: the rows are the actual labels from the test set, and the columns are the labels predicted by the model.
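Under that row/column convention, the four cells of the matrix and every metric listed above can be computed directly from paired actual/predicted labels. Here is a minimal pure-Python sketch; the label vectors are invented purely for illustration (1 = the positive class, e.g. a defaulted loan, 0 = the negative class):

```python
# Hypothetical example labels (not from the dataset)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]  # actuals from the test set
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]  # model predictions

# The four cells of the 2x2 confusion matrix
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)          # share of all correct predictions
precision = tp / (tp + fp)                          # of predicted positives, how many were real
recall = tp / (tp + fn)                             # of real positives, how many were found
specificity = tn / (tn + fp)                        # of real negatives, how many were found
npv = tn / (tn + fn)                                # negative predictive value
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
```

If you use scikit-learn, `sklearn.metrics.confusion_matrix(y_true, y_pred)` follows the same convention: rows are actuals, columns are predictions.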