Essential Concepts You Should Understand Related to F1 Score, Accuracy, Precision & Recall

Amjad El Baba
3 min read · Sep 26, 2021


Why Should You Take These 4 Metrics Into Consideration?

As with every project or code sample you work on, what really matters is how accurate it is and how well it performs in real-world practice. So today we will go through these main points to scale up our knowledge and draw a better picture of the concept.

First, let's take a look at the basis for evaluating these metrics: the confusion matrix.

Confusion Matrix

(Confusion matrix diagram via towardsdatascience.com)

A confusion matrix is mainly used to show and evaluate the performance of a classification model on a dataset for which the actual labels are known.

  • TP: true positive, where the predicted output is the same as the actual one (both are positive).
  • FP: false positive, where the predicted output is positive while the actual is negative.
  • FN: false negative, where the predicted output is negative while the actual is positive.
  • TN: true negative, where the predicted output is the same as the actual one (both are negative).
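
To make these four counts concrete, here is a minimal Python sketch that tallies TP, FP, FN, and TN for a binary classifier; the y_true and y_pred labels are hypothetical values chosen purely for illustration.

```python
# A minimal sketch: tally confusion-matrix counts for a binary classifier.
# y_true holds the actual labels, y_pred the model's predictions (1 = positive).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical actual labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical predicted labels

TP = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
FP = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
FN = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
TN = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

print(TP, FP, FN, TN)  # 3 1 1 3
```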

F1 Score, Accuracy, Precision & Recall

Accuracy

Accuracy is the measure of how well our model performs overall. Its equation is simple: the ratio of correctly predicted observations to the total number of observations.

A high accuracy value goes a long way toward showing that our model works well, but only on symmetric datasets, where the FP and FN counts are similar. If our dataset isn't of that type, other metrics should be taken into consideration in the evaluation process.

Accuracy = (TP + TN) / (TP + FP + FN + TN)
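
As a quick sanity check, here is the formula in Python, reusing the hypothetical counts from the confusion-matrix sketch above.

```python
# Hypothetical confusion-matrix counts from the sketch above
TP, FP, FN, TN = 3, 1, 1, 3

accuracy = (TP + TN) / (TP + FP + FN + TN)
print(accuracy)  # 0.75, i.e., 6 of 8 predictions were correct
```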

Precision

Precision is the ratio of true positive predictions to the total number of predicted positive observations, i.e., how many of the predicted positives are actually positive.

Precision = TP / (TP + FP)
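
Continuing with the same hypothetical counts, the formula translates directly:

```python
TP, FP = 3, 1  # hypothetical counts from the sketch above

precision = TP / (TP + FP)
print(precision)  # 0.75, i.e., 3 of 4 positive predictions were correct
```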

Recall

Recall is the one of these 4 metrics that shows the number of correct positive predictions out of all the positive predictions that could have been made, i.e., out of all actual positives.

For a worked example that makes this concept concrete, see the precision and recall article on en.wikipedia.org.

Recall = TP / (TP + FN)
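
Again with the same hypothetical counts:

```python
TP, FN = 3, 1  # hypothetical counts from the sketch above

recall = TP / (TP + FN)
print(recall)  # 0.75, i.e., 3 of 4 actual positives were found
```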

F1 score

F1 Score is the harmonic mean of Precision and Recall, so it takes both FP and FN into consideration. An F1 score is considered perfect when it's 1, while the model is a total failure when it's 0. In some cases, F1 is preferable to and more useful than accuracy, especially if you have an uneven or biased class distribution. When false positives and false negatives have similar costs, accuracy performs well; if their costs are very different, it's better to look at both Precision and Recall.

F1 Score = 2 * (Recall * Precision) / (Recall + Precision)
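
Putting it all together with the same hypothetical counts, a short sketch computing F1 from Precision and Recall:

```python
TP, FP, FN = 3, 1, 1  # hypothetical counts from the sketch above

precision = TP / (TP + FP)  # 0.75
recall = TP / (TP + FN)     # 0.75
f1 = 2 * (recall * precision) / (recall + precision)
print(f1)  # 0.75
```

Because F1 is a harmonic mean, it drops sharply when either Precision or Recall is low, which is exactly why it is preferred over accuracy on imbalanced datasets.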

To explore more:

Accuracy, Precision, Recall & F1 Score: Interpretation of Performance Measures.

Thanks for your time and let’s boost our knowledge!
