Confusion Matrix is NOT Confusing!
So you want to be a data scientist, and you've landed an interview. During the interview, you are asked to evaluate this confusion matrix from diabetes data. What would you say?
Let’s go step by step while looking at the GIF below, which I made. Our predictions can be Negative or Positive, while the correctness of each prediction can be True or False.
- Replace the predictions (our second value) with:
0 => Negative, 1 => Positive
- If the patient is not diabetic and we predicted that the patient is not diabetic (N), we made a TRUE prediction. (TN)
- If the patient is diabetic and we predicted that the patient is diabetic (P), again we made a TRUE prediction. (TP) Now only the False cases are left.
- If the patient is diabetic and we predicted that the patient is not diabetic (N), we made a FALSE prediction. (FN)
- If the patient is not diabetic and we predicted that the patient is diabetic (P), again we made a FALSE prediction. (FP)
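The four cases above can be sketched in plain Python. The tiny `actual`/`predicted` lists here are made-up example data, not the article's diabetes dataset:

```python
# 0 = not diabetic (Negative), 1 = diabetic (Positive).
# Hypothetical labels, just to illustrate the four cases.
actual    = [0, 1, 1, 0, 1, 0]
predicted = [0, 1, 0, 1, 1, 0]

# Count each cell of the confusion matrix by comparing pairs.
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives

print(tn, tp, fn, fp)  # → 2 2 1 1
```

In practice you would get the same counts from `sklearn.metrics.confusion_matrix`, but spelling out the comparisons makes the TN/TP/FN/FP logic explicit.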
Now let’s check out our NOT confusing confusion matrix again!
True predictions : 77 TN, 36 TP
False predictions: 14 FN, 27 FP
Calculating evaluation metrics:
Accuracy: (77 + 36) / (77 + 36 + 14 + 27) = 113 / 154 ≈ 0.7338
Precision: 36 / (36 + 27) ≈ 0.5714
Recall: 36 / (36 + 14) = 0.72
F1 Score: 2 × (Precision × Recall) / (Precision + Recall) ≈ 0.64
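The calculations above can be reproduced with a few lines of Python, using the four counts from our confusion matrix:

```python
# Counts from the confusion matrix: TN, TP, FN, FP.
tn, tp, fn, fp = 77, 36, 14, 27

accuracy  = (tp + tn) / (tp + tn + fp + fn)        # 113 / 154
precision = tp / (tp + fp)                         # 36 / 63
recall    = tp / (tp + fn)                         # 36 / 50
f1 = 2 * precision * recall / (precision + recall) # harmonic mean

print(round(accuracy, 4), round(precision, 4), round(recall, 2), round(f1, 2))
# → 0.7338 0.5714 0.72 0.64
```

Note that F1 is the harmonic mean of precision and recall, so it sits closer to the lower of the two (here, precision).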
That’s all!
Thanks for reading!
Please don’t forget to comment if you have any feedback. :)