Confusion Matrix explained with an example!

Rahul Reddy · Published in the Data World · 3 min read · Apr 7, 2020

A confusion matrix is usually a 2 × 2 table used to evaluate the performance of a classification model on test data for which the final outcome / class is already known. The same idea extends to multiple classes.

The rows of the confusion matrix represent the predicted class and the columns represent the actual class. The first column represents the Null Hypothesis: say, the blood sample does not contain the virus, or the virus does not belong to class X. The second column represents the Alternate Hypothesis, the negation of the Null Hypothesis: say, the blood sample contains the virus, or the virus belongs to class X.
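
As a minimal sketch of that layout, here is one way to build such a matrix in Python (the function name, the NumPy dependency, and the 0/1 class encoding are illustrative assumptions, not anything from the original article):

```python
import numpy as np

def confusion_matrix_2x2(predicted, actual):
    """2 x 2 confusion matrix with rows = predicted class and columns =
    actual class. Class 0 is the Null Hypothesis, class 1 the Alternate."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    tn = np.sum((predicted == 0) & (actual == 0))  # True Negative
    fn = np.sum((predicted == 0) & (actual == 1))  # False Negative
    fp = np.sum((predicted == 1) & (actual == 0))  # False Positive
    tp = np.sum((predicted == 1) & (actual == 1))  # True Positive
    return np.array([[tn, fn],
                     [fp, tp]])
```

Note that this row/column convention is not universal: scikit-learn’s confusion_matrix, for instance, puts the actual classes on the rows by default, i.e. the transpose of the layout used here.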

Let’s work through an example.

Suppose we have built a new facial-recognition model for airport security screening, used to check for a few suspects who are on the run. The model has to correctly identify the suspects and alert the officials. After the model had been deployed to the public for a few months, we had some results. Let’s analyze them now.

A total of 1000 people were screened by this model, and it correctly classified a non-suspect as a non-suspect 750 times. Here the Null Hypothesis is that an individual is not a suspect, so in these cases the model is accepting a Null Hypothesis that is actually True. This is called a True Negative and occupies the first position in the confusion matrix.

Now that we have seen the model identify the non-suspects, let’s look at how well it classifies the suspects.

The model successfully classified 150 suspects as suspects. This is called a True Positive and occupies the last position in the confusion matrix. In all these cases the model rejected the Null Hypothesis when the Null Hypothesis was actually False.

Now, let us look at the remaining two cases. The model classified 89 individuals as suspects who were actually not on the suspects’ list. This case is similar to a false alarm: we are rejecting a Null Hypothesis that is actually True. These are called False Positives, also termed Type-I errors.

The final case is the most important in this scenario, because we would not want the suspects to escape. Here the model classified 11 suspects as non-suspects, which means the classifier missed its targets; you can think of it as a faulty alarm that failed to ring at the time of a calamity. The model accepted a Null Hypothesis that was actually False. This is termed a False Negative, or a Type-II error.
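
Putting the four cases together, here is a hypothetical reconstruction of the screening results in code (the label arrays below are fabricated purely to match the counts in the story):

```python
import numpy as np

# 0 = non-suspect (Null Hypothesis), 1 = suspect (Alternate Hypothesis).
# The arrays are ordered for readability; order does not affect the counts.
actual    = np.array([0] * 750 + [1] * 150 + [0] * 89 + [1] * 11)
predicted = np.array([0] * 750 + [1] * 150 + [1] * 89 + [0] * 11)

tn = np.sum((predicted == 0) & (actual == 0))  # 750 non-suspects cleared
tp = np.sum((predicted == 1) & (actual == 1))  # 150 suspects caught
fp = np.sum((predicted == 1) & (actual == 0))  #  89 false alarms (Type I)
fn = np.sum((predicted == 0) & (actual == 1))  #  11 missed suspects (Type II)

# Rows = predicted, columns = actual, matching the convention above.
confusion = np.array([[tn, fn],
                      [fp, tp]])
print(confusion)        # [[750  11]
                        #  [ 89 150]]
print(confusion.sum())  # 1000 people screened in total
```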

While building a model, we must always check for these Type-I and Type-II errors: depending on the context of the problem, they can be more critical than the model’s headline accuracy.
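
To make that concrete with the numbers from the example (a rough sketch; which error rate matters more depends entirely on the application):

```python
tn, fn, fp, tp = 750, 11, 89, 150  # counts from the example above

accuracy = (tp + tn) / (tp + tn + fp + fn)  # 900 / 1000 = 0.90

# A headline accuracy of 90% hides the two error rates that matter here:
type_i_rate  = fp / (fp + tn)  # 89 / 839 ~ 0.106 of non-suspects falsely flagged
type_ii_rate = fn / (fn + tp)  # 11 / 161 ~ 0.068 of suspects missed

print(f"accuracy={accuracy:.2f}, "
      f"false-alarm rate={type_i_rate:.3f}, miss rate={type_ii_rate:.3f}")
```

In this screening scenario, the miss rate (Type-II) is the one we would want to drive down first, even at the cost of a few more false alarms.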

Originally published at http://thedataresearch.wordpress.com on April 7, 2020.
