Removing Confusion from Confusion Matrix

Harshal Kothawade · Analytics Vidhya · May 27, 2021

Introduction

Confusion Matrix, as the word indicates, can be genuinely confusing (and maybe that is why it is named that way). So in this article, I will try to remove your confusion about the Confusion Matrix with some basic terminology and examples. So let's start!

Before we move on to interpreting the Confusion Matrix, let's understand the why and the what of it.

Why confusion matrix?

The Confusion Matrix is used to analyse the performance of classification models (classification problems are the ones where the target variable is categorical). It gives you an idea about the performance on each and every individual class.

What is the confusion matrix?

The confusion matrix combines the actual results and the predicted results in a way that shows what the model got right and where it went wrong.

In the rows we have the actual classes and in the columns we have the predicted classes. In general, the confusion matrix is a square matrix, i.e., the number of columns always equals the number of rows. If not, your predictions are either missing an existing class or adding some new ones.

Let us take a simple example of a target variable having two classes, Class 0 (Negative class) and Class 1 (Positive class). Please note that sometimes your class 0 might be the Positive class and class 1 the Negative one.
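To make this concrete, here is a minimal, hypothetical set of actual and predicted labels and the 2×2 layout they produce (the values are my own illustration, not from the original article):

actual    = [0, 0, 0, 1, 1, 1]
predicted = [0, 1, 0, 0, 1, 1]

# Resulting confusion matrix (rows = actual, columns = predicted):
#
#              Predicted 0   Predicted 1
#   Actual 0        2             1
#   Actual 1        1             2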

How to interpret the values?

For the 2×2 case, the confusion matrix has four possible values.

  • True Negative (TN)
  • False Positive (FP)
  • False Negative (FN)
  • True Positive (TP)

We will go through all of these one by one. But for now, just split each term into separate words based on position. So in the first position we have the word TRUE or FALSE, and in the second position POSITIVE or NEGATIVE. We will analyse these terms separately:

  • TRUE / FALSE: We have two values here, the predicted value and the actual value. If both values are the same, put TRUE in the first position; otherwise put FALSE. So in simple terms, if actual and predicted are the same then TRUE, else FALSE → (Actual = Predicted then True Else False)
  • POSITIVE / NEGATIVE: If the predicted class is 0 then it is NEGATIVE, and if it is class 1 then it is POSITIVE. So if the predicted value is 0 then NEGATIVE, else POSITIVE → (Predicted = 0 then Negative Else Positive). Note that this holds only if you have exactly two classes, 0 and 1, with class 0 as Negative and class 1 as Positive, as assumed earlier. Both rules are shown as code in the sketch below.
Now, taking each distinct combination of actual and predicted values from the example above, we will see each case separately:

True Negative (TN)

  • Actual Value: 0
  • Predicted Value: 0
  • Actual and Predicted are the same, so TRUE
  • Predicted is class 0, so NEGATIVE
  • Final Answer: True Negative

False Positive (FP)

  • Actual Value: 0
  • Predicted Value: 1
  • Actual and Predicted are not the same, so FALSE
  • Predicted is class 1, so POSITIVE
  • Final Answer: False Positive

False Negative (FN)

  • Actual Value: 1
  • Predicted Value: 0
  • Actual and Predicted are not the same, so FALSE
  • Predicted is class 0, so NEGATIVE
  • Final Answer: False Negative

True Positive (TP)

  • Actual Value: 1
  • Predicted Value: 1
  • Actual and Predicted are the same, so TRUE
  • Predicted is class 1, so POSITIVE
  • Final Answer: True Positive
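Putting the four cases together, here is a short sketch that tallies TN, FP, FN, and TP for the hypothetical example labels used earlier:

actual    = [0, 0, 0, 1, 1, 1]
predicted = [0, 1, 0, 0, 1, 1]

pairs = list(zip(actual, predicted))
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # True Negative
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # False Positive
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # False Negative
tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # True Positive
print(tn, fp, fn, tp)  # 2 1 1 2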

Different Ratios to remember

Accuracy

  • Accuracy gives you an idea of how many observations you are able to predict correctly out of the total observations.
  • Formula → Accuracy = (TP + TN) / (TP + FP + FN + TN)
  • Useful when classes are balanced; on imbalanced data it can be misleading (see the quick check below)
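As a quick check, the accuracy for the hypothetical counts tallied above:

tn, fp, fn, tp = 2, 1, 1, 2  # hypothetical counts from the example above
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(accuracy)  # 4 / 6 ≈ 0.67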

Precision

  • Precision gives you an idea of how many of the observations predicted as class 1 (the positive class) are actually class 1.
  • Formula → Precision = TP / (TP + FP)

Recall

  • Recall gives you an idea of how many of the observations that are actually class 1 (the positive class) are correctly predicted as class 1.
  • Formula → Recall = TP / (TP + FN)

Specificity

  • Specificity gives you an idea of how many of the observations that are actually class 0 (the negative class) are correctly predicted as class 0.
  • Formula → Specificity = TN / (TN + FP)

Sensitivity

  • Sensitivity is the same as Recall. It gives you an idea of how many of the observations that are actually class 1 (the positive class) are correctly predicted as class 1 (all of these ratios are computed in the sketch below).
  • Formula → Sensitivity = TP / (TP + FN)
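A minimal sketch computing the remaining ratios from the same hypothetical counts (note that sensitivity and recall come out identical by definition):

tn, fp, fn, tp = 2, 1, 1, 2  # hypothetical counts from the example above

precision   = tp / (tp + fp)  # of predicted positives, how many are right
recall      = tp / (tp + fn)  # of actual positives, how many were found
specificity = tn / (tn + fp)  # of actual negatives, how many were found
sensitivity = recall          # sensitivity is just another name for recall

print(precision, recall, specificity)  # all 2/3 ≈ 0.67 in this tiny example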

Sample Code

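The original code screenshot is not reproduced here, so the following is a representative sketch using scikit-learn's confusion_matrix and classification_report on the hypothetical labels from earlier:

from sklearn.metrics import classification_report, confusion_matrix

# Hypothetical example data: actual vs. predicted class labels
actual    = [0, 0, 0, 1, 1, 1]
predicted = [0, 1, 0, 0, 1, 1]

cm = confusion_matrix(actual, predicted)
print(cm)
# [[2 1]
#  [1 2]]

# For binary 0/1 labels, ravel() returns the counts as TN, FP, FN, TP
tn, fp, fn, tp = cm.ravel()
print(tn, fp, fn, tp)  # 2 1 1 2

# Per-class precision, recall, and F1 in one call
print(classification_report(actual, predicted))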

Conclusion

The confusion matrix, which seems genuinely confusing on the first attempt, is not that confusing at all. The same logic can be extended to higher-dimensional confusion matrices as well. Hopefully this article will help you avoid the confusion in the future.
