Confusion Matrix — Case Study

Prathamesh Mistry · Published in Analytics Vidhya · Jun 6, 2021

This article will help you get past the confusion created by the confusion matrix in a classification problem.

Nowadays machine learning and AI are used in almost every domain of the industry. Evaluating our models is as important as building them. Let’s try to understand the confusion matrix, an evaluation method, and its attributes from the perspective of a cybersecurity risk analyst.

Consider a scenario in which we have to detect anonymous systems hitting our private servers. We have placed a machine learning system between the firewall and the server. In other words, we are using machine learning to detect malicious systems that manage to pass through the firewall(s): we must classify each system passing through the firewall as malicious or non-malicious.

CASE SCENARIO

Thus, the question in this scenario is whether an anonymous, potentially malicious system has managed to breach the server.

Let’s put up a null hypothesis:

The hit on the server is non-malicious (no breach has occurred).

For each hit, we now have to determine whether this hypothesis is to be rejected or whether we fail to reject it.

Suppose there are 10,000 total hits on the server in a day, including hits from employees across the various departments of our company as well as from malicious users trying to steal our data or hack into our systems.

Our machine learning model classified the hits as follows:

Total hits to the server: 10,000

  • Hits classified as Non-Malicious: 9,465
  • Hits classified as Malicious: 535

After further investigation into the matter, we found that:

  • Out of those 9,465 hits which were classified as non-malicious, 8,809 of them were actually non-malicious.
  • Out of those 535 hits which were classified as malicious, 437 of them were actually malicious.

This is known as the ground truth.
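Before moving on, it is worth checking the arithmetic. Here is a minimal Python sketch (the variable names are mine, not the article’s) that recovers the two remaining cell counts from the numbers above; the four quantities TP, FP, FN, and TN are defined formally below:

```python
# Deriving the four confusion-matrix cells from the counts above.
# Convention: malicious = positive class (1), non-malicious = negative class (0).
total_hits = 10_000

pred_malicious = 535          # hits the model classified as class 1
pred_non_malicious = 9_465    # hits the model classified as class 0

tp = 437     # classified 1 and actually 1 (true positives)
tn = 8_809   # classified 0 and actually 0 (true negatives)

fp = pred_malicious - tp      # classified 1 but actually 0 -> 98
fn = pred_non_malicious - tn  # classified 0 but actually 1 -> 656

assert tp + fp + fn + tn == total_hits
print(f"TP={tp}  FP={fp}  FN={fn}  TN={tn}")
```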

Now we will be evaluating our model’s predictions with respect to the ground truth. Our machine learning model is performing binary classification, and we could use a number of metrics and plots to gather insights into its performance. One of these evaluation tools is the confusion matrix.

A confusion matrix is a table that allows visualization of the performance of a classification algorithm. Let’s look at what one looks like:

CONFUSION MATRIX

For binary classification it’s a 2x2 grid. The columns represent the classes of the ground truth, whereas the rows represent the classes of the predictions. There are four cells: TP, FP, FN, and TN.
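If you build models in Python, a matrix like this is usually produced with scikit-learn rather than by hand. Here is a minimal sketch with made-up toy labels (not the article’s data); note that scikit-learn’s documented layout is the transpose of the one described above, with ground truth on the rows and predictions on the columns:

```python
# Minimal confusion-matrix example with scikit-learn (toy data).
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 1]  # ground truth: 1 = malicious, 0 = non-malicious
y_pred = [0, 1, 1, 0, 0, 1, 0, 1]  # model predictions

# scikit-learn convention: rows = ground truth, columns = predictions.
cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()  # documented unpacking order for binary labels [0, 1]
print(cm)
print(f"TN={tn}  FP={fp}  FN={fn}  TP={tp}")
```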

Let’s relate this table to our application…

Confusion matrix for our model (rows are the model’s predictions, columns are the ground truth):

                                 Actually malicious (1)   Actually non-malicious (0)
    Predicted malicious (1)            TP = 437                   FP = 98
    Predicted non-malicious (0)        FN = 656                   TN = 8,809

Let’s dig into this matrix…

  • Out of 10,000 records, our model predicted 535 records as malicious (class 1), out of which 437 were actually malicious (class 1).
  • Out of 10,000 records, our model predicted 9,465 records as non-malicious (class 0), out of which 8,809 were actually non-malicious (class 0).

TP — True Positives

True Positives (TP) is the total number of records that were correctly classified as malicious (class 1): 437 in our case.

TN — True Negatives

True Negatives (TN) is the total number of records that were correctly classified as non-malicious (class 0): 8,809 in our case.

FP — False Positives

False Positives (FP) is the total number of records that were actually non-malicious (class 0) but were classified as malicious (class 1) by the model: 98 in our case.

This is known as a Type I error:

A Type I error is the rejection of a true null hypothesis. Here, it means we reject the hypothesis “the hit is non-malicious” for a hit that really was non-malicious, raising a false alarm.

FN — False Negatives

False Negatives (FN) is the total number of records that were actually malicious (class 1) but were classified as non-malicious (class 0) by the model: 656 in our case.

This is known as a Type II error:

A Type II error is the failure to reject a false null hypothesis. Here, it means we fail to reject the hypothesis “the hit is non-malicious” for a hit that was actually malicious, letting a breach go undetected.
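Putting the two error types side by side with the counts derived earlier, a short sketch shows how often each one actually occurs for our model (the rate formulas are standard statistics, not from the article):

```python
# Type I and Type II error counts and rates for our model.
tp, fp, fn, tn = 437, 98, 656, 8_809

type_1_errors = fp  # non-malicious hits flagged as malicious (false alarms)
type_2_errors = fn  # malicious hits that slipped through (misses)

type_1_rate = fp / (fp + tn)  # share of truly non-malicious hits flagged: ~1.1%
type_2_rate = fn / (fn + tp)  # share of truly malicious hits missed: ~60%

print(f"Type I:  {type_1_errors} errors, rate {type_1_rate:.3f}")
print(f"Type II: {type_2_errors} errors, rate {type_2_rate:.3f}")
```

So while only about 1% of legitimate hits trigger a false alarm, roughly 60% of malicious hits slip through undetected. Keep that asymmetry in mind for the discussion below.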

Which error is potentially more dangerous and why?

Let’s look at what potential threats these two pose…

In the case of a Type I error, we classified a non-malicious system belonging to the company as malicious. Here, we could further investigate the user of that system, find out that it was actually non-malicious, and drop the concern.

But in the case of a Type II error, we classified a malicious system as non-malicious. This means the malicious system has bypassed our security and could cause data leaks and a breach of the system. Thus, it is crucial to keep this error as low as possible, as it is by far the more dangerous of the two.

That’s it for this one! Thank You!
