What is ROC?

Data Science Interview Questions based on ROC.

Vimarsh Karbhari
Acing AI
3 min read · Mar 26, 2019


The ROC curve is used to evaluate the performance of classifiers. In a data science interview, questions might come up directly about ROC curves and the AUC score. There are also interview questions around classification where ROC curves can help the interviewee decide between different classifiers and make a more educated judgement about what would work better on a particular problem or dataset. They are one of the handiest tools in a Data Scientist’s toolkit. This article aims to explain the basic concept of ROC curves and how to use them in the context of a problem and, eventually, a data science interview.

Some Data Science Interview questions on ROC curves:

  • How can you plot ROC curves for multiple classes? (See the sketch after this list.)
  • Calculate the AUC of an ROC curve.
  • Explain specificity and sensitivity in the context of an ROC curve.
  • How do you determine if a classifier is better using an ROC curve?
  • Based on the ROC curve how can you determine the nature of the classifier being used?
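To make the first question concrete, here is a minimal sketch of the standard one-vs-rest approach to multi-class ROC curves using scikit-learn. The dataset (iris) and model (logistic regression) are assumptions chosen purely for illustration, not part of any particular interview answer.

```python
# One-vs-rest ROC curves for a 3-class problem (illustrative sketch).
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Binarize the labels so each class gets its own one-vs-rest ROC curve.
y_test_bin = label_binarize(y_test, classes=[0, 1, 2])
probs = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)

for i in range(3):
    fpr, tpr, _ = roc_curve(y_test_bin[:, i], probs[:, i])
    plt.plot(fpr, tpr, label=f"class {i} vs rest")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```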

The ROC-AUC score is used to summarize the performance of a classifier in a single number.

ROC stands for Receiver Operating Characteristic. We draw the ROC curve to visualize the performance of a binary classifier. The ROC curve is a 2-D curve: its x-axis represents the False Positive Rate (FPR) and its y-axis represents the True Positive Rate (TPR). TPR is also known as sensitivity, and FPR equals 1 minus the specificity (SPC). You can refer to the following equations for TPR and FPR.

TPR = True Positives / Number of positive samples = TP / P

FPR = False Positives / Number of negative samples = FP / N = 1 − SPC
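The two formulas translate directly into code. Here is a quick sketch computing TPR and FPR from raw counts; the label arrays are invented for illustration.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # actual classes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # predicted classes

TP = np.sum((y_pred == 1) & (y_true == 1))   # true positives
FP = np.sum((y_pred == 1) & (y_true == 0))   # false positives
P = np.sum(y_true == 1)                      # all positive samples
N = np.sum(y_true == 0)                      # all negative samples

TPR = TP / P  # sensitivity
FPR = FP / N  # 1 - specificity
print(f"TPR = {TPR:.2f}, FPR = {FPR:.2f}")   # TPR = 0.75, FPR = 0.25
```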

For any binary classifier, if the predicted probability is ≥ 0.5, the sample gets the class label X, and if the predicted probability is < 0.5, it gets the class label Y. This happens by default in most binary classifiers. This cut-off value of the predicted probability is called the threshold value for predictions. For every possible threshold value, we calculate FPR and TPR. Each FPR and TPR is an (x, y) value pair for us. So, for all possible threshold values, we get (x, y) value pairs, and when we put the points on an ROC graph, they generate the ROC curve. If your classifier perfectly separates the two classes, then the ROC curve will hug the upper-left corner of the graph. If the classifier's predictions are essentially random, then the ROC curve will lie close to the diagonal of the graph. Refer to the following figure:
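A minimal sketch of that threshold sweep: for each candidate threshold we recompute the (FPR, TPR) pair, and the resulting points trace out the ROC curve. The scores below are invented for illustration; in practice, sklearn.metrics.roc_curve performs the same sweep in a single call.

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.5])

# Sweep thresholds from high to low; each one yields a point on the curve.
for threshold in np.sort(np.unique(scores))[::-1]:
    y_pred = (scores >= threshold).astype(int)
    tpr = np.sum((y_pred == 1) & (y_true == 1)) / np.sum(y_true == 1)
    fpr = np.sum((y_pred == 1) & (y_true == 0)) / np.sum(y_true == 0)
    print(f"threshold={threshold:.2f}  FPR={fpr:.2f}  TPR={tpr:.2f}")
```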

Figure: ROC curves for different classification scores

In the preceding figure, the leftmost ROC curve is for a perfect classifier. The graph in the center shows a classifier with the kind of good-but-imperfect accuracy typical of real-world problems. The rightmost graph shows a classifier whose guesses are essentially random. Once we have drawn an ROC curve, how can we quantify it? To answer that question, we will introduce the AUC, which our next article will cover in detail.
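As a preview of that article, here is a one-line sketch of computing the AUC with scikit-learn; the arrays are invented for illustration.

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.5]
print(roc_auc_score(y_true, scores))  # 1.0 is perfect, 0.5 is random guessing
```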

Subscribe to our Acing AI newsletter. I promise not to spam and it’s FREE!

Thanks for reading! 😊 If you enjoyed it, test how many times you can hit 👏 in 5 seconds. It’s great cardio for your fingers AND will help other people see the story.

Reference: ML Solutions
