Precision and Recall
Precision and recall are simple concepts, but people often confuse the two. That's why I decided to explain them in a simpler manner.
The formula for precision is:
TP / (TP + FP)
Basically, it is true positives divided by predicted positives, where
TP = true positive: a positive example correctly predicted as positive
FP = false positive: a negative example incorrectly predicted as positive
So it is the proportion of positive identifications that were actually correct. The more false positives the model produces, the lower the precision.
Note: If the model produces 0 FP cases (and at least one TP), precision will be exactly 1.
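To make the counting concrete, here is a minimal Python sketch of the precision formula. The counts are made-up example values, and returning 1.0 when there are no positive predictions is just one common convention, not the only choice.

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP): of everything predicted positive,
    # how much was actually positive?
    if tp + fp == 0:
        return 1.0  # no positive predictions -> no false alarms (one convention)
    return tp / (tp + fp)

print(precision(tp=8, fp=2))  # 8 correct out of 10 positive predictions -> 0.8
print(precision(tp=5, fp=0))  # 0 FP cases -> precision is exactly 1.0
```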
The formula for recall is:
TP / (TP + FN)
Basically, it is true positives divided by actual positives, where
FN = false negative: a positive example incorrectly predicted as negative
So it is the proportion of actual positives that were identified correctly. The more false negatives the model produces, the lower the recall.
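The recall formula can be sketched the same way. Again, the counts are made-up example values; returning 0.0 when there are no actual positives is just one convention.

```python
def recall(tp, fn):
    # Recall = TP / (TP + FN): of everything actually positive,
    # how much did the model find?
    if tp + fn == 0:
        return 0.0  # no actual positives in the data (one convention)
    return tp / (tp + fn)

print(recall(tp=8, fn=2))  # found 8 of 10 actual positives -> 0.8
```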
One should tune the model's classification threshold to balance these two kinds of error: raising the threshold usually increases precision but lowers recall, and lowering it does the opposite. To fully evaluate a model, both precision and recall should be examined together.
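The threshold trade-off can be demonstrated end to end. The scores and labels below are hypothetical values chosen for illustration: the model outputs a probability per example, and the threshold decides which examples count as positive predictions.

```python
# Hypothetical model scores (probabilities) and true labels.
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

def precision_recall(scores, labels, threshold):
    # Turn scores into hard predictions, then count TP, FP, FN.
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    prec = tp / (tp + fp) if tp + fp else 1.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

for t in (0.5, 0.25):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

Running this on the example data, lowering the threshold from 0.5 to 0.25 raises recall (more actual positives are caught) while precision drops (more negatives slip through as false positives), which is exactly the trade-off the threshold controls.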