Generative Classifiers vs. Discriminative Classifiers
Generative classifiers try to model each class, i.e., what the features of that class look like. In short, they model how a particular class would generate the input data. When given a new observation, such a classifier predicts which class would most likely have generated it. These methods try to learn a model of the environment. An example of a generative classifier is Naive Bayes. Mathematically, generative models learn the joint probability distribution p(x, y) of the input x and label y, then apply Bayes' rule to compute the conditional probability p(y|x) and pick the most likely label. In this way, they learn the actual distribution of each class.
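As a concrete illustration, here is a minimal Gaussian Naive Bayes sketch (the toy data and values are hypothetical): each class's features are modeled with a per-class Gaussian p(x|y) and a prior p(y), and prediction applies Bayes' rule to pick the most likely class.

```python
import numpy as np

# Toy 1-D data: two classes clustered around different means (hypothetical values).
X = np.array([[1.0], [1.2], [0.8], [3.0], [3.2], [2.8]])
y = np.array([0, 0, 0, 1, 1, 1])

# Generative step: model p(x|y) for each class as a Gaussian, plus the prior p(y).
params = {}
for c in np.unique(y):
    Xc = X[y == c]
    params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def predict(x):
    # Bayes' rule: pick the class maximizing p(x|y) * p(y), which is
    # proportional to the posterior p(y|x).
    scores = {c: np.prod(gaussian_pdf(x, m, v)) * prior
              for c, (m, v, prior) in params.items()}
    return max(scores, key=scores.get)

print(predict(np.array([1.1])))  # → 0 (near the class-0 cluster)
print(predict(np.array([3.1])))  # → 1 (near the class-1 cluster)
```

Note that the model never learns a decision boundary explicitly; the boundary emerges from comparing the two class-conditional densities.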
Discriminative classifiers learn which features of the input are most useful for distinguishing between the possible classes. For example, given images of dogs and cats in which all the dog images contain a collar, a discriminative model will learn that a collar indicates a dog. An example of a discriminative classifier is logistic regression. Mathematically, these models either directly estimate the posterior probability p(y|x) or learn a direct map from input x to label y. In other words, they learn the decision boundary between the classes.
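By contrast, a minimal logistic regression sketch (same hypothetical toy data, fit by plain gradient descent on the log-loss) estimates p(y|x) directly, without ever modeling how the inputs are distributed:

```python
import numpy as np

# Same toy 1-D two-class data as before (hypothetical values).
X = np.array([[1.0], [1.2], [0.8], [3.0], [3.2], [2.8]])
y = np.array([0, 0, 0, 1, 1, 1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discriminative step: fit w, b so that sigmoid(w*x + b) approximates p(y=1|x).
w, b = np.zeros(X.shape[1]), 0.0
lr = 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)           # predicted p(y=1|x)
    grad_w = X.T @ (p - y) / len(y)  # gradient of the average log-loss
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

def predict(x):
    return int(sigmoid(x @ w + b) > 0.5)

print(predict(np.array([1.0])))  # → 0
print(predict(np.array([3.0])))  # → 1
```

The learned parameters w and b define the decision boundary itself (the point where w*x + b = 0), which is exactly the quantity a generative model only obtains indirectly.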
Both approaches use the conditional probability p(y|x) to classify, but they learn different probabilities to arrive at it. In most classification tasks, discriminative classifiers tend to be more accurate, which is why they are more commonly used. One reason is that a discriminative classifier solves the classification task directly, rather than solving the more general problem of modeling the data distribution as an intermediate step, as generative models do.