Techniques Combining Discriminative and Generative Approaches for Classification

You mean that we can use the best of both worlds?

Jan 24 · 5 min read

Recently I’ve been learning about discriminative and generative modeling in preparation for a breakdown of a very special paper, coming soon. Learning about these topics, I was fascinated by the nuances behind the two approaches and how they are implemented. As I learned more, I came across the fact that modern models often combine the best of both worlds. In this article I will go over some of these hybrid approaches and the problems they solve. By the end, you will hopefully have a working knowledge of them and may even choose to implement one of these methods (or a variant) for your own problems. If you like this kind of content, be sure to let me know in the comments and check out my other work.

What are these techniques?

So, a quick overview of the techniques. In a nutshell, a discriminative classifier learns the decision boundary between classes. In practical terms, if our task were telling a snake from a dog, the classifier would learn the differences between them and classify the input accordingly; it wouldn’t try to learn what makes a snake a snake, or what defines a dog. A generative model, on the other hand, tries to learn the underlying distribution of each class and classifies an input by asking which class was most likely to have produced it. To get a more thorough understanding of this concept, check out this video, made by yours truly. Be sure to leave any feedback, as it helps me up the quality of my work. And remember, be sure to like and sub :).
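To make the contrast concrete, here is a toy sketch (not from any of the papers discussed) that trains both kinds of classifier on the same synthetic 2-D data: logistic regression learns only the boundary p(y|x), while Gaussian naive Bayes models each class distribution p(x|y) and classifies via Bayes’ rule. The blob positions and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two Gaussian blobs (stand-ins for "dog" and "snake" features).
n = 200
X = np.vstack([rng.normal([0, 0], 1.0, (n, 2)),
               rng.normal([4, 4], 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

# --- Discriminative: logistic regression learns only the boundary p(y|x) ---
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)   # standard gradient step
    b -= 0.1 * np.mean(p - y)
disc_pred = (X @ w + b > 0).astype(int)

# --- Generative: Gaussian naive Bayes models p(x|y) per class, then
# classifies with Bayes' rule, p(y|x) proportional to p(x|y) p(y) ---
means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
vars_ = np.array([X[y == k].var(axis=0) for k in (0, 1)])
log_lik = np.stack([
    -0.5 * np.sum(np.log(2 * np.pi * vars_[k]) + (X - means[k]) ** 2 / vars_[k], axis=1)
    for k in (0, 1)
], axis=1)          # equal class priors here, so the likelihood decides
gen_pred = log_lik.argmax(axis=1)

print("discriminative acc:", (disc_pred == y).mean())
print("generative acc:", (gen_pred == y).mean())
```

On cleanly separated blobs like these, both come out nearly perfect; the interesting differences show up with scarce or mismatched data, which is exactly what the hybrid methods below exploit.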


Now that you’ve hopefully understood the concept, we can explore some of the different hybrid implementations and which aspects of the two approaches each one uses. In reality there isn’t a hard line between the two, and one can often be derived from the other, but treating them as distinct is a useful way to think about them. That being said, here we go…


The Detective would be the discriminator

As alluded to in the video, GANs lend themselves naturally to a hybrid setup. In our forgery example, the generator creates forgeries, and the discriminator is trained to detect them. This plays to both of their strengths: the discriminator gets lots of data to train on (real-world samples plus generator output) and develops sharp boundaries, while the generator gets constant feedback that lets it improve.
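To make the adversarial loop concrete, here is a deliberately tiny 1-D GAN in plain NumPy, with hand-derived gradients. The target distribution, learning rate, and step counts are all illustrative choices, not from any paper; real GANs use neural networks for both players, but the alternating update structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Real data the generator must imitate: samples from N(4, 0.5).
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = mu + sig * z, discriminator D(x) = sigmoid(w*x + b).
mu, sig = 0.0, 1.0
w, b = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(5000):
    z = rng.normal(0.0, 1.0, batch)
    real, fake = sample_real(batch), mu + sig * z

    # --- Discriminator step: ascend log D(real) + log(1 - D(fake)) ---
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- Generator step: ascend log D(fake) (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = mu + sig * z
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)        # chain rule: d fake / d mu = 1
    sig += lr * np.mean((1 - d_fake) * w * z)   # chain rule: d fake / d sig = z

print(f"generator mean {mu:.2f} vs real mean 4.00")
```

The key point of the sketch is the feedback loop: every discriminator improvement changes the gradient the generator receives, which is the hybrid dynamic described above.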

Hybrid Inference

The Lorenz system is inherently deterministic (it is fully defined by its equations), yet tweaking the initial conditions by tiny amounts drastically changes the results. If this sounds familiar, it is the mathematical formulation of what is popularly called the butterfly effect. In the paper “Combining Generative and Discriminative Models for Hybrid Inference”, the authors combine the two approaches to create a model which “can estimate the trajectory of a noisy chaotic Lorenz Attractor much more accurately than either the learned or graphical inference run in isolation”. The results of this paper were pretty definitive.
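You can see the butterfly effect numerically with a few lines of NumPy. The sketch below integrates two copies of the Lorenz system with the classic parameters, starting a distance of 10⁻⁸ apart, and watches them diverge (the integrator and step size are my choices for illustration, unrelated to the paper’s method):

```python
import numpy as np

# Lorenz system: dx/dt = s(y - x), dy/dt = x(r - z) - y, dz/dt = x*y - b*z,
# with the classic chaotic parameters.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(v):
    x, y, z = v
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(v, dt):
    # One step of the classical 4th-order Runge-Kutta integrator.
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 3000                    # integrate for 30 time units
v1 = np.array([1.0, 1.0, 1.0])
v2 = v1 + np.array([1e-8, 0.0, 0.0])      # perturb one coordinate by 1e-8

for _ in range(steps):
    v1, v2 = rk4_step(v1, dt), rk4_step(v2, dt)

print("final separation:", np.linalg.norm(v1 - v2))
```

Despite the perfectly deterministic rules, the two trajectories end up far apart, which is what makes estimating a noisy Lorenz trajectory such a demanding benchmark for the hybrid model.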

Performance of protocols

The hybrid models here have good performance across the board when it comes to predicting paths. The error of the pure Graph Neural Network only converges to the hybrid’s at a high number of training samples; everywhere else the hybrid approach is better. For a more directly visual result:

Using it to improve active learning (medical stuff)

Next up is a paper called “Combining Generative and Discriminative Models for Semantic Segmentation of CT Scans via Active Learning”. Here we see the results of the combination in this quote:

Our algorithm is assessed on a database of 196 labeled clinical CT scans with high variability in resolution, anatomy, pathologies, etc. Quantitative evaluation shows that, compared with randomly selecting the scans to annotate, our method decreases the number of training images by up to 45%. Moreover, our generative model of body shape substantially increases segmentation accuracy when compared to either using the discriminative model alone or a generic smoothness prior (e.g. via a Markov Random Field).

If you’re a fan of comparisons:


These are some of the ways the two approaches can be combined to improve the learning process across diverse problems and fields. There are plenty of other instances, such as hybrids that improve semi-supervised Bayesian learning, but I didn’t want this article to run too long. If you would like a breakdown of the papers mentioned, be sure to follow me here; I will be breaking down these papers (and others) soon. If this kind of topic (how techniques are used in practice) interests you, let me know in the comments so I can continue the series.

Reach out to me

Thank you for reading this. I am dropping all my relevant social media below. Follow any (or all) to see my content across different platforms. I like to use the strengths of different platforms. Leave any feedback you might have, as it really helps a growing content creator like myself. If you found this useful, please share the article. These articles take time to research and write, so any help goes a long way. If you want a free stock, use my Robinhood referral link. It’s free money for both of us, and investing is a great way to build your assets for the future. You’re losing out on a free stock with no catch, so there’s no reason not to open your account there.

I’ve shortened the URLs using this great service. They do great work, so show them some love. This is not sponsored, but it’s always good to promote useful work.

Check out my other articles on Medium:

My YouTube. It’s a work in progress haha:

Reach out to me on LinkedIn. Let’s connect:

My Twitter:

My Substack:

If you would like to work with me email me:

Live conversations at twitch here:

To get updates on my content, Instagram:

Get a free stock on Robinhood:


If you’re interested in the slides I used for my videos, here you go.

The Startup

Get smarter at building your thing. Join The Startup’s +731K followers.