Quick Introduction to Ensemble Learning — Models, Assemble!

Limas Jaya Akeh
Bina Nusantara IT Division
Dec 21, 2021 · 2 min read

Sometimes, when starting a new Machine Learning project, we find ourselves wondering: which model should I use? How can I get better accuracy without resorting to the newest state-of-the-art model, which might cost a lot more in compute and in time spent learning it?

What if we just combine lots of models and let them decide the actual result together?


That’s what Ensemble Learning is in a nutshell: instead of relying on one model, train lots of different models and combine each model’s output into the final result instead! That way, the answer you get is a “general consensus” of what the models think.
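For instance, here is a minimal sketch of that idea using scikit-learn’s VotingClassifier. The toy dataset and the three base models are just illustrative assumptions; any set of classifiers would do.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy dataset, just so the example is self-contained
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three different models each make a prediction;
# "hard" voting takes the majority class as the final answer
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(random_state=42)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))
```

Because the models tend to make different kinds of mistakes, their combined vote is often more reliable than any single one of them.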

Diagram: Ensemble Learning, from Wikipedia

Imagine five people being asked, “Will it rain today?” Three of them say “Yes”, while two say “No”. Which answer should we trust? Since the majority said “Yes”, it is probably the safer bet. This (hopefully) creates a stronger prediction than just using one type of Machine Learning model and hoping it correctly predicts the result.
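In code, that majority vote is nothing more than counting answers. A tiny sketch:

```python
from collections import Counter

# Five "people" (or models) answer "Will it rain today?"
votes = ["Yes", "Yes", "Yes", "No", "No"]

# The consensus is simply the most common answer: "Yes" (3 votes to 2)
consensus = Counter(votes).most_common(1)[0][0]
print(consensus)  # Yes
```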

That voting step is the “aggregating” half of “Bagging”, or “Bootstrap Aggregating”; the full technique also trains each model on a random bootstrap sample of the training data, so the voters don’t all learn the same mistakes. There are other methods too, such as “Boosting” and “Stacking”, which share the combine-many-models idea but have their own strengths and weaknesses. But at least you now understand a bit about Ensemble Learning.
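To make Bagging itself concrete, here is a minimal sketch using scikit-learn’s BaggingClassifier with decision trees as the base model; the dataset and the choice of 10 estimators are arbitrary assumptions for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 10 decision trees, each trained on a random bootstrap
# resample of the training data; their votes are aggregated
bagging = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=10,
    random_state=42,
)
bagging.fit(X_train, y_train)
print(bagging.score(X_test, y_test))
```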
