Boosting with AdaBoost and Gradient Boosting

azar_e
The Making Of… a Data Scientist
Sep 20, 2018


Have you ever entered, or followed, a Kaggle competition? Most of the prize winners get there by using boosting algorithms. Why are AdaBoost, GBM, and XGBoost the go-to algorithms of champions?

First of all, if you have never heard of Ensemble Learning or Boosting, check out my post “Ensemble Learning: When everybody takes a guess…I guess!” so you can better understand these algorithms.

More informed? Good, let’s start!

So, the idea of Boosting, just as with any other ensemble algorithm, is to combine several weak learners into a stronger one. The general idea of Boosting algorithms is to train predictors sequentially, where each subsequent model attempts to fix the errors of its predecessor.
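To make that sequential idea concrete, here is a minimal sketch of my own (not code from this post), assuming scikit-learn’s DecisionTreeRegressor as the weak learner and a toy regression problem: each new stump is fit to the errors the ensemble still makes, and its prediction is added to the running total.

```python
# Minimal sketch of sequential boosting (my illustration, not the post's code):
# every new weak learner is trained on what the current ensemble still gets wrong.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

ensemble = []                      # weak learners collected so far
prediction = np.zeros_like(y)      # current ensemble prediction

for _ in range(10):
    residual = y - prediction                        # errors of the predecessor
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    ensemble.append(stump)
    prediction += stump.predict(X)                   # add the new correction

print("training MSE after boosting:", np.mean((y - prediction) ** 2))
```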


Adaptive Boosting

Adaptive Boosting, most commonly known as AdaBoost, is a Boosting algorithm. Shocker! The way this algorithm corrects its predecessor is by paying more attention to the training instances the previous model underfitted. Hence, each new predictor focuses more and more on the harder cases.
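As a quick, hedged illustration (the library choice and parameters are my assumption, not the author’s), scikit-learn’s AdaBoostClassifier implements exactly this reweighting scheme, using a depth-1 decision stump as its default weak learner:

```python
# Sketch of AdaBoost in practice with scikit-learn (my example, not the post's):
# each new stump is trained on a reweighted dataset where instances the previous
# stumps misclassified carry more weight.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
```

Raising n_estimators adds more rounds of reweighting, and each round concentrates a little more on the instances that remain hard to classify.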
