Caglar Subasi
Sep 5, 2018 · 1 min read

Ensemble models get complex by adding new weak models to the previous ones, like 3D printers building an object layer by layer. ANN models, on the other hand, begin their life with complexity!

The goal of ‘Boosting’ is to reduce ‘bias’, which causes ‘under-fitting’ (the model captures no relation between the explanatory and response variables).

The goal of ‘Bagging’ is to reduce ‘variance’, which causes ‘over-fitting’ (predictions fluctuate with small changes in the training data).
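To see why bagging shrinks variance, here is a minimal standard-library sketch with made-up toy data: each “weak model” just predicts the mean of a bootstrap sample, and averaging many of them gives a much more stable prediction than any single one.

```python
import random
import statistics

random.seed(0)
# Toy dataset: 50 noisy observations (invented for illustration)
data = [random.gauss(10, 3) for _ in range(50)]

def bootstrap_mean(xs):
    # One "weak model": predict the mean of a bootstrap resample
    sample = [random.choice(xs) for _ in xs]
    return statistics.mean(sample)

def bagged_mean(xs, n_models=25):
    # Bagging: average the predictions of many bootstrap models
    return statistics.mean(bootstrap_mean(xs) for _ in range(n_models))

# Repeat each procedure many times and compare the spread of predictions
singles = [bootstrap_mean(data) for _ in range(200)]
baggeds = [bagged_mean(data) for _ in range(200)]
print(statistics.variance(singles), statistics.variance(baggeds))
```

The bagged predictions vary far less from run to run than the single-model ones, which is exactly the variance reduction described above.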

Random Forest models use the ‘Bagging’ technique with ‘Decision Tree’ type trees, while Gradient Boosting models use the ‘Boosting’ technique with ‘Regression Tree’ type trees.

Decision Trees are useful for categorical responses, while Regression Trees are useful for numeric ones.
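The difference shows up in what a leaf predicts. As a toy sketch (data and threshold invented for illustration), a one-split decision stump predicts the majority *class* in each leaf, while a one-split regression stump predicts the *mean* response in each leaf.

```python
from collections import Counter
from statistics import mean

# Hypothetical toy data: one feature x, split at threshold 5
xs = [1, 2, 3, 6, 7, 8]
labels = ["a", "a", "a", "b", "b", "b"]   # categorical response
values = [1.0, 1.2, 0.9, 4.8, 5.1, 5.0]  # numeric response

def classification_stump(xs, ys, threshold):
    # Decision-tree-style leaves: majority class on each side of the split
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    return Counter(left).most_common(1)[0][0], Counter(right).most_common(1)[0][0]

def regression_stump(xs, ys, threshold):
    # Regression-tree-style leaves: mean response on each side of the split
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    return mean(left), mean(right)

print(classification_stump(xs, labels, 5))  # majority class per leaf
print(regression_stump(xs, values, 5))      # mean value per leaf
```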

Boosting is actually a sequential, iterative procedure in which we generate new ‘weak’ models dedicated to correcting the defections (wrong predictions) of the previous ‘weak’ models.
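This residual-chasing loop can be sketched in a few lines of standard-library Python (toy 1-D data invented for illustration): start from a constant prediction, then repeatedly fit a regression stump to the current errors and add a damped copy of it to the ensemble.

```python
from statistics import mean

# Toy 1-D dataset (invented for illustration)
xs = [1, 2, 3, 4, 5, 6]
ys = [1.2, 1.9, 3.1, 3.9, 5.2, 6.1]

def fit_stump(xs, residuals):
    # Weak model: best single-split regression stump by squared error
    best = None
    for t in xs[:-1]:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = mean(left), mean(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, n_rounds=20, lr=0.5):
    pred = [mean(ys)] * len(xs)  # round 0: a constant model
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, pred)]  # current "defections"
        stump = fit_stump(xs, residuals)               # new weak model fits them
        pred = [p + lr * stump(x) for x, p in zip(xs, pred)]
    return pred

pred = boost(xs, ys)
mse = mean((y - p) ** 2 for y, p in zip(ys, pred))
print(round(mse, 6))  # training error shrinks toward 0 as rounds increase
```

Each round only has to fix what the ensemble so far still gets wrong, which is the defining idea of boosting.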

Written by Caglar Subasi (DataScientist@BNPParibas, MSDS@ITU, BSDS@METU)
