xgboost: “Hi I’m Gamma. What can I do for you?” — and the tuning of regularization

Hi Laurae, great post. One thing I wanted to understand mathematically: I understand that the classification trees in XGBoost are built by optimizing log loss rather than information gain/Gini. How is the gamma Lagrangian term incorporated into the loss function in XGBoost? According to the docs, gamma is the minimum loss reduction required to make a split, which intuitively looks like a cutoff applied at every split (up to max depth). But as far as I understand, a Lagrange penalty is a constraint term along which the parameters move during optimization, not a hard cutoff. Please let me know. Thanks
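The two intuitions in the question are reconcilable. In XGBoost's regularized objective (per the docs and the XGBoost paper), the penalty is Omega(f) = gamma * T + 0.5 * lambda * sum(w_j^2), where T is the number of leaves. When the gain of a candidate split is derived from that objective, the gamma * T term surfaces as a flat -gamma subtracted from every split's gain, which is exactly the "minimum loss reduction" cutoff. A minimal sketch (variable names here are illustrative, not XGBoost internals):

```python
def split_gain(g_left, h_left, g_right, h_right, lam, gamma):
    """Structure-score gain of splitting one node into two children.

    g_* and h_* are the sums of first- and second-order gradients of the
    loss over the instances falling into each child; lam is the L2 penalty
    on leaf weights (XGBoost's `lambda`).
    """
    def score(g, h):
        # Structure score of a leaf with gradient sum g, hessian sum h.
        return g * g / (h + lam)

    parent = score(g_left + g_right, h_left + h_right)
    children = score(g_left, h_left) + score(g_right, h_right)
    # gamma is deducted once per split, so the split is kept only when the
    # raw gain exceeds gamma -- the "minimum loss reduction" from the docs.
    return 0.5 * (children - parent) - gamma

# With a small gamma this candidate split is worthwhile...
print(split_gain(4.0, 2.0, -3.0, 2.0, lam=1.0, gamma=1.0))  # > 0: split kept
# ...but raising gamma prunes the very same split.
print(split_gain(4.0, 2.0, -3.0, 2.0, lam=1.0, gamma=5.0))  # < 0: split pruned
```

So gamma is not a Lagrange multiplier being optimized over; it is a fixed penalty per leaf that, after the algebra, acts as a per-split threshold.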
