FAU Lecture Notes in Pattern Recognition

Which Loss Function does AdaBoost actually optimize?

AdaBoost & Exponential Loss

Andreas Maier
Published in CodeX · May 6, 2021

Image under CC BY 4.0 from the Pattern Recognition Lecture.

These are the lecture notes for FAU’s YouTube Lecture “Pattern Recognition”. This is a full transcript of the lecture video & matching slides. The sources for the slides are available here. We hope you enjoy it as much as the videos. This transcript was almost entirely machine-generated using AutoBlog, and only minor manual modifications were performed. If you spot mistakes, please let us know!

Navigation

Previous Chapter / Watch this Video / Next Chapter / Top Level

Welcome back to Pattern Recognition! Today we want to continue looking into AdaBoost and, in particular, see its relation to the exponential loss.

Image under CC BY 4.0 from the Pattern Recognition Lecture.

Boosting fits an additive model over a set of elementary basis functions. The result of boosting is essentially created by an expansion with coefficients β and basis functions b, where each basis function is determined by a set of parameters γ. Additive expansion methods are very popular in learning techniques; you can see that…
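To make the expansion concrete, here is a short LaTeX sketch of the model and of the loss named in the title. The notation (coefficients β_m, basis functions b(x; γ_m), binary labels y ∈ {−1, +1}) follows the standard treatment of boosting as additive modeling and is an assumption filling in for the truncated passage above, not a quote from the slides:

```latex
% Additive expansion: a weighted sum of M elementary basis functions,
% each parameterized by its own gamma_m and weighted by beta_m.
\[
  f(x) = \sum_{m=1}^{M} \beta_m \, b(x;\, \gamma_m)
\]

% Exponential loss: the criterion AdaBoost turns out to minimize,
% evaluated for binary labels y in {-1, +1}.
\[
  L\bigl(y, f(x)\bigr) = \exp\bigl(-y \, f(x)\bigr)
\]
```

In AdaBoost, the basis functions b(x; γ_m) are the weak classifiers, and fitting one term of the expansion at a time (rather than all terms jointly) is what connects this additive view to AdaBoost’s iterative reweighting.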


Andreas Maier (Writer for CodeX)

I do research in Machine Learning. My positions include being Prof @FAU_Germany, President @DataDonors, and Board Member for Science & Technology @TimeMachineEU