This blog post highlights a minor discrepancy in the Hyperband method. It does not question the validity of the method nor its results. The spotted deviation leads Hyperband to underuse its allocated budget, which may result in faster computation but degraded performance.
Edit: We received an answer from the Ray Tune development team about this post, check it out at the end of the post or just search for “Addendum”.
Hyperband: A novel bandit-based approach to hyperparameter optimization. — Li L, Jamieson K, DeSalvo G, Rostamizadeh A, Talwalkar A — JMLR 2018
In our previous blog post, A Proactive Look at Active Learning Packages, we gave an overview of the most basic active learning methods, the most common Python packages, and the more advanced methods those packages implement.
For this second post, we’ll have a look at more recent approaches and seize this opportunity to try to reproduce the results from the 2019 paper Diverse Mini-Batch Active Learning. For a fully detailed description of all the methods, we refer the reader directly to the article.
Our previous blog post introduced uncertainty-based methods that use class prediction probabilities to…
Data acquisition is the very first step of any machine learning project, but unfortunately the data doesn't always come with the labels of interest for the task at hand. (Correctly!) labeling the data is often a very cumbersome task, and that's where Active Learning comes in.
Instead of randomly labeling samples, active learning strategies strive to select the next most useful sample to label. …
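As a rough illustration of this idea (a minimal sketch, not the method from the paper discussed here), least-confidence sampling picks the unlabeled sample whose top predicted class probability is lowest, i.e. the one the model is least sure about:

```python
import numpy as np

def least_confidence(probs: np.ndarray) -> int:
    """Return the index of the unlabeled sample with the lowest
    top-class probability, i.e. the model's least confident prediction."""
    return int(np.argmin(probs.max(axis=1)))

# Toy predicted class probabilities for 3 unlabeled samples (2 classes).
probs = np.array([
    [0.90, 0.10],  # confident prediction
    [0.55, 0.45],  # uncertain prediction: best candidate to label next
    [0.80, 0.20],
])
print(least_confidence(probs))  # -> 1
```

This is only one of the uncertainty-based criteria mentioned above; margin- and entropy-based variants score the same probabilities differently but follow the same select-then-label loop.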