Published in "data from the trenches"

OpenAL: Evaluation and Interpretation of Active Learning Strategies (Dec 5, 2022)
We are pleased to present our work accepted at the NeurIPS 2022 Workshop on Human in the Loop Learning! For a complete overview of our…
Towards Efficient Labeling in Federated Learning (Nov 10, 2022)
Federated Learning (FL) can enable privacy-preserving distributed computation across several clients. It is used to federate knowledge…
Diversity in Outcome Optimization of ML Models (Jun 2, 2022)
ML outcome optimization is the process of finding the feature values that minimize (or maximize) a model's prediction over a defined…
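The outcome-optimization idea in that teaser, minimizing a model's prediction over a bounded feature domain, can be sketched with an off-the-shelf optimizer. This is a minimal illustration, not the post's method: the `predict` function below is a hypothetical stand-in for a trained model's single-point prediction.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for a trained model's prediction on one feature
# vector; in practice this would wrap something like model.predict([x]).
def predict(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

bounds = [(-5.0, 5.0), (-5.0, 5.0)]  # the defined feature domain
x0 = np.zeros(2)                     # starting feature vector

# Bounded minimization of the model output over the feature space.
res = minimize(predict, x0, bounds=bounds, method="L-BFGS-B")
print(res.x)  # feature values that minimize the prediction
```

For a non-convex model surface one would restart from several `x0` values, which is where the diversity of optima discussed in the post becomes relevant.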
I Can’t Believe It’s Not Better — Active Learning Flavor (Jun 10, 2021)
This is the story of a research project that didn’t quite make it. We introduce a new active learning strategy and put it to the test.
A (Slightly) Better Budget Allocation for Hyperband (Apr 30, 2020)
Rounding operations can cause Hyperband to leave 7% of the available budget unused. We propose a method that reduces the unused budget to 3%.
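The budget loss mentioned above comes from the floor/ceiling operations in Hyperband's bracket schedule. A sketch of the standard schedule (after Li et al.) that tallies consumed versus allotted budget; exact rounding conventions vary by implementation, so the resulting fraction is illustrative, not the post's exact figure:

```python
import math

def hyperband_budget(R=81, eta=3):
    """Tally the budget Hyperband actually consumes vs. what it is allotted.

    Sketch of the standard bracket schedule; rounding (floor on survivor
    counts, ceil on initial configs) is where budget goes unused.
    """
    s_max = int(math.log(R, eta) + 1e-9)  # floor, guarding float error
    B = (s_max + 1) * R                   # nominal budget per bracket
    allotted = (s_max + 1) * B            # total over all brackets
    used = 0.0
    for s in range(s_max, -1, -1):
        n = math.ceil((B / R) * eta**s / (s + 1))  # initial configurations
        r = R * eta**-s                            # initial resource each
        for i in range(s + 1):
            n_i = math.floor(n * eta**-i)          # survivors, rounded down
            r_i = r * eta**i                       # resource per survivor
            used += n_i * r_i
    return used, allotted

used, allotted = hyperband_budget()
print(f"unused fraction: {1 - used / allotted:.1%}")
```

With `R=81, eta=3` the rounded-down survivor counts leave a mid-single-digit percentage of the allotted budget on the table, which is the slack the post's allocation method targets.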
Diverse Mini-Batch Active Learning: A Reproduction Exercise (Mar 12, 2020)
Lessons learned from reproducing “Diverse Mini-Batch Active Learning”, a strategy mixing uncertainty and diversity techniques.
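The uncertainty-plus-diversity mix behind that reproduction can be sketched in a few lines: prefilter the most uncertain pool points, then cluster them so the queried batch is spread out. This is a simplified reading of the strategy (the function name and parameters below are my own), not the reproduced implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def diverse_minibatch(X_pool, proba, k=10, beta=5, seed=0):
    """Select k diverse, uncertain points from the pool (sketch).

    1) Prefilter the beta*k most uncertain points (least-confidence score).
    2) Run k-means on the prefiltered points, weighted by uncertainty.
    3) Query the point closest to each cluster centroid.
    """
    uncertainty = 1.0 - proba.max(axis=1)            # least-confidence
    candidates = np.argsort(-uncertainty)[: beta * k]
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(X_pool[candidates], sample_weight=uncertainty[candidates])
    picked = []
    for c in km.cluster_centers_:
        d = np.linalg.norm(X_pool[candidates] - c, axis=1)
        picked.append(candidates[np.argmin(d)])      # nearest real point
    return np.array(picked)

# Toy usage: random pool and random class probabilities.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
p = rng.random((200, 3))
p = p / p.sum(axis=1, keepdims=True)
batch = diverse_minibatch(X, p, k=5, beta=4)
print(batch)  # indices of the queried batch
```

The `beta` knob trades off the two ingredients: `beta=1` is pure clustering of the top-k uncertain points, while large `beta` lets diversity dominate.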
A Proactive Look at Active Learning Packages (Feb 20, 2020)
Introduction to Active Learning through a quick benchmark of major Python packages: modAL, libact, and alipy.