Joseph Robinson, Ph.D. in Towards AI: "From Supervised Learning to Contextual Bandits: The Evolution of AI Decision-Making." Explore dynamic models that learn, adapt, and thrive in ever-changing environments. 5d ago.
Jonas Timmermann in Lyft Engineering: "Lyft's Reinforcement Learning Platform." Tackling decision-making problems with a platform for developing and serving Reinforcement Learning models, with a focus on Contextual Bandits. Mar 12.
Netflix Technology Blog in Netflix TechBlog: "Recommending for Long-Term Member Satisfaction at Netflix." By Jiangwei Pan, Gary Tang, Henry Wang, and Justin Basilico. Aug 29.
Ercument Ilhan in Expedia Group Technology: "Identifying Top-Scoring Arms in Ranking Bandits With Linear Payoffs in Real-Time." How Expedia Group scales up ranking bandit problems with low latency. Nov 5.
Ugur Yildirim in Towards Data Science: "An Overview of Contextual Bandits." A dynamic approach to treatment personalization. Feb 21.
David Vengerov in tech-at-instacart: "Using Contextual Bandit models in large action spaces at Instacart." By David Vengerov, Vinesh Gudla, Tejaswi Tenneti, Haixun Wang, and Kourosh Hakhamaneshi. Jun 15, 2023.
Playtika Data & AI: "Leveling Up Gaming Experience with Multi-Armed Bandit Recommender Systems." By Jerome Carayol, Dario D'Andrea, and Armand Valsesia. Sep 19.
Massimiliano Costacurta in Towards Data Science: "Dynamic Pricing with Contextual Bandits: Learning by Doing." Adding context to your dynamic pricing problem can increase opportunities as well as challenges. Oct 5, 2023.