- AI SageScribe, "Dynamic Traffic Allocation in Multi-Armed Bandit Experiments: Optimizing for Regret Minimization". In the world of A/B testing and experimentation, dynamically adjusting traffic allocation is a crucial factor for optimizing outcomes and… (Sep 8)
- Irene Chang in Towards Data Science, "Now, why should we care about Recommendation Systems…? ft. A soft introduction to Thompson Sampling". An ongoing Recommendation System series. (Nov 7, 2023)
- Samuele Mazzanti in Towards Data Science, "When You Should Prefer “Thompson Sampling” Over A/B Tests". An in-depth explanation of “Thompson Sampling”, a more efficient alternative to A/B testing for online learning. (Jun 13, 2023)
- Rakeshbobbati, "Maximising Outcomes in Data Science Model testing with Multi-Armed Bandit Approaches". Differences Between A/B Testing and Multi-Armed Bandit Testing. (Jul 22)
- Yuki Minai, "Exploring Multi-Armed Bandit Problem: Epsilon-Greedy, Epsilon-Decreasing, UCB, and Thompson…". To tackle the multi-armed bandit problem, we will learn well-established algorithms such as Greedy algorithm, UCB, and Thompson Sampling. (Nov 20, 2023)
- Ashish Ranjan Karn, "Thompson Sampling — Python Implementation". Thompson Sampling is a popular probabilistic algorithm used in decision-making under uncertainty, particularly in the context of… (Jul 20, 2023)
- Viet Vo, "Application of Multi-Armed Bandits to Promotion Ranking in MoMo". Multi-Armed Bandits. (Aug 26, 2023)