Taming Non-stationary Bandits: A Bayesian Approach

July 31, 2017 · arXiv:1707.09727

Authors

Vishnu Raj, Sheetal Kalyani

Abstract

We consider the multi-armed bandit problem in non-stationary environments. Using a Bayesian approach, we propose a variant of Thompson Sampling that can be used in both rested and restless bandit scenarios.

By applying discounting to the parameters of the prior distribution, we describe a way to systematically reduce the effect of past observations. Further, we derive an exact expression for the probability of picking sub-optimal arms.
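The discounting idea can be sketched as follows for Bernoulli rewards with Beta priors: old successes and failures are geometrically down-weighted before each update, so stale evidence fades and the posterior tracks a drifting environment. This is a minimal illustrative sketch, not the paper's exact algorithm; the discount factor `gamma` and the parameter names are assumptions.

```python
import random

class DiscountedThompsonSampling:
    """Thompson Sampling with discounted Beta posteriors for
    non-stationary Bernoulli bandits (illustrative sketch)."""

    def __init__(self, n_arms, gamma=0.95, alpha0=1.0, beta0=1.0):
        self.gamma = gamma                # discount factor in (0, 1]
        self.alpha0, self.beta0 = alpha0, beta0  # prior pseudo-counts
        self.s = [0.0] * n_arms           # discounted success counts
        self.f = [0.0] * n_arms           # discounted failure counts

    def select_arm(self):
        # Draw one sample per arm from its Beta posterior; play the argmax.
        samples = [random.betavariate(self.alpha0 + s, self.beta0 + f)
                   for s, f in zip(self.s, self.f)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # Geometrically discount all past evidence, then credit the
        # new observation (reward is 0 or 1 for Bernoulli arms).
        self.s = [self.gamma * x for x in self.s]
        self.f = [self.gamma * x for x in self.f]
        self.s[arm] += reward
        self.f[arm] += 1.0 - reward
```

With `gamma < 1`, each count is a geometric series bounded by `1 / (1 - gamma)`, so the posterior can never become so concentrated that it stops adapting to change.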

By increasing the exploitative value of the Bayesian samples, we also provide an optimistic version of the algorithm. Extensive empirical analysis is conducted under various scenarios to validate the utility of the proposed algorithms.
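One common way to make posterior samples more exploitative is to floor each draw at its posterior mean, so sampling noise can only raise an arm's score, never lower it. The abstract does not specify the paper's exact rule, so the clipping below is an assumption for illustration:

```python
import random

def optimistic_beta_sample(alpha, beta):
    """Draw from Beta(alpha, beta), floored at the posterior mean
    alpha / (alpha + beta). Illustrative 'optimistic' modification;
    the paper's exact construction may differ."""
    mean = alpha / (alpha + beta)
    return max(random.betavariate(alpha, beta), mean)
```

Plugging this in place of a plain posterior draw leaves arm selection otherwise unchanged while removing pessimistic samples.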

A comparison study with various state-of-the-art algorithms is also included.
