Abstract
In recent years, the multi-armed bandit problem has regained popularity, especially in the case with covariates, owing to new applications in customized services such as personalized medicine. To deal with the bandit problem with covariates, a policy called binned subsample mean comparison is introduced, which decomposes the original problem into a collection of suitable classic bandit problems. The growth rate of the regret is studied in a setting where the reward of each arm depends on observable covariates. When rewards follow an exponential family, it is shown that the regret of the proposed method achieves a nearly optimal growth rate. Simulations show that the proposed policy performs competitively compared with other policies.
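To make the decomposition concrete, the sketch below partitions a scalar covariate space [0, 1) into equal-width bins and runs an independent subsample-mean-comparison-style bandit inside each bin. This is a minimal illustration, not the authors' exact policy: the class name, the bin count, the leader/challenger rule, and the specific subsample comparison are all hypothetical simplifications chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)


class BinnedSubsampleMeanBandit:
    """Illustrative binned policy: the covariate space [0, 1) is split into
    equal-width bins, and each bin holds its own classic bandit problem
    solved by a subsample-mean-comparison-style rule. Details are
    assumptions for this sketch, not the paper's exact specification."""

    def __init__(self, n_arms, n_bins):
        self.n_arms = n_arms
        self.n_bins = n_bins
        # rewards[bin][arm] is the list of rewards observed for that arm
        # when the covariate fell into that bin.
        self.rewards = [[[] for _ in range(n_arms)] for _ in range(n_bins)]

    def _bin(self, x):
        # Map a scalar covariate in [0, 1) to its bin index.
        return min(int(x * self.n_bins), self.n_bins - 1)

    def choose_arm(self, x):
        b = self._bin(x)
        hist = self.rewards[b]
        # Play each arm once in this bin before comparing subsamples.
        for a in range(self.n_arms):
            if len(hist[a]) == 0:
                return a
        # Leader = arm with the most observations in this bin.
        leader = max(range(self.n_arms), key=lambda a: len(hist[a]))
        best, best_mean = leader, -np.inf
        for a in range(self.n_arms):
            if a == leader:
                continue
            n_a = len(hist[a])
            # SMC-style step: compare the challenger's mean against the
            # mean of a same-size random subsample of the leader's rewards.
            sub = rng.choice(hist[leader], size=n_a, replace=False)
            mean_a = np.mean(hist[a])
            if mean_a >= np.mean(sub) and mean_a > best_mean:
                best, best_mean = a, mean_a
        return best

    def update(self, x, arm, reward):
        self.rewards[self._bin(x)][arm].append(reward)


# Toy usage: two arms whose mean rewards cross as the covariate varies,
# so the better arm differs across bins.
policy = BinnedSubsampleMeanBandit(n_arms=2, n_bins=5)
for t in range(2000):
    x = rng.random()
    arm = policy.choose_arm(x)
    mean = x if arm == 0 else 1.0 - x  # arm 0 is better for large x
    policy.update(x, arm, rng.normal(mean, 0.1))
```

Binning turns the covariate-dependent problem into finitely many stationary bandit problems, which is what lets per-bin nonparametric comparisons (rather than a global regression model) drive the arm selection.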
Original language | English |
---|---|
Pages (from-to) | 402-413 |
Number of pages | 12 |
Journal | Journal of Statistical Planning and Inference |
Volume | 211 |
DOIs | |
Publication status | Published - Mar 2021 |
Keywords
- Efficient policy
- Multi-armed bandit problem
- Nonparametric solution
- Subsample comparisons