Abstract
In recent years, the multi-armed bandit problem has regained popularity, especially in the case with covariates, owing to new applications in customized services such as personalized medicine. To deal with the bandit problem with covariates, a policy called binned subsample mean comparison is introduced, which decomposes the original problem into a set of suitably chosen classical bandit problems. The growth rate of the regret is studied in a setting where the reward of each arm depends on observable covariates. When the rewards follow an exponential family, the regret of the proposed method is shown to achieve a nearly optimal growth rate. Simulations show that the proposed policy performs competitively compared with other policies.
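The abstract only names the policy, so the following is a rough, unofficial sketch of one plausible reading of a binned subsample-mean-comparison rule, not the paper's exact algorithm: partition the covariate space [0, 1] into equal-width bins and, within each bin, let each challenger arm compete against the most-pulled arm (the leader) by comparing its full-sample mean to the means of same-size subsamples of the leader's rewards. The class name `BinnedSMC`, the equal-width binning, and this particular comparison rule are all assumptions made for illustration.

```python
import numpy as np


class BinnedSMC:
    """Illustrative sketch of a binned subsample-mean-comparison policy.

    The covariate space [0, 1] is split into n_bins equal-width bins;
    each bin runs its own mean-comparison bandit over n_arms arms.
    """

    def __init__(self, n_arms, n_bins):
        self.n_arms = n_arms
        self.n_bins = n_bins
        # rewards[b][k] holds the rewards observed for arm k in bin b.
        self.rewards = [[[] for _ in range(n_arms)] for _ in range(n_bins)]

    def _bin(self, x):
        # Map a covariate x in [0, 1] to its bin index.
        return min(int(x * self.n_bins), self.n_bins - 1)

    def select_arm(self, x):
        b = self._bin(x)
        hist = self.rewards[b]
        # Pull each arm once within the bin before any comparison.
        for k in range(self.n_arms):
            if not hist[k]:
                return k
        counts = [len(h) for h in hist]
        leader = int(np.argmax(counts))
        leader_rewards = np.asarray(hist[leader])
        # A challenger beats the leader if its full-sample mean is at
        # least the mean of some non-overlapping same-size subsample of
        # the leader's rewards (simplified comparison rule).
        for k in range(self.n_arms):
            if k == leader:
                continue
            m = counts[k]
            challenger_mean = np.mean(hist[k])
            for start in range(0, counts[leader] - m + 1, m):
                if leader_rewards[start:start + m].mean() <= challenger_mean:
                    return k
        return leader

    def update(self, x, arm, reward):
        self.rewards[self._bin(x)][arm].append(reward)


if __name__ == "__main__":
    # Hypothetical demo: two arms whose Bernoulli means cross in x.
    rng = np.random.default_rng(0)
    policy = BinnedSMC(n_arms=2, n_bins=5)
    for _ in range(1000):
        x = rng.random()
        arm = policy.select_arm(x)
        p = 0.3 + 0.4 * x if arm == 0 else 0.7 - 0.4 * x
        policy.update(x, arm, rng.binomial(1, p))
```

Binning turns the covariate-dependent problem into independent classical bandits, one per bin, so any within-bin comparison rule can be swapped in; the subsample comparison used here is only one simplified choice.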
Original language | English
---|---
Pages (from-to) | 402-413
Number of pages | 12
Journal | Journal of Statistical Planning and Inference
Volume | 211
DOI |
Publication status | Published - March 2021