Optimal Myopic Policy for Restless Bandit: A Perspective of Eigendecomposition

Kehao Wang, Jihong Yu*, Lin Chen, Pan Zhou, Moe Z. Win

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

This paper considers the restless multiarmed bandit (RMAB) problem involving partially observed Markov decision processes with heterogeneous state transition matrices in discrete time slots. Due to the notorious PSPACE-hardness of the generic RMAB, we turn to the myopic policy, i.e., scheduling the best processes each time, since the myopic policy has linear computation complexity and is widely adopted in practice. Intuitively, the myopic policy is not optimal because it only exploits information without exploring. In this paper, to show the optimality of the myopic policy, we first generalize the concept of first-order stochastic dominance ordering and then decompose each state transition matrix into a weighted sum of a series of eigenmatrices, in which the weights are determined by the corresponding eigenvalues. Furthermore, we identify two sets of sufficient conditions on both eigenvalues and eigenmatrices to guarantee the optimality of the myopic policy for two instances, respectively. The theoretical results provide some insights into the structure of the optimal policy for the generic RMAB.
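The eigendecomposition the abstract refers to can be illustrated with a minimal sketch: a diagonalizable transition matrix P satisfies P = Σᵢ λᵢ Eᵢ, where each eigenmatrix Eᵢ is the outer product of the corresponding right and left eigenvectors. The matrix below is a hypothetical example, not one from the paper.

```python
import numpy as np

# Hypothetical 2x2 state transition matrix of one Markov process
# (rows sum to 1; values are illustrative, not taken from the paper).
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Eigendecomposition: P = V @ diag(lam) @ inv(V)
lam, V = np.linalg.eig(P)
W = np.linalg.inv(V)  # rows of W are the left eigenvectors

# Eigenmatrices E_i = v_i w_i^T, so that P = sum_i lam_i * E_i
E = [np.outer(V[:, i], W[i, :]) for i in range(len(lam))]
P_reconstructed = sum(l * Ei for l, Ei in zip(lam, E))

assert np.allclose(P, P_reconstructed)
```

For a stochastic matrix the largest eigenvalue is always 1 (with the all-ones right eigenvector); the remaining eigenvalues govern how fast the belief about the hidden state mixes, which is why conditions on eigenvalues and eigenmatrices can constrain the policy structure.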

Original language: English
Pages (from-to): 420-433
Number of pages: 14
Journal: IEEE Journal on Selected Topics in Signal Processing
Volume: 16
Issue number: 3
DOI
Publication status: Published - 1 Apr 2022
