TY - JOUR
T1 - Contextual Bandits for Unbounded Context Distributions
AU - Zhao, Puning
AU - Fan, Rongfei
AU - Wang, Shaowei
AU - Shen, Li
AU - Zhang, Qixin
AU - Ke, Zong
AU - Zheng, Tianhang
N1 - Publisher Copyright:
© 2025 by the author(s).
PY - 2025
Y1 - 2025
N2 - The nonparametric contextual bandit is an important model of sequential decision-making problems. Under the α-Tsybakov margin condition, existing research has established a regret bound of (formula presented) for bounded supports. However, the optimal regret with unbounded contexts has not been analyzed. The challenge of solving contextual bandit problems with unbounded support is to achieve the exploration-exploitation tradeoff and the bias-variance tradeoff simultaneously. In this paper, we solve the nonparametric contextual bandit problem with unbounded contexts. We propose two nearest-neighbor methods combined with UCB exploration. The first method uses a fixed k. Our analysis shows that this method achieves minimax-optimal regret under a weak margin condition and relatively light-tailed context distributions. The second method uses an adaptive k. By a proper data-driven selection of k, this method achieves an expected regret of (formula presented), in which β is a parameter describing the tail strength. This bound matches the minimax lower bound up to logarithmic factors, indicating that the second method is approximately optimal.
AB - The nonparametric contextual bandit is an important model of sequential decision-making problems. Under the α-Tsybakov margin condition, existing research has established a regret bound of (formula presented) for bounded supports. However, the optimal regret with unbounded contexts has not been analyzed. The challenge of solving contextual bandit problems with unbounded support is to achieve the exploration-exploitation tradeoff and the bias-variance tradeoff simultaneously. In this paper, we solve the nonparametric contextual bandit problem with unbounded contexts. We propose two nearest-neighbor methods combined with UCB exploration. The first method uses a fixed k. Our analysis shows that this method achieves minimax-optimal regret under a weak margin condition and relatively light-tailed context distributions. The second method uses an adaptive k. By a proper data-driven selection of k, this method achieves an expected regret of (formula presented), in which β is a parameter describing the tail strength. This bound matches the minimax lower bound up to logarithmic factors, indicating that the second method is approximately optimal.
UR - https://www.scopus.com/pages/publications/105023582505
M3 - Conference article
AN - SCOPUS:105023582505
SN - 2640-3498
VL - 267
SP - 77487
EP - 77520
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 42nd International Conference on Machine Learning, ICML 2025
Y2 - 13 July 2025 through 19 July 2025
ER -