Distributed Robust Bandits With Efficient Communication

Ao Wang, Zhida Qin*, Lu Zheng, Dapeng Li*, Lin Gao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The Distributed Multi-Armed Bandit (DMAB) is a powerful framework for studying many network problems. The DMAB is typically studied in a paradigm where signals activate each agent with a fixed probability, and the rewards revealed to agents are assumed either to be generated from fixed and unknown distributions, i.e., stochastic rewards, or to be arbitrarily manipulated by an adversary, i.e., adversarial rewards. However, this paradigm fails to capture the dynamics and uncertainties of many real-world applications, where the signal that activates an agent may not follow any distribution, and the rewards might be partially stochastic and partially adversarial. Motivated by this, we study the asynchronous stochastic DMAB problem with adversarial corruptions, where agents are activated arbitrarily and rewards initially sampled from distributions may be corrupted by an adversary. The objectives are to simultaneously minimize the regret and the communication cost while remaining robust to corruption. To address all these issues, we propose a Robust and Distributed Active Arm Elimination algorithm, namely RDAAE, which only needs to transmit one real number (e.g., an arm index or a reward) per communication. We theoretically prove that the regret and communication cost degrade smoothly as the corruption level increases.
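To illustrate the active-arm-elimination idea the abstract builds on, the sketch below shows a minimal single-agent eliminator whose confidence radius is widened by an assumed corruption budget, so that a bounded adversary cannot cause a good arm to be eliminated. This is only an illustrative sketch under those assumptions; it is not the paper's RDAAE, which additionally handles arbitrary agent activations and distributed, one-number-per-message communication.

```python
import math

def active_arm_elimination(reward_fns, horizon, corruption_budget=0.0):
    """Illustrative (hypothetical) active arm elimination with a corruption
    slack term.  reward_fns: list of no-argument callables returning rewards
    in [0, 1]; corruption_budget: assumed bound on the total corruption."""
    k = len(reward_fns)
    active = set(range(k))
    pulls = [0] * k      # number of times each arm was pulled
    sums = [0.0] * k     # cumulative reward per arm
    t = 0
    while t < horizon and len(active) > 1:
        # pull every surviving arm once per round
        for arm in list(active):
            sums[arm] += reward_fns[arm]()
            pulls[arm] += 1
            t += 1
        # confidence radius, widened by the assumed corruption budget
        def radius(a):
            return (math.sqrt(2.0 * math.log(max(t, 2)) / pulls[a])
                    + corruption_budget / pulls[a])
        means = {a: sums[a] / pulls[a] for a in active}
        # eliminate arms whose upper bound falls below the best lower bound
        best_lcb = max(means[a] - radius(a) for a in active)
        active = {a for a in active if means[a] + radius(a) >= best_lcb}
    # commit to the best surviving arm
    return max(active, key=lambda a: sums[a] / max(pulls[a], 1))
```

With two deterministic arms paying 0.9 and 0.1, the sketch eliminates the weaker arm once the shrinking confidence intervals separate, then commits to arm 0.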

Original language: English
Pages (from-to): 1586-1598
Number of pages: 13
Journal: IEEE Transactions on Network Science and Engineering
Volume: 10
Issue number: 3
DOIs
Publication status: Published - 1 May 2023

Keywords

  • Adversarial corruptions
  • Cooperation
  • Distributed multi-agent bandit (DMAB)
  • Robust learning
