Chiron: A Robustness-Aware Incentive Scheme for Edge Learning via Hierarchical Reinforcement Learning

Yi Liu, Song Guo*, Yufeng Zhan*, Leijie Wu, Zicong Hong, Qihua Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Over the past few years, edge learning has achieved significant success in mobile edge networks, yet few works have designed incentive mechanisms that motivate edge nodes to participate in edge learning. Moreover, most existing works consider only myopic optimization and assume that all edge nodes are honest, so they lack long-term sustainability and offer no assurance of final performance. In this paper, we propose Chiron, an incentive-driven, Byzantine-resistant, long-term mechanism based on hierarchical reinforcement learning (HRL). First, our optimization goal includes both learning-algorithm performance criteria (i.e., global accuracy) and system criteria (i.e., resource consumption), which together aim to improve edge learning performance under a given resource budget. Second, we propose a three-layer HRL architecture that handles long-term optimization, short-term optimization, and Byzantine resistance, respectively. Finally, we conduct experiments on various edge learning tasks to demonstrate the superiority of the proposed approach. Specifically, our system successfully excludes malicious and lazy nodes from edge learning participation and achieves 14.96% higher accuracy and 12.66% higher total utility than state-of-the-art methods under the same budget limit.
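The three-layer decomposition described above can be caricatured in a toy sketch. All names here are illustrative assumptions, not the paper's actual algorithm: the top layer spreads the total budget across rounds (a placeholder for the learned long-term policy), the middle layer splits each round's budget among admitted nodes by contribution quality, and the bottom layer excludes nodes whose updates deviate strongly from the median.

```python
def flag_byzantine(updates, threshold=3.0):
    """Bottom layer: flag nodes whose scalar update deviates strongly
    from the median (a simple stand-in for a Byzantine filter)."""
    med = sorted(updates)[len(updates) // 2]
    devs = [abs(u - med) for u in updates]
    mad = sorted(devs)[len(devs) // 2] or 1e-9  # median absolute deviation
    return {i for i, d in enumerate(devs) if d / mad > threshold}

def allocate_payments(qualities, round_budget, excluded):
    """Middle layer: split the per-round budget among non-excluded nodes
    in proportion to their reported contribution quality."""
    total = sum(q for i, q in enumerate(qualities) if i not in excluded)
    return [0.0 if i in excluded else round_budget * q / total
            for i, q in enumerate(qualities)]

def run_training(total_budget, rounds, updates_per_round, qualities):
    """Top layer: spread the total budget evenly across rounds
    (a placeholder for a learned long-term policy)."""
    per_round = total_budget / rounds
    history = []
    for r in range(rounds):
        excluded = flag_byzantine(updates_per_round[r])
        history.append(allocate_payments(qualities, per_round, excluded))
    return history
```

In the actual system each layer would be a trained RL policy rather than a fixed rule; the sketch only shows how the decisions nest.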

Original language: English
Pages (from-to): 8508-8524
Number of pages: 17
Journal: IEEE Transactions on Mobile Computing
Volume: 23
Issue number: 8
DOIs
Publication status: Published - 2024

Keywords

  • Deep reinforcement learning
  • edge learning
  • incentive mechanism
  • mobile edge computing
