Abstract
<italic>Federated Learning (FL)</italic> has achieved state-of-the-art performance in training a global model in a decentralized and privacy-preserving manner. Many recent works have demonstrated that incentive mechanisms are of paramount importance for the success of FL. Existing incentive mechanisms for FL either neglect communication efficiency, or consider communication efficiency but design the incentives using non-cooperative games under the complete-information assumption, or study incentive mechanisms under incomplete information but apply only to the sequential interaction setting. We shed light on this problem from the cooperative perspective and propose an incentive mechanism for communication-efficient FL based on Nash bargaining theory. Specifically, we formulate our incentive mechanism as a one-to-many <italic>concurrent bargaining</italic> game between the aggregator and the clients, and systematically analyze the Nash bargaining solution (NBS, the game equilibrium) to design the incentive mechanism. Notably, the existing <italic>sequential bargaining</italic> approach is unsuitable for incentivizing FL due to its high (exponential) time complexity, which exacerbates the straggler problem in FL. Our formulated bargaining game is challenging due to its NP-hardness. We propose a probabilistic greedy-based client selection algorithm and derive an analytical payment solution as an approximate NBS. We prove a convergence guarantee for our incentive mechanism for communication-efficient FL. Finally, we conduct experiments over real-world datasets to evaluate the performance of our incentive mechanism.
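The abstract's probabilistic greedy client selection can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's algorithm: the per-client `scores` (e.g. derived from bargaining utilities or costs) and the selection `budget` are hypothetical inputs, and the sampling rule (greedily drawing clients with probability proportional to their score) is one generic instantiation of "probabilistic greedy" selection.

```python
import random

def probabilistic_greedy_select(scores, budget):
    """Sketch of probabilistic greedy client selection.

    scores: dict mapping client id -> nonnegative score (hypothetical,
            e.g. a utility estimate from the bargaining formulation).
    budget: maximum number of clients to select per round.

    At each step, one remaining client is sampled with probability
    proportional to its score, until the budget is exhausted.
    """
    remaining = dict(scores)
    chosen = []
    while remaining and len(chosen) < budget:
        clients = list(remaining)
        weights = [remaining[c] for c in clients]
        pick = random.choices(clients, weights=weights, k=1)[0]
        chosen.append(pick)
        del remaining[pick]  # greedy: selected clients are not re-drawn
    return chosen
```

Compared with a deterministic greedy rule (always taking the top-scoring client), the randomized draw spreads participation across clients over rounds, which is one common motivation for probabilistic variants.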
Original language | English |
---|---|
Pages (from-to) | 1-16 |
Number of pages | 16 |
Journal | IEEE Transactions on Mobile Computing |
DOIs | |
Publication status | Accepted/In press - 2024 |
Keywords
- Bargaining
- Communication Efficiency
- Computational modeling
- Convergence
- Costs
- Federated Learning
- Games
- Incentive Mechanism
- Mobile computing
- NIST
- Training