TY - GEN
T1 - Fast, Robust and Interpretable Participant Contribution Estimation for Federated Learning
AU - Wang, Yong
AU - Li, Kaiyu
AU - Luo, Yuyu
AU - Li, Guoliang
AU - Guo, Yunyan
AU - Wang, Zhuo
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - In this paper, we introduce CTFL, a fair, robust, and interpretable framework designed to estimate clients' contributions to federated learning, aiming to incentivize high-quality data providers to participate in the federation. Firstly, CTFL can precisely allocate contribution credits in a single pass of model training and inference, ensuring computational efficiency. This is accomplished by tracking the test performance gain brought by each participant through exploiting classification rules. Secondly, CTFL adheres to essential theoretical properties of an ideal contribution estimation algorithm, including symmetry, zero-element, and additivity, ensuring fair and rational estimations. Thirdly, CTFL demonstrates resilience against strategic and malicious behaviors due to carefully crafted micro and macro contribution estimation schemes. Fourthly, CTFL offers insights into participants' roles within the federation by interpreting their contribution scores through their respective frequently activated rules. Finally, CTFL integrates logical neural networks and model binarization techniques to ensure effectiveness and efficiency while preserving data privacy. Extensive experiments validate that CTFL accurately estimates contributions, significantly reducing computation time by 2-3 orders of magnitude compared to state-of-the-art methods while maintaining robustness.
AB - In this paper, we introduce CTFL, a fair, robust, and interpretable framework designed to estimate clients' contributions to federated learning, aiming to incentivize high-quality data providers to participate in the federation. Firstly, CTFL can precisely allocate contribution credits in a single pass of model training and inference, ensuring computational efficiency. This is accomplished by tracking the test performance gain brought by each participant through exploiting classification rules. Secondly, CTFL adheres to essential theoretical properties of an ideal contribution estimation algorithm, including symmetry, zero-element, and additivity, ensuring fair and rational estimations. Thirdly, CTFL demonstrates resilience against strategic and malicious behaviors due to carefully crafted micro and macro contribution estimation schemes. Fourthly, CTFL offers insights into participants' roles within the federation by interpreting their contribution scores through their respective frequently activated rules. Finally, CTFL integrates logical neural networks and model binarization techniques to ensure effectiveness and efficiency while preserving data privacy. Extensive experiments validate that CTFL accurately estimates contributions, significantly reducing computation time by 2-3 orders of magnitude compared to state-of-the-art methods while maintaining robustness.
KW - contribution estimation
KW - data valuation
KW - federated learning
KW - interpretable machine learning
UR - https://www.scopus.com/pages/publications/85195656385
U2 - 10.1109/ICDE60146.2024.00182
DO - 10.1109/ICDE60146.2024.00182
M3 - Conference contribution
AN - SCOPUS:85195656385
T3 - Proceedings - International Conference on Data Engineering
SP - 2298
EP - 2311
BT - Proceedings - 2024 IEEE 40th International Conference on Data Engineering, ICDE 2024
PB - IEEE Computer Society
T2 - 40th IEEE International Conference on Data Engineering, ICDE 2024
Y2 - 13 May 2024 through 17 May 2024
ER -