TY - GEN
T1 - Efficient and Privacy-Preserving Ranking-Based Federated Learning
AU - Liu, Tao
AU - Ren, Xuhao
AU - Wang, Yajie
AU - Wu, Huishu
AU - Zhang, Chuan
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - Recently, many works have proposed privacy-preserving schemes to address privacy issues in federated learning (FL). However, FL also faces high communication overhead because clients (e.g., mobile phones and wearable devices) have limited resources, so minimizing the communication between FL servers and clients is necessary. Although some existing works have addressed this problem, they mainly focus on reducing the upload communication from client to server, while the entire model is still transmitted in the download communication from server to client. In this paper, we propose EPRFL to address this issue. Specifically, each client uses its local data to rank the neural network parameters provided by the server, and a voting mechanism and homomorphic encryption are leveraged to aggregate and encrypt the rankings. The server then aggregates the encrypted local rankings. In addition, we use super-increasing sequences to compress and package the local rankings efficiently, further reducing communication costs. Finally, we demonstrate the security of EPRFL through security analysis and its high communication efficiency through experiments.
AB - Recently, many works have proposed privacy-preserving schemes to address privacy issues in federated learning (FL). However, FL also faces high communication overhead because clients (e.g., mobile phones and wearable devices) have limited resources, so minimizing the communication between FL servers and clients is necessary. Although some existing works have addressed this problem, they mainly focus on reducing the upload communication from client to server, while the entire model is still transmitted in the download communication from server to client. In this paper, we propose EPRFL to address this issue. Specifically, each client uses its local data to rank the neural network parameters provided by the server, and a voting mechanism and homomorphic encryption are leveraged to aggregate and encrypt the rankings. The server then aggregates the encrypted local rankings. In addition, we use super-increasing sequences to compress and package the local rankings efficiently, further reducing communication costs. Finally, we demonstrate the security of EPRFL through security analysis and its high communication efficiency through experiments.
KW - Federated learning
KW - homomorphic encryption
KW - neural network
KW - privacy-preserving
UR - http://www.scopus.com/inward/record.url?scp=85219208034&partnerID=8YFLogxK
U2 - 10.1007/978-981-96-1545-2_20
DO - 10.1007/978-981-96-1545-2_20
M3 - Conference contribution
AN - SCOPUS:85219208034
SN - 9789819615445
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 326
EP - 336
BT - Algorithms and Architectures for Parallel Processing - 24th International Conference, ICA3PP 2024, Proceedings
A2 - Zhu, Tianqing
A2 - Li, Jin
A2 - Castiglione, Aniello
PB - Springer Science and Business Media Deutschland GmbH
T2 - 24th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2024
Y2 - 29 October 2024 through 31 October 2024
ER -