TY - JOUR
T1 - EntroCFL: Entropy-Based Clustered Federated Learning With Incentive Mechanism
AU - Tu, Kaifei
AU - Wang, Xuehe
AU - Hu, Xiping
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL) has emerged as a machine learning approach for situations where the privacy of sensitive data must be protected. Within the FL framework, clients collaborate to train a shared global model on their individual data by sending model parameters to a central server, all while keeping their private data localized. Despite its advantages, FL still faces certain limitations: clients may lack the motivation to participate in training, and heterogeneous data distribution among clients can slow the model convergence rate and degrade model accuracy. In light of these considerations, we introduce entropy-based clustered FL (EntroCFL) with an incentive mechanism, a two-layer clustered FL (CFL) model that jointly addresses the incentive mechanism and model training performance issues with heterogeneous clients. In Layer I, the server designs the payments to the clients to minimize its cost, comprising the training accuracy loss and the payments to clients, based on which the clients determine their training data sizes to maximize their own utilities. In Layer II, we introduce an entropy-based clustering method that operates on the clients' strategies from Layer I. Unlike conventional CFL methods that rely solely on the cosine similarity between clients' parameter gradients, EntroCFL introduces a novel clustering discriminant that takes both the angle and the magnitude of clients' parameter gradients into consideration. Simulation experiments are conducted to compare EntroCFL with conventional methods, such as FedAvg, on the MNIST, EMNIST, and FMNIST datasets. The results validate the superiority of EntroCFL in terms of experimental accuracy, robustness, and economic efficiency.
AB - Federated learning (FL) has emerged as a machine learning approach for situations where the privacy of sensitive data must be protected. Within the FL framework, clients collaborate to train a shared global model on their individual data by sending model parameters to a central server, all while keeping their private data localized. Despite its advantages, FL still faces certain limitations: clients may lack the motivation to participate in training, and heterogeneous data distribution among clients can slow the model convergence rate and degrade model accuracy. In light of these considerations, we introduce entropy-based clustered FL (EntroCFL) with an incentive mechanism, a two-layer clustered FL (CFL) model that jointly addresses the incentive mechanism and model training performance issues with heterogeneous clients. In Layer I, the server designs the payments to the clients to minimize its cost, comprising the training accuracy loss and the payments to clients, based on which the clients determine their training data sizes to maximize their own utilities. In Layer II, we introduce an entropy-based clustering method that operates on the clients' strategies from Layer I. Unlike conventional CFL methods that rely solely on the cosine similarity between clients' parameter gradients, EntroCFL introduces a novel clustering discriminant that takes both the angle and the magnitude of clients' parameter gradients into consideration. Simulation experiments are conducted to compare EntroCFL with conventional methods, such as FedAvg, on the MNIST, EMNIST, and FMNIST datasets. The results validate the superiority of EntroCFL in terms of experimental accuracy, robustness, and economic efficiency.
KW - Clustered federated learning (CFL)
KW - entropy
KW - incentive mechanism
UR - http://www.scopus.com/inward/record.url?scp=86000378540&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2024.3472017
DO - 10.1109/JIOT.2024.3472017
M3 - Article
AN - SCOPUS:86000378540
SN - 2327-4662
VL - 12
SP - 986
EP - 1001
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 1
ER -