TY - GEN
T1 - Feed
T2 - 40th IEEE International Conference on Data Engineering, ICDE 2024
AU - Qiao, Pengpeng
AU - Zhao, Kangfei
AU - Bi, Bei
AU - Zhang, Zhiwei
AU - Yuan, Ye
AU - Wang, Guoren
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning (FL) has emerged as a paradigm for cooperatively training models among distributed clients without leaking data privacy. The performance degradation of FL on heterogeneous data has driven the development of personalized FL (PFL) solutions, where different models are built for individual clients. However, existing PFL approaches often have limited personalization in terms of modeling capability and training strategy. In this paper, we propose a novel PFL solution, Feed, that employs an enhanced shared-private model architecture and is equipped with a hybrid federated training strategy. Specifically, to model heterogeneous data for different clients, we design an ensemble-based shared encoder that generates an ensemble of embeddings, and a private decoder that adaptively aggregates these embeddings for personalized prediction. In addition, we propose a server-side hybrid federated aggregation strategy to enable effective training of the heterogeneous shared-private model. To prevent personalization degradation in local model updates, we further optimize the personalized local training on the client side by smoothing the historical encoders. Extensive experiments on MNIST/FEMNIST, CIFAR10/CIFAR100, and Yelp datasets demonstrate that Feed consistently outperforms state-of-the-art approaches.
AB - Federated learning (FL) has emerged as a paradigm for cooperatively training models among distributed clients without leaking data privacy. The performance degradation of FL on heterogeneous data has driven the development of personalized FL (PFL) solutions, where different models are built for individual clients. However, existing PFL approaches often have limited personalization in terms of modeling capability and training strategy. In this paper, we propose a novel PFL solution, Feed, that employs an enhanced shared-private model architecture and is equipped with a hybrid federated training strategy. Specifically, to model heterogeneous data for different clients, we design an ensemble-based shared encoder that generates an ensemble of embeddings, and a private decoder that adaptively aggregates these embeddings for personalized prediction. In addition, we propose a server-side hybrid federated aggregation strategy to enable effective training of the heterogeneous shared-private model. To prevent personalization degradation in local model updates, we further optimize the personalized local training on the client side by smoothing the historical encoders. Extensive experiments on MNIST/FEMNIST, CIFAR10/CIFAR100, and Yelp datasets demonstrate that Feed consistently outperforms state-of-the-art approaches.
KW - Federated Learning
KW - Heterogeneity
KW - Personalization
KW - Privacy Preservation
UR - http://www.scopus.com/inward/record.url?scp=85200488745&partnerID=8YFLogxK
U2 - 10.1109/ICDE60146.2024.00144
DO - 10.1109/ICDE60146.2024.00144
M3 - Conference contribution
AN - SCOPUS:85200488745
T3 - Proceedings - International Conference on Data Engineering
SP - 1779
EP - 1791
BT - Proceedings - 2024 IEEE 40th International Conference on Data Engineering, ICDE 2024
PB - IEEE Computer Society
Y2 - 13 May 2024 through 17 May 2024
ER -