Fractal Augmented Pre-training and Gaussian Virtual Feature Calibration for Tackling Data Heterogeneity in Federated Learning

Yan Zheng, Yanlong Zhai*, Yanglin Liu, You Li

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer reviewed

Abstract

Federated learning (FL) enables collaborative model training across multiple clients while preserving privacy. In practical settings, heterogeneous and unbalanced data distributions significantly degrade model performance. Although some work has addressed this issue, such as adding regularization terms, employing specific server aggregation strategies, and utilizing deep generative models to augment the training data, there is still a lack of efficient approaches that derive intrinsic representations of the local data to improve the global model without compromising client privacy. Through careful observation and analysis, we found that pre-training and calibrating the global model with virtual data and virtual features generated from the client data distributions can improve model generalization. In this work, we propose Virtual Data Augmented Federated Learning (FedVDA) to address this problem. Specifically, FedVDA combines unsupervised pre-training on Augmented Fractal (AF) virtual images with Gaussian Mixture Model (GMM) virtual feature calibration. By integrating color tone transformations into the virtual data generated by fractals, we bridge the gap between the virtual and client data distributions. Multi-modal feature modeling using variances on each client allows the server to efficiently calibrate the classifier with virtual features sampled in a balanced manner, reducing both computational and communication overhead. Compared to other data augmentation methods, our method directly calibrates model features, significantly improving model performance under data heterogeneity and imbalance while minimizing additional computational and communication costs. Our experiments demonstrate that FedVDA outperforms existing federated learning methods and can seamlessly integrate with other algorithms.
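The calibration step described in the abstract can be pictured with a minimal sketch, which is not the authors' implementation: it assumes each client fits diagonal-covariance GMMs to its penultimate-layer features and shares only the mixture parameters, and it stands in a logistic-regression head for the global classifier. The function names (`client_feature_stats`, `server_calibrate`), the feature dimensionality, and the number of mixture components are illustrative assumptions.

```python
# Hypothetical sketch of GMM virtual feature calibration (not the FedVDA code):
# clients share per-class GMM parameters; the server samples class-balanced
# virtual features from them and refits a classifier head.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

FEAT_DIM, N_COMPONENTS = 64, 2  # assumed feature size / number of GMM modes


def client_feature_stats(features, labels):
    """Fit one diagonal-covariance GMM per class on local penultimate features."""
    stats = {}
    for c in np.unique(labels):
        gmm = GaussianMixture(n_components=N_COMPONENTS, covariance_type="diag")
        gmm.fit(features[labels == c])
        # Only the mixture parameters (weights, means, variances) leave the client.
        stats[int(c)] = (gmm.weights_, gmm.means_, gmm.covariances_)
    return stats


def server_calibrate(all_client_stats, samples_per_class=200, seed=0):
    """Sample balanced virtual features from the collected GMMs and refit the head."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for stats in all_client_stats:
        for c, (weights, means, variances) in stats.items():
            comp = rng.choice(len(weights), size=samples_per_class, p=weights)
            virtual = rng.normal(means[comp], np.sqrt(variances[comp]))
            xs.append(virtual)
            ys.append(np.full(samples_per_class, c))
    X, y = np.vstack(xs), np.concatenate(ys)
    head = LogisticRegression(max_iter=1000).fit(X, y)  # stand-in classifier head
    return head


if __name__ == "__main__":
    # Toy usage with synthetic "client features".
    rng = np.random.default_rng(1)
    feats = rng.normal(size=(500, FEAT_DIM))
    labels = rng.integers(0, 10, size=500)
    head = server_calibrate([client_feature_stats(feats, labels)])
    print(head.predict(feats[:5]))
```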

Original language: English
Title of host publication: 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9798350359312
DOI
Publication status: Published - 2024
Event: 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Yokohama, Japan
Duration: 30 Jun 2024 - 5 Jul 2024

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Conference

Conference: 2024 International Joint Conference on Neural Networks, IJCNN 2024
Country/Territory: Japan
City: Yokohama
Period: 30/06/24 - 5/07/24
