TY - GEN
T1 - Sylva
T2 - 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025
AU - Qi, Tianyu
AU - Xue, Lei
AU - Zhan, Yufeng
AU - Ma, Xiaobo
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/11/22
Y1 - 2025/11/22
N2 - The growing adoption of large pre-trained models in edge computing has made deploying model inference on mobile clients both practical and popular. These devices are inherently vulnerable to direct adversarial attacks, which pose a substantial threat to the robustness and security of deployed models. Federated adversarial training (FAT) has emerged as an effective solution to enhance model robustness while preserving client privacy. However, FAT typically produces a generalized global model that struggles to accommodate the diverse, heterogeneous data distributions across clients, yielding insufficiently personalized performance, and it also incurs substantial communication overhead during training. In this paper, we propose Sylva, a personalized collaborative adversarial training framework designed to deliver customized defense models for each client through a two-phase process. In Phase 1, Sylva employs LoRA for local adversarial fine-tuning, enabling clients to personalize model robustness while drastically reducing communication costs by uploading only LoRA parameters during federated aggregation. In Phase 2, a game-based layer selection strategy is introduced to enhance accuracy on benign data, further refining the personalized model. This approach ensures that each client receives a tailored defense model that balances robustness and accuracy effectively. Extensive experiments on benchmark datasets demonstrate that Sylva can achieve up to 50× improvements in communication efficiency compared to state-of-the-art algorithms, while achieving up to 29.5% and 50.4% enhancements in adversarial robustness and benign accuracy, respectively.
AB - The growing adoption of large pre-trained models in edge computing has made deploying model inference on mobile clients both practical and popular. These devices are inherently vulnerable to direct adversarial attacks, which pose a substantial threat to the robustness and security of deployed models. Federated adversarial training (FAT) has emerged as an effective solution to enhance model robustness while preserving client privacy. However, FAT typically produces a generalized global model that struggles to accommodate the diverse, heterogeneous data distributions across clients, yielding insufficiently personalized performance, and it also incurs substantial communication overhead during training. In this paper, we propose Sylva, a personalized collaborative adversarial training framework designed to deliver customized defense models for each client through a two-phase process. In Phase 1, Sylva employs LoRA for local adversarial fine-tuning, enabling clients to personalize model robustness while drastically reducing communication costs by uploading only LoRA parameters during federated aggregation. In Phase 2, a game-based layer selection strategy is introduced to enhance accuracy on benign data, further refining the personalized model. This approach ensures that each client receives a tailored defense model that balances robustness and accuracy effectively. Extensive experiments on benchmark datasets demonstrate that Sylva can achieve up to 50× improvements in communication efficiency compared to state-of-the-art algorithms, while achieving up to 29.5% and 50.4% enhancements in adversarial robustness and benign accuracy, respectively.
KW - Adversarial training
KW - Fine-tuning
KW - Personalized federated learning
KW - Pre-trained models
UR - https://www.scopus.com/pages/publications/105023877472
U2 - 10.1145/3719027.3744805
DO - 10.1145/3719027.3744805
M3 - Conference contribution
AN - SCOPUS:105023877472
T3 - CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
SP - 1679
EP - 1693
BT - CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery, Inc
Y2 - 13 October 2025 through 17 October 2025
ER -