Sylva: Tailoring Personalized Adversarial Defense in Pre-trained Models via Collaborative Fine-tuning

  • Tianyu Qi
  • Lei Xue*
  • Yufeng Zhan
  • Xiaobo Ma

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The growing adoption of large pre-trained models in edge computing has made deploying model inference on mobile clients both practical and popular. These devices are inherently vulnerable to direct adversarial attacks, which pose a substantial threat to the robustness and security of deployed models. Federated adversarial training (FAT) has emerged as an effective solution to enhance model robustness while preserving client privacy. However, FAT frequently produces a generalized global model, which struggles to address the diverse and heterogeneous data distributions across clients, resulting in insufficiently personalized performance, while also encountering substantial communication challenges during the training process. In this paper, we propose Sylva, a personalized collaborative adversarial training framework designed to deliver customized defense models for each client through a two-phase process. In Phase 1, Sylva employs LoRA for local adversarial fine-tuning, enabling clients to personalize model robustness while drastically reducing communication costs by uploading only LoRA parameters during federated aggregation. In Phase 2, a game-based layer selection strategy is introduced to enhance accuracy on benign data, further refining the personalized model. This approach ensures that each client receives a tailored defense model that balances robustness and accuracy effectively. Extensive experiments on benchmark datasets demonstrate that Sylva can achieve up to 50× improvements in communication efficiency compared to state-of-the-art algorithms, while achieving up to 29.5% and 50.4% enhancements in adversarial robustness and benign accuracy, respectively.
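The abstract's Phase 1 hinges on a simple communication pattern: clients fine-tune only low-rank LoRA adapters and upload just those matrices for federated averaging. The paper's actual training procedure is not reproduced here; the following is a minimal illustrative sketch of that aggregation step, with all dimensions, client counts, and values hypothetical (the local adversarial training loop is stubbed out with random matrices).

```python
import numpy as np

# Hypothetical setup: a frozen pre-trained weight W of shape (d_out, d_in);
# each client trains only a low-rank adapter delta_W = B @ A with rank r << d.
d_out, d_in, r = 768, 768, 8
rng = np.random.default_rng(0)

def new_adapter():
    # Standard LoRA init: A small random, B zero, so the adapter starts as a no-op.
    return {"A": rng.normal(scale=0.01, size=(r, d_in)),
            "B": np.zeros((d_out, r))}

# Three clients; random values stand in for locally adversarially-trained weights.
clients = []
for _ in range(3):
    adapter = new_adapter()
    adapter["B"] = rng.normal(scale=0.01, size=(d_out, r))  # placeholder for training
    clients.append(adapter)

def fedavg_lora(adapters):
    # Server-side aggregation: average only the uploaded LoRA matrices,
    # never the frozen base weights.
    return {k: np.mean([a[k] for a in adapters], axis=0) for k in ("A", "B")}

global_adapter = fedavg_lora(clients)

# Per-round upload cost: LoRA parameters vs. the full weight matrix.
lora_params = r * (d_in + d_out)
full_params = d_out * d_in
print(f"uploaded {lora_params} params instead of {full_params} "
      f"({full_params / lora_params:.0f}x fewer)")
```

With these illustrative dimensions each client uploads 12,288 values per layer instead of 589,824 (48× fewer), which is the kind of saving behind the communication-efficiency gains the abstract reports.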

Original language: English
Title of host publication: CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery, Inc
Pages: 1679-1693
Number of pages: 15
ISBN (Electronic): 9798400715259
DOIs
Publication status: Published - 22 Nov 2025
Event: 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025 - Taipei, Taiwan, Province of China
Duration: 13 Oct 2025 - 17 Oct 2025

Publication series

Name: CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security

Conference

Conference: 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 13/10/25 - 17/10/25

Keywords

  • Adversarial training
  • Fine-tuning
  • Personalized federated learning
  • Pre-trained models

