Privacy-Aware Federated Fine-Tuning of Large Pretrained Models With Just Forward Propagation

  • Ke Xing
  • Yanjie Dong*
  • Xiping Hu*
  • Victor C.M. Leung
  • M. Jamal Deen
  • Song Guo

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

With the extraordinary success of generative artificial intelligence, large pretrained models (LPMs) have been widely used to achieve human-level performance. Despite their one-shot capability, it is often preferable to fine-tune LPMs for domain-specific downstream tasks. Therefore, a federated learning system is leveraged to fine-tune the large pretrained models by concurrently using multiple distributed clients and their local datasets. Since first-order fine-tuning methods suffer from high computational and memory costs due to backward propagation, we are motivated to propose a federated zeroth-order fine-tuning method that requires only forward propagation. Moreover, we leverage differential privacy to further preserve the data privacy of local clients. Experimental results illustrate that our proposed federated zeroth-order method reduces memory consumption while retaining testing accuracy similar to state-of-the-art benchmarks.
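The abstract describes the approach only at a high level. The following is a minimal sketch, not the authors' implementation, of how forward-only (two-point, SPSA-style) zeroth-order gradient estimation can be combined with clipped, Gaussian-noised client updates in a federated averaging loop. All names (zo_gradient, client_update, federated_round) and hyperparameters (mu, dp_sigma, clip) are illustrative assumptions, and the toy linear-regression data stands in for a real LPM fine-tuning task.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(loss_fn, params, data, mu=1e-3, seed=0):
    """Two-point zeroth-order gradient estimate: perturb the parameters
    along a random direction and use the finite difference of the two
    forward-pass losses. No backward propagation is needed."""
    g = np.random.default_rng(seed)
    z = g.standard_normal(params.shape)
    loss_plus = loss_fn(params + mu * z, data)
    loss_minus = loss_fn(params - mu * z, data)
    return (loss_plus - loss_minus) / (2.0 * mu) * z

def client_update(loss_fn, params, data, steps=5, lr=1e-2,
                  clip=1.0, dp_sigma=0.05):
    """Local zeroth-order fine-tuning, then clip the model update and add
    Gaussian noise (a hypothetical differential-privacy mechanism)."""
    local = params.copy()
    for t in range(steps):
        local -= lr * zo_gradient(loss_fn, local, data, seed=t)
    update = local - params
    update *= min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    update += rng.normal(0.0, dp_sigma * clip, size=update.shape)
    return update

def federated_round(loss_fn, global_params, client_datasets):
    """One round of federated averaging over the noisy client updates."""
    updates = [client_update(loss_fn, global_params, d) for d in client_datasets]
    return global_params + np.mean(updates, axis=0)

# Toy example: fit a linear model y = X @ w on synthetic client data.
def mse_loss(w, data):
    X, y = data
    return float(np.mean((X @ w - y) ** 2))

true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.standard_normal((32, 3))
    clients.append((X, X @ true_w + 0.01 * rng.standard_normal(32)))

w = np.zeros(3)
for _ in range(100):
    w = federated_round(mse_loss, w, clients)
print("estimated weights:", w)
```

In this sketch each client only ever evaluates the loss (forward propagation), the per-client noise scale dp_sigma trades privacy against accuracy, and the server aggregates updates exactly as in standard federated averaging.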

Keywords

  • differential privacy
  • federated learning
  • parameter fine-tuning
  • zeroth-order optimization

