TY - JOUR
T1 - Privacy-Aware Federated Fine-Tuning of Large Pretrained Models With Just Forward Propagation
AU - Xing, Ke
AU - Dong, Yanjie
AU - Hu, Xiping
AU - Leung, Victor C.M.
AU - Deen, M. Jamal
AU - Guo, Song
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - With the extraordinary success of generative artificial intelligence, large pretrained models (LPMs) have been widely used to achieve human-level performance. Despite their one-shot capability, it is often preferable to fine-tune LPMs for domain-specific downstream tasks. Therefore, a federated learning system is leveraged to fine-tune LPMs, enabling the concurrent use of multiple distributed clients and their local datasets. Since first-order fine-tuning methods suffer from high computational and memory costs due to backward propagation, we are motivated to propose a federated zeroth-order fine-tuning method that uses only forward propagation. Moreover, we leverage differential privacy to further preserve the data privacy of local clients. Experimental results illustrate that the proposed federated zeroth-order method reduces memory usage while retaining testing accuracy comparable to that of state-of-the-art benchmarks.
AB - With the extraordinary success of generative artificial intelligence, large pretrained models (LPMs) have been widely used to achieve human-level performance. Despite their one-shot capability, it is often preferable to fine-tune LPMs for domain-specific downstream tasks. Therefore, a federated learning system is leveraged to fine-tune LPMs, enabling the concurrent use of multiple distributed clients and their local datasets. Since first-order fine-tuning methods suffer from high computational and memory costs due to backward propagation, we are motivated to propose a federated zeroth-order fine-tuning method that uses only forward propagation. Moreover, we leverage differential privacy to further preserve the data privacy of local clients. Experimental results illustrate that the proposed federated zeroth-order method reduces memory usage while retaining testing accuracy comparable to that of state-of-the-art benchmarks.
KW - differential privacy
KW - Federated learning
KW - parameter fine-tuning
KW - zeroth-order optimization
UR - https://www.scopus.com/pages/publications/105009593344
U2 - 10.1109/ICASSP49660.2025.10889811
DO - 10.1109/ICASSP49660.2025.10889811
M3 - Conference article
AN - SCOPUS:105009593344
SN - 0736-7791
JO - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
JF - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
T2 - 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025
Y2 - 6 April 2025 through 11 April 2025
ER -