TY - JOUR
T1 - Balancing Differential Privacy and Utility
T2 - A Relevance-Based Adaptive Private Fine-Tuning Framework for Language Models
AU - Wang, Naiyu
AU - Wang, Shen
AU - Li, Meng
AU - Wu, Longfei
AU - Zhang, Zijian
AU - Guan, Zhitao
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Differential privacy (DP) has proven to be an effective, universal solution for privacy protection in language models. Nevertheless, introducing DP incurs significant computational overhead. One promising approach to this challenge is to integrate Parameter-Efficient Fine-Tuning (PEFT) with DP, leveraging the memory-efficient characteristics of PEFT to reduce the substantial memory consumption of DP. Given that fine-tuning aims to quickly adapt pretrained models to downstream tasks, it is crucial to balance privacy protection with model utility to avoid excessive performance compromise. In this paper, we propose a Relevance-based Adaptive Private Fine-Tuning (Rap-FT) framework, the first approach designed to mitigate the model utility loss caused by DP perturbations in the PEFT context and to achieve a balance between differential privacy and model utility. Specifically, we introduce an enhanced layer-wise relevance propagation process to analyze the relevance of trainable parameters, which can be adapted to the three major categories of PEFT methods. Based on the generated relevance map, we partition the parameter space along its dimensions and develop an adaptive gradient perturbation strategy that adjusts the added noise to mitigate the adverse impact of the perturbations. Extensive experimental evaluations demonstrate that our Rap-FT framework improves the utility of the fine-tuned model compared to baseline differentially private fine-tuning methods, while maintaining a comparable level of privacy protection.
AB - Differential privacy (DP) has proven to be an effective, universal solution for privacy protection in language models. Nevertheless, introducing DP incurs significant computational overhead. One promising approach to this challenge is to integrate Parameter-Efficient Fine-Tuning (PEFT) with DP, leveraging the memory-efficient characteristics of PEFT to reduce the substantial memory consumption of DP. Given that fine-tuning aims to quickly adapt pretrained models to downstream tasks, it is crucial to balance privacy protection with model utility to avoid excessive performance compromise. In this paper, we propose a Relevance-based Adaptive Private Fine-Tuning (Rap-FT) framework, the first approach designed to mitigate the model utility loss caused by DP perturbations in the PEFT context and to achieve a balance between differential privacy and model utility. Specifically, we introduce an enhanced layer-wise relevance propagation process to analyze the relevance of trainable parameters, which can be adapted to the three major categories of PEFT methods. Based on the generated relevance map, we partition the parameter space along its dimensions and develop an adaptive gradient perturbation strategy that adjusts the added noise to mitigate the adverse impact of the perturbations. Extensive experimental evaluations demonstrate that our Rap-FT framework improves the utility of the fine-tuned model compared to baseline differentially private fine-tuning methods, while maintaining a comparable level of privacy protection.
KW - Differential privacy
KW - language models
KW - layer-wise relevance
KW - parameter efficient fine-tuning
UR - http://www.scopus.com/inward/record.url?scp=85212317023&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2024.3516579
DO - 10.1109/TIFS.2024.3516579
M3 - Article
AN - SCOPUS:85212317023
SN - 1556-6013
VL - 20
SP - 207
EP - 220
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -