Balancing Differential Privacy and Utility: A Relevance-Based Adaptive Private Fine-Tuning Framework for Language Models

Naiyu Wang, Shen Wang, Meng Li, Longfei Wu, Zijian Zhang, Zhitao Guan*, Liehuang Zhu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Differential privacy (DP) has been proven to be an effective universal solution for privacy protection in language models. Nevertheless, the introduction of DP incurs significant computational overhead. One promising approach to this challenge is to integrate Parameter Efficient Fine-Tuning (PEFT) with DP, leveraging the memory-efficient characteristics of PEFT to reduce the substantial memory consumption of DP. Given that fine-tuning aims to quickly adapt pretrained models to downstream tasks, it is crucial to balance privacy protection with model utility to avoid excessive performance compromise. In this paper, we propose a Relevance-based Adaptive Private Fine-Tuning (Rap-FT) framework, the first approach designed to mitigate the model utility loss caused by DP perturbations in the PEFT setting and to strike a balance between differential privacy and model utility. Specifically, we introduce an enhanced layer-wise relevance propagation process to analyze the relevance of trainable parameters, which can be adapted to the three major categories of PEFT methods. Based on the generated relevance map, we partition the parameter space along its dimensions and develop an adaptive gradient perturbation strategy that adjusts the added noise to mitigate the adverse impact of perturbation. Extensive experimental evaluations demonstrate that our Rap-FT framework improves the utility of the fine-tuned model over baseline differentially private fine-tuning methods, while maintaining a comparable level of privacy protection.
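The adaptive perturbation idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the scalar per-parameter relevance scores, and the noise-redistribution rule (scaling the noise standard deviation inversely with relevance, normalized by the mean relevance so the overall noise level stays comparable) are all assumptions made for illustration, layered on standard DP-SGD-style clipping and Gaussian noise.

```python
import numpy as np

def relevance_adaptive_perturb(grads, relevance, clip_norm=1.0, base_sigma=1.0, rng=None):
    """Hypothetical sketch of relevance-adaptive gradient perturbation.

    grads:     dict mapping parameter name -> gradient array
    relevance: dict mapping parameter name -> scalar relevance score (assumed in (0, 1])
    Higher-relevance parameters receive less noise, lower-relevance ones more;
    normalizing by the mean relevance keeps the average noise magnitude roughly
    comparable to uniform DP-SGD noise (a simplifying assumption, not the
    paper's accounting).
    """
    rng = rng or np.random.default_rng()

    # Clip the concatenated gradient to bound sensitivity, as in DP-SGD.
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads.values()))
    scale = min(1.0, clip_norm / (total_norm + 1e-12))

    mean_rel = np.mean([relevance[name] for name in grads])
    noisy = {}
    for name, g in grads.items():
        g_clipped = g * scale
        # Redistribute noise across parameter groups by relevance:
        # important dimensions are perturbed less, unimportant ones more.
        sigma = base_sigma * clip_norm * (mean_rel / max(relevance[name], 1e-6))
        noisy[name] = g_clipped + rng.normal(0.0, sigma, size=g.shape)
    return noisy
```

In an actual DP fine-tuning loop this step would replace the uniform noise addition applied to the PEFT parameters (e.g., adapter or LoRA weights) after per-example clipping, with the privacy budget tracked by a standard accountant.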

Original language: English
Pages (from-to): 207-220
Number of pages: 14
Journal: IEEE Transactions on Information Forensics and Security
Volume: 20
DOIs
Publication status: Published - 2025

Keywords

  • Differential privacy
  • language models
  • layer-wise relevance
  • parameter efficient fine-tuning
