Accurate differentially private deep learning on the edge

Rui Han, Dong Li, Junyan Ouyang, Chi Harold Liu*, Guoren Wang, Dapeng Wu, Lydia Y. Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Deep learning (DL) models are increasingly built on federated edge participants holding local data. To enable insight extraction without the risk of information leakage, DL training is usually combined with differential privacy (DP). The core idea is to trade off learning accuracy by adding statistically calibrated noise, particularly to the local gradients of edge learners, during model training. However, this privacy guarantee unfortunately degrades model accuracy due to the edge learners' local noise and the global noise aggregated at the central server. Existing DP frameworks for the edge focus on local noise calibration via gradient clipping techniques, overlooking the heterogeneity and dynamic changes of local gradients and their aggregated impact on accuracy. In this article, we present a systematic analysis that unveils the influential factors capable of mitigating local and aggregated noise, and design PrivateDL to leverage these factors in noise calibration so as to improve model accuracy while fulfilling the privacy guarantee. PrivateDL features: (i) sampling-based sensitivity estimation for local noise calibration and (ii) combining large batch sizes and critical data identification in global training. We implement PrivateDL on the popular Laplace/Gaussian DP mechanisms and demonstrate its effectiveness using Intel BigDL workloads, considerably improving model accuracy by up to 5X compared with existing DP frameworks.
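The abstract builds on the standard clip-then-add-noise step that DP training applies to each edge learner's local gradients. The sketch below is not the authors' PrivateDL implementation; it is a minimal illustration of that baseline Gaussian mechanism in plain NumPy, where the fixed clipping bound stands in for the sensitivity that PrivateDL instead estimates by sampling. Names such as `privatize_gradients` and `clip_norm` are illustrative assumptions, not identifiers from the paper or from Intel BigDL.

```python
# Minimal sketch (assumed, not the authors' PrivateDL code): clip each
# per-example gradient to a sensitivity bound, average, and add Gaussian
# noise calibrated to that bound.
import numpy as np

def privatize_gradients(per_example_grads, clip_norm, noise_multiplier, rng):
    """Clip each example's gradient to `clip_norm`, average over the batch,
    and add Gaussian noise whose scale is proportional to the sensitivity."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch_size = len(clipped)
    avg = np.mean(clipped, axis=0)
    # Noise std scales with sensitivity / batch size: larger batches dilute
    # the injected noise, one of the accuracy levers the abstract points to.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=avg.shape)
    return avg + noise

# Toy usage: 64 per-example gradients of dimension 10.
rng = np.random.default_rng(0)
grads = [rng.normal(size=10) for _ in range(64)]
noisy_grad = privatize_gradients(grads, clip_norm=1.0,
                                 noise_multiplier=1.1, rng=rng)
```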

Original language: English
Article number: 9372811
Pages (from-to): 2231-2247
Number of pages: 17
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 32
Issue number: 9
DOI
Publication status: Published - 1 Sep 2021
