TY - JOUR
T1 - Differential Private Stochastic Optimization with Heavy-tailed Data
T2 - 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
AU - Zhao, Puning
AU - Wu, Jiafei
AU - Liu, Zhe
AU - Wang, Chong
AU - Fan, Rongfei
AU - Li, Qingming
N1 - Publisher Copyright:
Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2025/4/11
Y1 - 2025/4/11
N2 - We study convex optimization problems under differential privacy (DP). With heavy-tailed gradients, existing works achieve suboptimal rates. The main obstacle is that existing gradient estimators have suboptimal tail properties, resulting in a superfluous factor of d in the union bound. In this paper, we explore algorithms achieving optimal rates of DP optimization with heavy-tailed gradients. Our first method is a simple clipping approach. Under bounded p-th order moments of gradients, with n samples, it achieves Õ(√(d/n) + √d·(√d/(nϵ))^(1−1/p)) population risk with ϵ ≤ 1/√d. We then propose an iterative updating method, which is more complex but achieves this rate for all ϵ ≤ 1. The results significantly improve over existing methods. Such improvement relies on a careful treatment of the tail behavior of gradient estimators. Our results match the minimax lower bound, indicating that the theoretical limit of stochastic convex optimization under DP is achievable.
AB - We study convex optimization problems under differential privacy (DP). With heavy-tailed gradients, existing works achieve suboptimal rates. The main obstacle is that existing gradient estimators have suboptimal tail properties, resulting in a superfluous factor of d in the union bound. In this paper, we explore algorithms achieving optimal rates of DP optimization with heavy-tailed gradients. Our first method is a simple clipping approach. Under bounded p-th order moments of gradients, with n samples, it achieves Õ(√(d/n) + √d·(√d/(nϵ))^(1−1/p)) population risk with ϵ ≤ 1/√d. We then propose an iterative updating method, which is more complex but achieves this rate for all ϵ ≤ 1. The results significantly improve over existing methods. Such improvement relies on a careful treatment of the tail behavior of gradient estimators. Our results match the minimax lower bound, indicating that the theoretical limit of stochastic convex optimization under DP is achievable.
UR - http://www.scopus.com/inward/record.url?scp=105003998106&partnerID=8YFLogxK
U2 - 10.1609/aaai.v39i21.34440
DO - 10.1609/aaai.v39i21.34440
M3 - Conference article
AN - SCOPUS:105003998106
SN - 2159-5399
VL - 39
SP - 22795
EP - 22803
JO - Proceedings of the AAAI Conference on Artificial Intelligence
JF - Proceedings of the AAAI Conference on Artificial Intelligence
IS - 21
Y2 - 25 February 2025 through 4 March 2025
ER -