Differential Privacy in Distributed Optimization with Gradient Tracking

Lingying Huang, Junfeng Wu, Dawei Shi, Subhrakanti Dey, Ling Shi

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

In recent years, there has been growing interest in distributed optimization, in which agents collaboratively attain an optimum by exchanging information with their neighbours. Among the various distributed algorithms available, optimization with gradient tracking is particularly notable for its superior convergence results, especially over directed graphs. However, privacy concerns arise when gradient information is transmitted directly, as this induces additional information leakage. Surprisingly, the literature has not adequately addressed the associated privacy issues. In response to this gap, our paper proposes a privacy-preserving distributed optimization algorithm with gradient tracking that adds noise to the transmitted messages, namely, the decision variables and the estimate of the aggregated gradient. We prove two dilemmas for this kind of algorithm. In the first dilemma, we reveal that this distributed optimization algorithm with gradient tracking cannot achieve $\epsilon$-differential privacy (DP) and exact convergence simultaneously. Building on this, we subsequently show that the algorithm fails to achieve $\epsilon$-DP when employing non-summable stepsizes in the presence of Laplace noise. It is crucial to emphasize that these findings hold regardless of the size of the privacy metric $\epsilon$. We then rigorously analyse the convergence performance and privacy level for summable stepsize sequences under the Laplace distribution, since, given the dilemmas above, only summable stepsizes remain meaningful to study. We derive sufficient conditions under which stochastically bounded accuracy and $\epsilon$-DP hold simultaneously. Recognizing that several choices can meet these conditions, we further derive an upper bound on the variance of the mean error and give the explicit expression of $\epsilon$ under such conditions. Numerical simulations are provided to demonstrate the effectiveness of the proposed algorithm.
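To make the message-perturbation scheme concrete, the following is a minimal numerical sketch of gradient tracking in which each agent adds Laplace noise to both transmitted quantities (its decision variable and its aggregated-gradient estimate) before communication. The function name, the geometric (summable) stepsize schedule, and the noise-decay schedule are illustrative assumptions for this sketch, not the paper's exact design or tuned parameters.

```python
import numpy as np

def dp_gradient_tracking(grads, W, x0, num_iters=200,
                         step=lambda k: 0.8 ** k,    # summable stepsize (assumed schedule)
                         noise=lambda k: 0.9 ** k):  # decaying Laplace scale (assumed schedule)
    """Sketch of gradient tracking with Laplace-perturbed messages.

    grads : list of per-agent gradient oracles, grads[i](x) -> ndarray
    W     : doubly stochastic mixing matrix of the communication graph
    x0    : common initial decision variable
    """
    n = len(grads)
    x = np.tile(np.asarray(x0, dtype=float), (n, 1))      # local decision variables
    g = np.array([grads[i](x[i]) for i in range(n)])      # local gradients at x^k
    y = g.copy()                                          # aggregated-gradient estimates
    for k in range(num_iters):
        # Perturb both transmitted messages with Laplace noise before mixing.
        x_msg = x + np.random.laplace(scale=noise(k), size=x.shape)
        y_msg = y + np.random.laplace(scale=noise(k), size=y.shape)
        x = W @ x_msg - step(k) * y                       # consensus + tracked-gradient step
        g_new = np.array([grads[i](x[i]) for i in range(n)])
        y = W @ y_msg + g_new - g                         # gradient-tracking update
        g = g_new
    return x.mean(axis=0)

# Toy usage: each agent holds f_i(x) = ||x - a_i||^2 over a 4-node ring,
# so the global optimum is the mean of the a_i. With a summable stepsize,
# the iterates settle near (not exactly at) the optimum, consistent with
# the bounded-accuracy result stated in the abstract.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 2))
grads = [lambda x, ai=ai: 2 * (x - ai) for ai in a]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
print(dp_gradient_tracking(grads, W, np.zeros(2)), a.mean(axis=0))
```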

Original language: English
Pages (from-to): 1-16
Number of pages: 16
Journal: IEEE Transactions on Automatic Control
DOI
Publication status: Accepted/In press - 2024
