Efficient Defenses Against Output Poisoning Attacks on Local Differential Privacy

Shaorui Song, Lei Xu*, Liehuang Zhu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Local differential privacy (LDP) is a promising technique to realize privacy-preserving data aggregation without a trusted aggregator. Normally, an LDP protocol requires each user to locally perturb his raw data and submit the perturbed data to the aggregator. Consequently, LDP is vulnerable to output poisoning attacks. Malicious users can skip the perturbation and submit carefully crafted data to the aggregator, altering the data aggregation results. Existing verifiable LDP protocols, which can verify the perturbation process and prevent output poisoning attacks, usually incur significant computation and communication costs, due to the use of zero-knowledge proofs. In this paper, we analyze the attacks on two classic LDP protocols for frequency estimation, namely GRR and OUE, and propose two verifiable LDP protocols. The proposed protocols are based on an interactive framework, where the user and the aggregator complete the perturbation together. By providing some additional information, which reveals nothing about the raw data but helps the verification, the user can convince the aggregator that he is incapable of launching an output poisoning attack. Simulation results demonstrate that the proposed protocols have good defensive performance and outperform existing approaches in terms of efficiency.
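For context, GRR (Generalized Randomized Response) and OUE (Optimized Unary Encoding) are classic LDP frequency-estimation protocols, and the perturbation step they prescribe is exactly what a poisoning user skips. The following is a minimal sketch of the standard client-side perturbation from the LDP literature, assuming plain Python with hypothetical function names; it illustrates the textbook protocols, not the verifiable interactive variants proposed in this paper:

```python
import math
import random

def grr_perturb(value, domain, epsilon):
    """GRR: report the true value with probability
    p = e^eps / (e^eps + d - 1), otherwise a uniformly
    random other value from the domain."""
    d = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def oue_perturb(value, domain, epsilon):
    """OUE: one-hot encode the value, keep the 1-bit with
    probability 1/2, and flip each 0-bit to 1 with
    probability q = 1 / (e^eps + 1)."""
    q = 1.0 / (math.exp(epsilon) + 1.0)
    return [int(random.random() < (0.5 if v == value else q)) for v in domain]

# An output poisoning attacker bypasses the perturbation above and
# submits crafted data directly, e.g. always reporting a target item
# under GRR, or a vector with extra 1-bits under OUE, to inflate the
# aggregator's frequency estimates for chosen items.
```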

Original language: English
Pages (from-to): 5506-5521
Number of pages: 16
Journal: IEEE Transactions on Information Forensics and Security
Volume: 18
DOI
Publication status: Published - 2023

