TY - JOUR
T1 - Efficient Defenses Against Output Poisoning Attacks on Local Differential Privacy
AU - Song, Shaorui
AU - Xu, Lei
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2023
Y1 - 2023
N2 - Local differential privacy (LDP) is a promising technique to realize privacy-preserving data aggregation without a trusted aggregator. Normally, an LDP protocol requires each user to locally perturb his raw data and submit the perturbed data to the aggregator. Consequently, LDP is vulnerable to output poisoning attacks. Malicious users can skip the perturbation and submit carefully crafted data to the aggregator, altering the data aggregation results. Existing verifiable LDP protocols, which can verify the perturbation process and prevent output poisoning attacks, usually incur significant computation and communication costs, due to the use of zero-knowledge proofs. In this paper, we analyze the attacks on two classic LDP protocols for frequency estimation, namely GRR and OUE, and propose two verifiable LDP protocols. The proposed protocols are based on an interactive framework, where the user and the aggregator complete the perturbation together. By providing some additional information, which reveals nothing about the raw data but helps the verification, the user can convince the aggregator that he is incapable of launching an output poisoning attack. Simulation results demonstrate that the proposed protocols have good defensive performance and outperform existing approaches in terms of efficiency.
KW - Local differential privacy
KW - Pedersen commitment
KW - frequency estimation
KW - poisoning attacks
KW - randomized response
KW - verifiable protocols
UR - http://www.scopus.com/inward/record.url?scp=85168292459&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2023.3305873
DO - 10.1109/TIFS.2023.3305873
M3 - Article
AN - SCOPUS:85168292459
SN - 1556-6013
VL - 18
SP - 5506
EP - 5521
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -