Membership Inference Attacks Against Deep Learning Models via Logits Distribution

Hongyang Yan, Shuhao Li, Yajie Wang, Yaoyuan Zhang, Kashif Sharif, Haibo Hu, Yuanzhang Li*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Deep Learning (DL) techniques have gained significant importance in the recent past due to their vast applications. However, DL remains vulnerable to several attacks, such as the Membership Inference Attack (MIA), which exploits the model's memorization of its training data. MIA aims to determine whether specific data were present in the training dataset of a model by using a substitute (shadow) model whose structure is similar to that of the objective model. Because MIA relies on this substitute model, it can be mitigated when the adversary does not know the network structure of the objective model. To address this challenge of shadow-model construction, this work presents L-Leaks, a membership inference attack based on logits. L-Leaks allows an adversary to use the substitute model's information to predict membership when the substitute and objective models are sufficiently similar. Here, the substitute model is built by learning the logits of the objective model, which makes it similar enough and gives it sufficient confidence on the member samples of the objective model. The evaluation of the attack's success shows that the proposed technique executes the attack more accurately than existing techniques, and that the proposed MIA remains robust across different network models and datasets.
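The two stages described in the abstract, building a substitute model by matching the objective model's logits and then inferring membership from prediction confidence, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual L-Leaks implementation: the "objective" model is assumed to be a simple linear classifier the adversary can query, and all names (`W_obj`, `W_sub`, `infer_membership`, the threshold value) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the objective model is a fixed linear classifier whose
# logits the adversary can observe on query samples X.
d, k, n = 8, 3, 200                  # feature dim, classes, query samples
W_obj = rng.normal(size=(d, k))      # objective model (unknown to adversary)
X = rng.normal(size=(n, d))          # samples the adversary queries with

def logits(W, X):
    return X @ W

# Stage 1 (sketched): fit a substitute model by matching the objective
# model's logits with a mean-squared-error loss via plain gradient descent.
target = logits(W_obj, X)
W_sub = np.zeros((d, k))
lr = 0.05
for _ in range(1000):
    pred = logits(W_sub, X)
    grad = X.T @ (pred - target) / n   # gradient of 0.5 * mean squared error
    W_sub -= lr * grad

# Stage 2 (sketched): infer membership from the substitute's confidence --
# the premise is that memorized training samples receive high confidence.
def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def infer_membership(W, X, threshold=0.9):   # threshold is an assumption
    conf = softmax(logits(W, X)).max(axis=1)
    return conf > threshold                  # True = predicted "member"

max_gap = np.abs(logits(W_sub, X) - target).max()
guesses = infer_membership(W_sub, X)
```

After the logit-matching stage, `W_sub` reproduces the objective model's logits on the query set almost exactly, which is the sense in which the substitute becomes "similar enough" for the confidence-based membership test.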

Original language: English
Pages (from-to): 3799-3808
Number of pages: 10
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 20
Issue number: 5
DOI
Publication status: Published - 1 Sep 2023
