Abstract
Deep Learning (DL) techniques have gained significant importance in recent years due to their wide range of applications. However, DL remains vulnerable to several attacks, such as the Membership Inference Attack (MIA), which exploits the model's memorization of its training data. MIA aims to determine whether specific data were present in the model's training dataset by using a substitute model whose structure is similar to that of the objective model. Because MIA relies on this substitute model, it can be mitigated if the adversary does not know the network structure of the objective model. To address the challenge of shadow-model construction, this work presents L-Leaks, a membership inference attack based on logits. L-Leaks allows an adversary to use the substitute model's information to predict membership, provided the shadow and objective models are sufficiently similar. Here, the substitute model is built by learning the logits of the objective model, which makes it similar enough and gives it sufficient confidence on the member samples of the objective model. The evaluation of the attack's success shows that the proposed technique executes the attack more accurately than existing techniques, and that the proposed MIA remains significantly robust across different network models and datasets.
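The abstract describes two steps: fitting a substitute (shadow) model to the objective (target) model's logits, then using the substitute's confidence to flag member samples. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; the architectures, the auxiliary data, the logit-matching loss, and the confidence threshold are all illustrative assumptions.

```python
# Hypothetical sketch: distill a substitute (shadow) model from the target's
# logits, then score membership via the substitute's confidence.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_mlp(in_dim=32, num_classes=10):
    # Small stand-in architecture; the real target/shadow networks are assumed.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

target_model = make_mlp()   # assumed to be already trained; only queried below
shadow_model = make_mlp()

# Synthetic stand-in data for illustration only.
aux_x = torch.randn(256, 32)        # auxiliary data used to query the target
candidate_x = torch.randn(16, 32)   # samples whose membership we want to infer

# Step 1: build the shadow model by matching the target's logits.
opt = torch.optim.Adam(shadow_model.parameters(), lr=1e-3)
target_model.eval()
for _ in range(100):
    with torch.no_grad():
        target_logits = target_model(aux_x)
    loss = F.mse_loss(shadow_model(aux_x), target_logits)  # logit-matching loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 2: infer membership from the shadow model's confidence on candidates.
# A thresholded max-softmax score stands in for a learned attack model.
with torch.no_grad():
    conf = F.softmax(shadow_model(candidate_x), dim=1).max(dim=1).values
is_member = conf > 0.9  # threshold value is an illustrative assumption
print(is_member)
```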
Original language | English |
---|---|
Pages (from-to) | 3799-3808 |
Number of pages | 10 |
Journal | IEEE Transactions on Dependable and Secure Computing |
Volume | 20 |
Issue | 5 |
DOI | |
Publication status | Published - 1 Sept. 2023 |