Abstract
Deep Learning (DL) techniques have gained significant importance in the recent past due to their vast applications. However, DL is still prone to several attacks, such as the Membership Inference Attack (MIA), which exploits the memorization of training data. MIA aims to determine whether a specific sample was present in the training dataset of a model by using a substitute (shadow) model whose structure is similar to that of the objective model. Because MIA relies on this substitute model, it can be mitigated when the adversary is uncertain about the network structure of the objective model. To address the challenge of shadow-model construction, this work presents L-Leaks, a membership inference attack based on logits. L-Leaks allows an adversary to use the substitute model's information to predict membership, provided the shadow and objective models are similar enough. Here, the substitute model is built by learning the logits of the objective model, which makes it sufficiently similar; as a result, the substitute model exhibits high confidence on member samples of the objective model. The evaluation of the attack's success shows that the proposed technique executes the attack more accurately than existing techniques, and that the proposed MIA remains robust across different network models and datasets.
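The core idea described above (distilling a substitute model from the objective model's logits, then scoring membership by the substitute's confidence) can be sketched in a few lines. The following is a minimal illustration only: the linear "objective" model, the auxiliary query data, and the confidence-threshold rule are all assumptions for demonstration, not the paper's actual architectures or attack pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box objective (target) model: a fixed linear
# 3-class classifier over 2 features. Only its logits are observable.
W_target = rng.normal(size=(3, 2))

def target_logits(x):
    return x @ W_target.T  # the adversary sees logits, not weights

# Step 1: query the target on auxiliary data and distil a substitute
# model by regressing onto the observed logits (mean-squared error).
X_aux = rng.normal(size=(500, 2))
Y_logits = target_logits(X_aux)

W_sub = np.zeros((3, 2))  # substitute model's weights
lr = 0.1
for _ in range(200):
    pred = X_aux @ W_sub.T
    grad = (pred - Y_logits).T @ X_aux / len(X_aux)
    W_sub -= lr * grad  # gradient step on the logit-matching loss

# Step 2: membership score = the substitute's softmax confidence.
# Member samples of the target's training set tend to score higher.
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def membership_score(x):
    return softmax(x @ W_sub.T).max(axis=-1)

# With a chosen threshold tau, predict "member" when score > tau.
```

Because the toy target is linear, the logit regression here recovers it almost exactly; in the paper's setting the substitute only approximates the objective model, and the confidence gap between members and non-members is what the attack exploits.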
| Original language | English |
| --- | --- |
| Pages (from-to) | 3799-3808 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Dependable and Secure Computing |
| Volume | 20 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 1 Sept 2023 |
Keywords
- Deep learning
- logits
- membership inference attacks (MIAs)
- substitute model