Abstract
Deep neural networks (DNNs) are increasingly used as the critical component of applications, bringing high computational costs. Many practitioners therefore host their models on third-party platforms. This practice exposes DNNs to risks: a third party hosting the model may use a malicious deep learning framework to implement a backdoor attack. Our goal is to explore the realistic potential for backdoor attacks on third-party hosting platforms. We introduce a threatening and realistically implementable backdoor attack that is highly stealthy and flexible. We inject trojans by hijacking the built-in functions of the deep learning framework. Existing backdoor attacks rely on poisoning, and their trigger is a special pattern superimposed on the input. Unlike existing backdoor attacks, the proposed sequential trigger is a specific sequence of clean images. Moreover, our attack is model-agnostic and does not require retraining the model or modifying its parameters. It is stealthy because injecting the trojan does not change the model's prediction on any clean image, so existing backdoor defenses cannot detect it. It is flexible because the adversary can modify the trojan behavior at any time. Extensive experiments on multiple benchmarks with different frameworks demonstrate that our attack achieves a perfect success rate (up to 100%) with minimal damage to model performance. Moreover, we can inject multiple trojans simultaneously without mutual interference, and trojans hidden in the framework make a universal backdoor attack possible. Analysis and experiments further show that state-of-the-art defenses are ineffective against our attack. Our work suggests that backdoor attacks in the supply chain urgently need to be explored.
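The core idea in the abstract, i.e., hijacking a framework's built-in inference function so that a specific sequence of clean inputs acts as the trigger, can be sketched as follows. This is a minimal hypothetical illustration, not the paper's implementation: `clean_predict`, `TRIGGER`, and `TARGET_LABEL` are all invented stand-ins, and a toy sign-based classifier plays the role of the hosted model.

```python
# Hypothetical sketch of a framework-level "sequential trigger" trojan:
# the framework's inference entry point is wrapped so that one exact
# sequence of clean inputs flips the next prediction, while every
# individual clean input is still classified normally.

from collections import deque

def clean_predict(x):
    # Stand-in for the framework's original inference routine:
    # a toy "model" that classifies a number by its sign.
    return 1 if x >= 0 else 0

TRIGGER = (3, 1, 4)   # hypothetical trigger: this exact input sequence
TARGET_LABEL = 0      # adversary-chosen output once the trigger fires

def make_hijacked_predict(original):
    # Keep a sliding window of the most recent inputs.
    history = deque(maxlen=len(TRIGGER))

    def predict(x):
        history.append(x)
        if tuple(history) == TRIGGER:
            return TARGET_LABEL   # trojan behavior on the full sequence
        return original(x)        # unchanged behavior on clean inputs

    return predict

# "Inject" the trojan by replacing the framework function in place;
# the model itself is never retrained or modified.
predict = make_hijacked_predict(clean_predict)
```

Because the wrapper only deviates when the entire trigger sequence has just been observed, each clean query in isolation returns the original model's prediction, which is the stealthiness property the abstract describes.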
Original language | English
---|---
Pages (from-to) | 1789-1798
Number of pages | 10
Journal | IEEE Transactions on Dependable and Secure Computing
Volume | 20
Issue | 3
DOI | 
Publication status | Published - 1 May 2023