TY - JOUR
T1 - Lightweight and Accurate DNN-Based Anomaly Detection at Edge
AU - Zhang, Qinglong
AU - Han, Rui
AU - Xin, Gaofeng
AU - Liu, Chi Harold
AU - Wang, Guoren
AU - Chen, Lydia Y.
N1 - Publisher Copyright:
© 1990-2012 IEEE.
PY - 2022/11/1
Y1 - 2022/11/1
N2 - Deep neural networks (DNNs) have shown significant success in various anomaly detection applications such as smart surveillance and industrial quality control. It is increasingly important to detect anomalies directly on edge devices because of high responsiveness requirements and tight latency constraints. The accuracy of DNN-based solutions relies on large model capacity, and thus long training and inference times, making them inapplicable on resource-constrained edge devices. It is hence imperative to scale DNN model sizes in accordance with run-time system requirements, i.e., meeting deadlines with minimal accuracy loss, which are highly dependent on the platform and real-time system status. Existing scaling techniques either take a long training time to pre-generate scaling options or disturb the unstable training process of anomaly detection DNNs, lacking adaptability to heterogeneous edge systems and incurring low inference accuracy. In this article, we present LightDNN, which scales DNN models for anomaly detection applications at the edge, featuring high detection accuracy with lightweight training and inference time. To this end, LightDNN quickly extracts and compresses blocks in a DNN and provides a large scaling space (e.g., 1 million options) by dynamically combining these compressed blocks online. At run-time, LightDNN predicts the DNN's inference latency according to the monitored system status and optimizes the combination of blocks to maximize accuracy under deadline constraints. We implement and extensively evaluate LightDNN on both CPU and GPU edge platforms using 8 popular anomaly detection workloads. Comparative experiments with state-of-the-art methods show that our approach provides 145.8 to 0.56 trillion times more scaling options without increasing training and inference overheads, thus achieving as much as a 15.05% increase in accuracy under the same deadlines.
AB - Deep neural networks (DNNs) have shown significant success in various anomaly detection applications such as smart surveillance and industrial quality control. It is increasingly important to detect anomalies directly on edge devices because of high responsiveness requirements and tight latency constraints. The accuracy of DNN-based solutions relies on large model capacity, and thus long training and inference times, making them inapplicable on resource-constrained edge devices. It is hence imperative to scale DNN model sizes in accordance with run-time system requirements, i.e., meeting deadlines with minimal accuracy loss, which are highly dependent on the platform and real-time system status. Existing scaling techniques either take a long training time to pre-generate scaling options or disturb the unstable training process of anomaly detection DNNs, lacking adaptability to heterogeneous edge systems and incurring low inference accuracy. In this article, we present LightDNN, which scales DNN models for anomaly detection applications at the edge, featuring high detection accuracy with lightweight training and inference time. To this end, LightDNN quickly extracts and compresses blocks in a DNN and provides a large scaling space (e.g., 1 million options) by dynamically combining these compressed blocks online. At run-time, LightDNN predicts the DNN's inference latency according to the monitored system status and optimizes the combination of blocks to maximize accuracy under deadline constraints. We implement and extensively evaluate LightDNN on both CPU and GPU edge platforms using 8 popular anomaly detection workloads. Comparative experiments with state-of-the-art methods show that our approach provides 145.8 to 0.56 trillion times more scaling options without increasing training and inference overheads, thus achieving as much as a 15.05% increase in accuracy under the same deadlines.
KW - Anomaly detection
KW - DNN
KW - edge inference
KW - model scaling
KW - predictable latency
UR - http://www.scopus.com/inward/record.url?scp=85122328626&partnerID=8YFLogxK
U2 - 10.1109/TPDS.2021.3137631
DO - 10.1109/TPDS.2021.3137631
M3 - Article
AN - SCOPUS:85122328626
SN - 1045-9219
VL - 33
SP - 2927
EP - 2942
JO - IEEE Transactions on Parallel and Distributed Systems
JF - IEEE Transactions on Parallel and Distributed Systems
IS - 11
ER -