TY - GEN
T1 - Reliability Enhancement of Neural Networks via Neuron-Level Vulnerability Quantization
AU - Li, Keyao
AU - Wang, Jing
AU - Fu, Xin
AU - Sui, Xiufeng
AU - Zhang, Weigong
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Neural networks are increasingly used in recognition, mining, and autonomous driving. However, for safety-critical applications such as autonomous driving, the reliability of neural networks (NNs) remains largely unexplored. Fortunately, NNs have inherent fault-tolerance capability; in particular, different neurons have different fault-tolerance capabilities. Thus, applying a uniform error-protection mechanism while ignoring this important feature leads to unnecessary energy and performance overheads. In this paper, we propose a neuron vulnerability factor (NVF) that quantifies a neural network's vulnerability to soft errors and provides good guidance for error-tolerant techniques in NNs. Based on the NVF, we propose a computation scheduling scheme that reduces the lifetime of neurons with high NVF. Experimental results show that our proposed scheme improves the accuracy of the neural network by 12% on average and greatly reduces the fault-tolerance overhead.
AB - Neural networks are increasingly used in recognition, mining, and autonomous driving. However, for safety-critical applications such as autonomous driving, the reliability of neural networks (NNs) remains largely unexplored. Fortunately, NNs have inherent fault-tolerance capability; in particular, different neurons have different fault-tolerance capabilities. Thus, applying a uniform error-protection mechanism while ignoring this important feature leads to unnecessary energy and performance overheads. In this paper, we propose a neuron vulnerability factor (NVF) that quantifies a neural network's vulnerability to soft errors and provides good guidance for error-tolerant techniques in NNs. Based on the NVF, we propose a computation scheduling scheme that reduces the lifetime of neurons with high NVF. Experimental results show that our proposed scheme improves the accuracy of the neural network by 12% on average and greatly reduces the fault-tolerance overhead.
KW - Fault tolerance
KW - Memory protection
KW - Neural network
KW - Neuron Vulnerability Factor
KW - Reliability
KW - Soft error
UR - http://www.scopus.com/inward/record.url?scp=85082132407&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-38961-1_24
DO - 10.1007/978-3-030-38961-1_24
M3 - Conference contribution
AN - SCOPUS:85082132407
SN - 978-3-030-38960-4
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 277
EP - 285
BT - Algorithms and Architectures for Parallel Processing - 19th International Conference, ICA3PP 2019, Proceedings
A2 - Wen, Sheng
A2 - Zomaya, Albert
A2 - Yang, Laurence T.
PB - Springer
T2 - 19th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2019
Y2 - 9 December 2019 through 11 December 2019
ER -