TY - JOUR
T1 - Towards efficient full 8-bit integer DNN online training on resource-limited devices without batch normalization
AU - Yang, Yukuan
AU - Chi, Xiaowei
AU - Deng, Lei
AU - Yan, Tianyi
AU - Gao, Feng
AU - Li, Guoqi
N1 - Publisher Copyright:
© 2022
PY - 2022/10/28
Y1 - 2022/10/28
N2 - The huge computational cost of convolution and batch normalization (BN) poses great challenges for the online training and corresponding applications of deep neural networks (DNNs), especially on resource-limited devices. Existing works focus only on accelerating either convolution or BN, and no solution alleviates both problems with satisfactory performance. Online training has gradually become a trend on resource-limited devices such as mobile phones, yet there is still no complete technical scheme with acceptable model performance, processing speed, and computational cost. In this research, an efficient online-training quantization framework, termed EOQ, is proposed, combining Fixup initialization with a novel quantization scheme for online training on resource-limited devices. Based on the proposed framework, we successfully realize full 8-bit integer network training and remove BN in large-scale DNNs. Notably, weight updates are quantized to 8-bit integers for the first time. Theoretical analyses of EOQ, which uses Fixup initialization to remove BN, are further provided using a novel Block Dynamical Isometry theory with weaker assumptions. Benefiting from rational quantization strategies and the absence of BN, full 8-bit EOQ networks achieve state-of-the-art accuracy together with substantial advantages in computational cost and processing speed. Experiments show that 8-bit EOQ networks achieve 2.78%, 3.85%, and 4.31% accuracy improvements over existing full 8-bit integer networks on ResNet-18/34/50. At the same time, 8-bit EOQ networks greatly improve computing speed and decrease power consumption and circuit area by about an order of magnitude compared with 32-bit floating-point vanilla networks. 
In addition to the large advantages brought by quantization in convolution operations, 8-bit EOQ networks without BN achieve >66× lower power consumption and >13× faster processing speed than traditional 32-bit floating-point BN during inference. Moreover, the design of deep-learning chips can be greatly simplified in the absence of the hardware-unfriendly square-root operations required by BN. Furthermore, EOQ is shown to be even more advantageous in small-batch online training with fewer batch samples. In summary, the EOQ framework is specially designed to reduce the high cost of convolution and BN in network training, demonstrating a broad application prospect for online training on resource-limited devices.
AB - The huge computational cost of convolution and batch normalization (BN) poses great challenges for the online training and corresponding applications of deep neural networks (DNNs), especially on resource-limited devices. Existing works focus only on accelerating either convolution or BN, and no solution alleviates both problems with satisfactory performance. Online training has gradually become a trend on resource-limited devices such as mobile phones, yet there is still no complete technical scheme with acceptable model performance, processing speed, and computational cost. In this research, an efficient online-training quantization framework, termed EOQ, is proposed, combining Fixup initialization with a novel quantization scheme for online training on resource-limited devices. Based on the proposed framework, we successfully realize full 8-bit integer network training and remove BN in large-scale DNNs. Notably, weight updates are quantized to 8-bit integers for the first time. Theoretical analyses of EOQ, which uses Fixup initialization to remove BN, are further provided using a novel Block Dynamical Isometry theory with weaker assumptions. Benefiting from rational quantization strategies and the absence of BN, full 8-bit EOQ networks achieve state-of-the-art accuracy together with substantial advantages in computational cost and processing speed. Experiments show that 8-bit EOQ networks achieve 2.78%, 3.85%, and 4.31% accuracy improvements over existing full 8-bit integer networks on ResNet-18/34/50. At the same time, 8-bit EOQ networks greatly improve computing speed and decrease power consumption and circuit area by about an order of magnitude compared with 32-bit floating-point vanilla networks. 
In addition to the large advantages brought by quantization in convolution operations, 8-bit EOQ networks without BN achieve >66× lower power consumption and >13× faster processing speed than traditional 32-bit floating-point BN during inference. Moreover, the design of deep-learning chips can be greatly simplified in the absence of the hardware-unfriendly square-root operations required by BN. Furthermore, EOQ is shown to be even more advantageous in small-batch online training with fewer batch samples. In summary, the EOQ framework is specially designed to reduce the high cost of convolution and BN in network training, demonstrating a broad application prospect for online training on resource-limited devices.
KW - Full 8-bit quantization
KW - Network without batch normalization
KW - Online training
KW - Resource-limited devices
KW - Small batch
UR - http://www.scopus.com/inward/record.url?scp=85138023955&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2022.08.045
DO - 10.1016/j.neucom.2022.08.045
M3 - Article
AN - SCOPUS:85138023955
SN - 0925-2312
VL - 511
SP - 175
EP - 186
JO - Neurocomputing
JF - Neurocomputing
ER -