TY - JOUR
T1 - Mixed-Precision Quantization for CNN-Based Remote Sensing Scene Classification
AU - Wei, Xin
AU - Chen, He
AU - Liu, Wenchao
AU - Xie, Yizhuang
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2021/10
Y1 - 2021/10
N2 - Convolutional neural network (CNN)-based methods have been widely used in remote sensing scene classification. However, the dense computation and large memory footprint of state-of-the-art models hinder their deployment on low-power embedded devices. In this letter, we propose a mixed-precision quantization method that compresses the model size without accuracy degradation. Within this method, a symmetric nonlinear quantization scheme reduces the quantization error, and a corresponding three-step training strategy improves the performance of the quantized network. Finally, building on the proposed scheme and training strategy, we present a neural architecture search (NAS)-based quantization bit-width search (NQBS) method that automatically selects a bit width for each quantized layer to obtain a mixed-precision network with an optimal model size. We apply the proposed method to the ResNet-34 and SqueezeNet networks and evaluate the quantized networks on the NWPU-RESISC45 data set. The experimental results show that the mixed-precision quantized networks obtained with the proposed method strike a favorable tradeoff between classification accuracy and model size.
AB - Convolutional neural network (CNN)-based methods have been widely used in remote sensing scene classification. However, the dense computation and large memory footprint of state-of-the-art models hinder their deployment on low-power embedded devices. In this letter, we propose a mixed-precision quantization method that compresses the model size without accuracy degradation. Within this method, a symmetric nonlinear quantization scheme reduces the quantization error, and a corresponding three-step training strategy improves the performance of the quantized network. Finally, building on the proposed scheme and training strategy, we present a neural architecture search (NAS)-based quantization bit-width search (NQBS) method that automatically selects a bit width for each quantized layer to obtain a mixed-precision network with an optimal model size. We apply the proposed method to the ResNet-34 and SqueezeNet networks and evaluate the quantized networks on the NWPU-RESISC45 data set. The experimental results show that the mixed-precision quantized networks obtained with the proposed method strike a favorable tradeoff between classification accuracy and model size.
KW - Mixed-precision quantization
KW - neural architecture search (NAS)-based quantization bit-width search (NQBS)
KW - remote sensing scene classification
KW - three-step training (TST) strategy
UR - http://www.scopus.com/inward/record.url?scp=85116368473&partnerID=8YFLogxK
U2 - 10.1109/LGRS.2020.3007575
DO - 10.1109/LGRS.2020.3007575
M3 - Article
AN - SCOPUS:85116368473
SN - 1545-598X
VL - 18
SP - 1721
EP - 1725
JO - IEEE Geoscience and Remote Sensing Letters
JF - IEEE Geoscience and Remote Sensing Letters
IS - 10
ER -