TY - JOUR
T1 - Spatio-Temporal Pruning for Training Ultra-Low-Latency Spiking Neural Networks in Remote Sensing Scene Classification
AU - Li, Jiahao
AU - Xu, Ming
AU - Chen, He
AU - Liu, Wenchao
AU - Chen, Liang
AU - Xie, Yizhuang
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/9
Y1 - 2024/9
N2 - In remote sensing scene classification (RSSC), real-time processing constraints on power consumption, performance, and resources necessitate the compression of neural networks. Unlike artificial neural networks (ANNs), spiking neural networks (SNNs) convey information through spikes, offering superior energy efficiency and biological plausibility. However, the high latency of SNNs restricts their practical application in RSSC, so there is an urgent need for research on ultra-low-latency SNNs. As latency decreases, SNN performance deteriorates significantly. To address this challenge, we propose a novel spatio-temporal pruning method that enhances the feature capture capability of ultra-low-latency SNNs. Our approach integrates spatial fundamental structures during training, which are subsequently pruned. We conduct a comprehensive evaluation of the impact of these structures across classic network architectures, such as VGG and ResNet, demonstrating the generalizability of our method. Furthermore, we develop an ultra-low-latency training framework for SNNs to validate the effectiveness of our approach. In this paper, we achieve high-performance ultra-low-latency SNNs with a single time step for the first time in RSSC. Remarkably, our single-time-step SNN achieves at least 200 times faster inference while maintaining performance comparable to that of other state-of-the-art methods.
KW - remote sensing scene classification
KW - spiking neural networks
KW - ultra-low latency
UR - http://www.scopus.com/inward/record.url?scp=85218214204&partnerID=8YFLogxK
U2 - 10.3390/rs16173200
DO - 10.3390/rs16173200
M3 - Article
AN - SCOPUS:85218214204
SN - 2072-4292
VL - 16
JO - Remote Sensing
JF - Remote Sensing
IS - 17
M1 - 3200
ER -