STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks

Xiyan Wu, Yong Song*, Ya Zhou*, Yurong Jiang, Yashuo Bai, Xinyi Li, Xin Yang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Spiking Neural Networks (SNNs) have shown great promise in processing spatio-temporal information compared to Artificial Neural Networks (ANNs). However, a performance gap remains between SNNs and ANNs, which impedes the practical application of SNNs. With their intrinsic event-triggered dynamics and temporal coding, SNNs have the potential to effectively extract spatio-temporal features from event streams. To leverage this temporal potential, we propose a self-attention-based temporal-channel joint attention SNN (STCA-SNN) with end-to-end training, which infers attention weights along the temporal and channel dimensions concurrently. It models global temporal and channel correlations with self-attention, enabling the network to learn ‘what’ and ‘when’ to attend to simultaneously. Our experimental results show that STCA-SNNs achieve better performance on N-MNIST (99.67%), CIFAR10-DVS (81.6%), and N-Caltech 101 (80.88%) than state-of-the-art SNNs. Meanwhile, our ablation study demonstrates that STCA-SNNs improve the accuracy of event stream classification tasks.
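To make the idea of temporal-channel joint attention concrete, below is a minimal PyTorch sketch of a module that pools spiking feature maps spatially, treats each (time step, channel) pair as a token, and uses multi-head self-attention to infer weights that rescale the features along both dimensions. The tensor layout, layer sizes, and module name are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of temporal-channel joint self-attention for SNN feature
# maps, illustrating the idea described in the abstract. Shapes, layer
# sizes, and the attention layout are assumptions for illustration only.
import torch
import torch.nn as nn


class TemporalChannelSelfAttention(nn.Module):
    """Rescales spiking features jointly along time (T) and channels (C).

    Input:  x of shape (T, B, C, H, W) -- spiking feature maps over T steps.
    Output: same shape, modulated by attention weights inferred with
            multi-head self-attention over the T*C temporal-channel tokens
            obtained after spatial average pooling (hypothetical layout).
    """

    def __init__(self, embed_dim: int = 16, num_heads: int = 2):
        super().__init__()
        # Each (time step, channel) pair becomes one token; spatial pooling
        # yields a scalar per token, projected into a small embedding.
        self.proj_in = nn.Linear(1, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj_out = nn.Linear(embed_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T, B, C, H, W = x.shape
        # Global spatial average pooling: (T, B, C)
        pooled = x.mean(dim=(-2, -1))
        # Arrange the T*C temporal-channel pairs as tokens: (B, T*C, 1)
        tokens = pooled.permute(1, 0, 2).reshape(B, T * C, 1)
        h = self.proj_in(tokens)
        h, _ = self.attn(h, h, h)               # global T-C correlations
        w = torch.sigmoid(self.proj_out(h))     # attention weights in (0, 1)
        # Reshape back to (T, B, C, 1, 1) and rescale the feature maps.
        w = w.reshape(B, T, C).permute(1, 0, 2).unsqueeze(-1).unsqueeze(-1)
        return x * w


if __name__ == "__main__":
    feats = torch.rand(4, 2, 8, 16, 16)  # (T, B, C, H, W) dummy spiking features
    stca = TemporalChannelSelfAttention()
    print(stca(feats).shape)  # torch.Size([4, 2, 8, 16, 16])
```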

Original language: English
Article number: 1261543
Journal: Frontiers in Neuroscience
Volume: 17
DOIs
Publication status: Published - 2023

Keywords

  • event streams
  • neuromorphic computing
  • self-attention
  • spiking neural networks
  • temporal-channel
