TY - GEN
T1 - Finding Visual Saliency in Continuous Spike Stream
AU - Zhu, Lin
AU - Chen, Xianzhang
AU - Wang, Xiao
AU - Huang, Hua
N1 - Publisher Copyright:
Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2024/3/25
Y1 - 2024/3/25
N2 - As a bio-inspired vision sensor, the spike camera emulates the operational principles of the fovea, a compact retinal region, by employing spike discharges to encode the accumulation of per-pixel luminance intensity. Leveraging its high temporal resolution and bio-inspired neuromorphic design, the spike camera holds significant promise for advancing computer vision applications. Saliency detection mimics the behavior of human beings and captures the most salient region from the scenes. In this paper, we investigate the visual saliency in the continuous spike stream for the first time. To effectively process the binary spike stream, we propose a Recurrent Spiking Transformer (RST) framework, which is based on a full spiking neural network. Our framework enables the extraction of spatio-temporal features from the continuous spatio-temporal spike stream while maintaining low power consumption. To facilitate the training and validation of our proposed model, we build a comprehensive real-world spike-based visual saliency dataset, enriched with numerous light conditions. Extensive experiments demonstrate the superior performance of our Recurrent Spiking Transformer framework in comparison to other spiking neural network-based methods. Our framework exhibits a substantial margin of improvement in capturing and highlighting visual saliency in the spike stream, which not only provides a new perspective for spike-based saliency segmentation but also shows a new paradigm for full SNN-based transformer models. The code and dataset are available at https://github.com/BIT-Vision/SVS.
AB - As a bio-inspired vision sensor, the spike camera emulates the operational principles of the fovea, a compact retinal region, by employing spike discharges to encode the accumulation of per-pixel luminance intensity. Leveraging its high temporal resolution and bio-inspired neuromorphic design, the spike camera holds significant promise for advancing computer vision applications. Saliency detection mimics the behavior of human beings and captures the most salient region from the scenes. In this paper, we investigate the visual saliency in the continuous spike stream for the first time. To effectively process the binary spike stream, we propose a Recurrent Spiking Transformer (RST) framework, which is based on a full spiking neural network. Our framework enables the extraction of spatio-temporal features from the continuous spatio-temporal spike stream while maintaining low power consumption. To facilitate the training and validation of our proposed model, we build a comprehensive real-world spike-based visual saliency dataset, enriched with numerous light conditions. Extensive experiments demonstrate the superior performance of our Recurrent Spiking Transformer framework in comparison to other spiking neural network-based methods. Our framework exhibits a substantial margin of improvement in capturing and highlighting visual saliency in the spike stream, which not only provides a new perspective for spike-based saliency segmentation but also shows a new paradigm for full SNN-based transformer models. The code and dataset are available at https://github.com/BIT-Vision/SVS.
UR - https://www.scopus.com/pages/publications/85189556645
U2 - 10.1609/aaai.v38i7.28610
DO - 10.1609/aaai.v38i7.28610
M3 - Conference contribution
AN - SCOPUS:85189556645
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 7757
EP - 7765
BT - Technical Tracks 14
A2 - Wooldridge, Michael
A2 - Dy, Jennifer
A2 - Natarajan, Sriraam
PB - Association for the Advancement of Artificial Intelligence
T2 - 38th AAAI Conference on Artificial Intelligence, AAAI 2024
Y2 - 20 February 2024 through 27 February 2024
ER -