TY - GEN
T1 - Automatic LPI Radar Modulation Recognition Using Improved VMamba Network
AU - Liu, Jiangli
AU - Li, Yunjie
AU - Zhang, Ziwei
AU - Zhu, Mengtao
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Low probability of intercept (LPI) radar signals are widely used and are characterized by complex intrapulse modulations, low transmission power, and wide frequency bands. The energy of an LPI radar signal spreads across the time-frequency domain, which poses great challenges to traditional methods for signal detection and recognition. To address these challenges, this paper investigates an automatic modulation recognition (AMR) method for LPI radar signals based on the VMamba model. The VMamba model is a visual state space model originally designed for computer vision tasks. Compared with traditional deep learning models, the VMamba model achieves linear complexity without sacrificing a global receptive field. This paper designs an AMR method based on improvements (such as adjusting the patch size and scanning directions) to the classic VMamba model. The superiority of the proposed method over traditional methods based on deep convolutional neural networks (DCNNs) is verified through simulation experiments under varying signal-to-noise ratio (SNR) conditions. The recognition performance of the proposed method is improved significantly: the improved VMamba outperformed two DCNN-based methods by 22.07% and 14.97%, respectively, at an SNR of -12 dB.
AB - Low probability of intercept (LPI) radar signals are widely used and are characterized by complex intrapulse modulations, low transmission power, and wide frequency bands. The energy of an LPI radar signal spreads across the time-frequency domain, which poses great challenges to traditional methods for signal detection and recognition. To address these challenges, this paper investigates an automatic modulation recognition (AMR) method for LPI radar signals based on the VMamba model. The VMamba model is a visual state space model originally designed for computer vision tasks. Compared with traditional deep learning models, the VMamba model achieves linear complexity without sacrificing a global receptive field. This paper designs an AMR method based on improvements (such as adjusting the patch size and scanning directions) to the classic VMamba model. The superiority of the proposed method over traditional methods based on deep convolutional neural networks (DCNNs) is verified through simulation experiments under varying signal-to-noise ratio (SNR) conditions. The recognition performance of the proposed method is improved significantly: the improved VMamba outperformed two DCNN-based methods by 22.07% and 14.97%, respectively, at an SNR of -12 dB.
KW - automatic modulation recognition
KW - time-frequency analysis
KW - visual Mamba
UR - https://www.scopus.com/pages/publications/105009409827
U2 - 10.1109/RADAR52380.2025.11031865
DO - 10.1109/RADAR52380.2025.11031865
M3 - Conference contribution
AN - SCOPUS:105009409827
T3 - Proceedings of the IEEE Radar Conference
BT - IEEE International Radar Conference, RADAR 2025
PB - Institute of Electrical and Electronics Engineers
T2 - 2025 IEEE International Radar Conference, RADAR 2025
Y2 - 3 May 2025 through 9 May 2025
ER -