Radar Target Recognition and Location Based on CapsNetv2

Jiaxing Hao*, Xuetian Wang, Sen Yang, Hongmin Gao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Precise detection and positioning of weapons and equipment is challenging under complex ground backgrounds and changing-weather aerial backgrounds. Compared with traditional convolutional neural networks, the Capsule Network (CapsNet) is better suited to identifying weapons and equipment in such backgrounds because it was the first network to take vectors as input, which preserves characteristic information such as the direction and the angle of the target. This paper therefore proposes a radar target classification algorithm that combines CapsNetv2 with infrared lidar. The algorithm simplifies the 9 × 9 convolutional layer of the traditional capsule network using a 1 × 1 reduction layer and a 3 × 3 convolution kernel, and adopts a double capsule layer that yields two prediction boxes, improving recognition accuracy; at the same time, the vector output retains direction and angle information, so radar targets can be classified more accurately in a variety of complex backgrounds. Applied to the MSTAR dataset, the proposed method raises radar target positioning accuracy to 99.5%. Finally, compared with AlexNet (designed by Alex Krizhevsky) and YOLOv4, the proposed radar target recognition method identifies weapons and equipment in complex backgrounds accurately and quickly, and its results compare favorably with those of the other methods. The proposed method significantly improves the efficiency of military inspections.
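The architectural change described in the abstract (replacing the traditional 9 × 9 capsule-network convolution with a 1 × 1 reduction layer plus 3 × 3 kernels, and stacking two capsule layers) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' code: all channel counts, capsule dimensions, the 64 × 64 input size, and the dynamic-routing details are assumptions, since the abstract gives no hyperparameters.

```python
# Minimal sketch of a CapsNetv2-style stem: 1x1 reduction + 3x3 kernels
# instead of the classic 9x9 convolution, then a double capsule layer.
# All sizes below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    # Capsule nonlinearity: short vectors shrink toward zero, long ones toward unit length.
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


class CapsLayer(nn.Module):
    # Fully connected capsule layer with dynamic routing (Sabour et al., 2017).
    def __init__(self, in_caps, in_dim, out_caps, out_dim, iters=3):
        super().__init__()
        self.iters = iters
        self.W = nn.Parameter(0.01 * torch.randn(out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):                                  # u: (B, in_caps, in_dim)
        u_hat = torch.einsum('oidk,bik->boid', self.W, u)  # predicted output poses
        b = torch.zeros(u_hat.shape[:3], device=u.device)  # routing logits (B, o, i)
        for _ in range(self.iters):
            c = F.softmax(b, dim=1)                        # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=2))
            b = b + (u_hat * v.unsqueeze(2)).sum(dim=-1)   # agreement update
        return v                                           # (B, out_caps, out_dim)


class CapsNetV2Sketch(nn.Module):
    def __init__(self, in_ch=1, num_classes=10):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, 64, kernel_size=1)  # 1x1 reduction layer
        self.conv3a = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)
        self.conv3b = nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1)
        self.primary = nn.Conv2d(256, 32 * 8, kernel_size=3, stride=2, padding=1)
        # "Double capsule layer": two stacked routing layers instead of one.
        self.caps1 = CapsLayer(in_caps=32 * 8 * 8, in_dim=8, out_caps=16, out_dim=12)
        self.caps2 = CapsLayer(in_caps=16, in_dim=12, out_caps=num_classes, out_dim=16)

    def forward(self, x):                                  # x: (B, in_ch, 64, 64)
        h = F.relu(self.conv3b(F.relu(self.conv3a(F.relu(self.reduce(x))))))
        p = self.primary(h)                                # (B, 32*8, 8, 8)
        B = p.size(0)
        # 32 capsule types x 8x8 locations -> 2048 primary capsules of dim 8.
        u = squash(p.view(B, 32, 8, 64).permute(0, 1, 3, 2).reshape(B, -1, 8))
        poses = self.caps2(self.caps1(u))                  # class capsules (B, C, 16)
        # Vector length = class score; the pose vector keeps direction/angle cues.
        return poses.norm(dim=-1), poses


if __name__ == "__main__":
    model = CapsNetV2Sketch(in_ch=1, num_classes=10)
    scores, poses = model(torch.randn(2, 1, 64, 64))
    print(scores.shape, poses.shape)                       # (2, 10) and (2, 10, 16)
```

The vector-valued class capsules are the point of the design: unlike a scalar softmax output, each pose vector keeps orientation information that a downstream localization step can exploit, which is what the abstract credits for accurate positioning in clutter.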

Original language: English
Article number: 4349009
Journal: Wireless Communications and Mobile Computing
Volume: 2022
Publication status: Published - 2022
