TY - JOUR
T1 - Sea-Net: Visual Cognition-enabled Sample and Embedding Adaptive Network for SAR Image Object Classification
AU - Fan, Lili
AU - Zeng, Changxian
AU - Liu, Hongmei
AU - Liu, Jianjian
AU - Li, Yunjie
AU - Cao, Dongpu
N1 - Publisher Copyright:
IEEE
PY - 2023
Y1 - 2023
N2 - In autonomous driving, the perception module typically combines millimeter-wave radar and LiDAR. In challenging environmental conditions, however, this combination fails to effectively acquire geometric shape information about the surroundings. We therefore propose an alternative perception approach that employs Synthetic Aperture Radar (SAR). Existing algorithms, however, rely heavily on large-scale datasets. In light of this, we propose a meta-learning framework, named Sample and Embedding Adaptive Network (Sea-Net), for few-shot SAR image object classification. Furthermore, because the semantics of SAR images differ from those of traditional optical images, data augmentation methods that are effective on optical images are less so on SAR images. Based on this observation, we introduce a self-adaptive augmentation algorithm for the center of the target domain, which performs self-adaptive augmentation based on the semantic features of SAR images; the entire augmentation stage can be parallelized to speed up computation. Moreover, the SAR imaging principle produces coherent speckle noise with interlaced bright and dark patterns, which shrinks the inter-class distances when SAR images are mapped into the embedding space. To address this issue, we propose an edge-ambiguous embedding correction based on the self-attention mechanism, which effectively increases the distance between different classes. Experimental results on the MSTAR dataset demonstrate that the proposed model outperforms existing methods.
AB - In autonomous driving, the perception module typically combines millimeter-wave radar and LiDAR. In challenging environmental conditions, however, this combination fails to effectively acquire geometric shape information about the surroundings. We therefore propose an alternative perception approach that employs Synthetic Aperture Radar (SAR). Existing algorithms, however, rely heavily on large-scale datasets. In light of this, we propose a meta-learning framework, named Sample and Embedding Adaptive Network (Sea-Net), for few-shot SAR image object classification. Furthermore, because the semantics of SAR images differ from those of traditional optical images, data augmentation methods that are effective on optical images are less so on SAR images. Based on this observation, we introduce a self-adaptive augmentation algorithm for the center of the target domain, which performs self-adaptive augmentation based on the semantic features of SAR images; the entire augmentation stage can be parallelized to speed up computation. Moreover, the SAR imaging principle produces coherent speckle noise with interlaced bright and dark patterns, which shrinks the inter-class distances when SAR images are mapped into the embedding space. To address this issue, we propose an edge-ambiguous embedding correction based on the self-attention mechanism, which effectively increases the distance between different classes. Experimental results on the MSTAR dataset demonstrate that the proposed model outperforms existing methods.
KW - Data models
KW - Few-shot
KW - Metalearning
KW - Radar
KW - Radar imaging
KW - Radar polarimetry
KW - Synthetic aperture radar
KW - Task analysis
KW - data augmentation
KW - meta-learning
KW - synthetic aperture radar (SAR)
UR - https://www.scopus.com/pages/publications/85174856732
U2 - 10.1109/TIV.2023.3326169
DO - 10.1109/TIV.2023.3326169
M3 - Article
AN - SCOPUS:85174856732
SN - 2379-8858
SP - 1
EP - 14
JO - IEEE Transactions on Intelligent Vehicles
JF - IEEE Transactions on Intelligent Vehicles
ER -