TY - JOUR
T1 - CFC-Net
T2 - A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote-Sensing Images
AU - Ming, Qi
AU - Miao, Lingjuan
AU - Zhou, Zhiqiang
AU - Dong, Yunpeng
N1 - Publisher Copyright:
© 1980-2012 IEEE.
PY - 2022
Y1 - 2022
N2 - Object detection in optical remote-sensing images is an important and challenging task. In recent years, methods based on convolutional neural networks (CNNs) have made good progress. However, due to large variations in object scale and aspect ratio, as well as arbitrary orientations, detection performance remains difficult to improve further. In this article, we discuss the role of discriminative features in object detection and then propose a critical feature capturing network (CFC-Net) to improve detection accuracy from three aspects: building powerful feature representations, refining preset anchors, and optimizing label assignment. Specifically, we first decouple the classification and regression features and then construct robust critical features adapted to the respective tasks of classification and regression through the polarization attention module (PAM). With the extracted discriminative regression features, the rotation anchor refinement module (R-ARM) performs localization refinement on preset horizontal anchors to obtain superior rotation anchors. Next, a dynamic anchor learning (DAL) strategy is introduced to adaptively select high-quality anchors based on their ability to capture critical features. The proposed framework creates more powerful semantic representations for objects in remote-sensing images and achieves high-performance real-time object detection. Experimental results on three remote-sensing datasets, HRSC2016, DOTA, and UCAS-AOD, show that our method achieves superior detection performance compared with many state-of-the-art approaches. Code and models are available at https://github.com/ming71/CFC-Net.
AB - Object detection in optical remote-sensing images is an important and challenging task. In recent years, methods based on convolutional neural networks (CNNs) have made good progress. However, due to large variations in object scale and aspect ratio, as well as arbitrary orientations, detection performance remains difficult to improve further. In this article, we discuss the role of discriminative features in object detection and then propose a critical feature capturing network (CFC-Net) to improve detection accuracy from three aspects: building powerful feature representations, refining preset anchors, and optimizing label assignment. Specifically, we first decouple the classification and regression features and then construct robust critical features adapted to the respective tasks of classification and regression through the polarization attention module (PAM). With the extracted discriminative regression features, the rotation anchor refinement module (R-ARM) performs localization refinement on preset horizontal anchors to obtain superior rotation anchors. Next, a dynamic anchor learning (DAL) strategy is introduced to adaptively select high-quality anchors based on their ability to capture critical features. The proposed framework creates more powerful semantic representations for objects in remote-sensing images and achieves high-performance real-time object detection. Experimental results on three remote-sensing datasets, HRSC2016, DOTA, and UCAS-AOD, show that our method achieves superior detection performance compared with many state-of-the-art approaches. Code and models are available at https://github.com/ming71/CFC-Net.
KW - Convolutional neural networks (CNNs)
KW - critical features
KW - deep learning
KW - object detection
UR - http://www.scopus.com/inward/record.url?scp=85110901408&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2021.3095186
DO - 10.1109/TGRS.2021.3095186
M3 - Article
AN - SCOPUS:85110901408
SN - 0196-2892
VL - 60
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
ER -