TY - JOUR
T1 - UAV autonomous landing navigation method based on image semantic segmentation
AU - Shang, Kejun
AU - Zheng, Xin
AU - Wang, Liujun
AU - Hu, Guangfeng
AU - Liu, Chongliang
N1 - Publisher Copyright:
© 2020, Editorial Department of Journal of Chinese Inertial Technology. All rights reserved.
PY - 2020/10
Y1 - 2020/10
N2 - A UAV auto-landing navigation method based on deep convolutional neural network image semantic segmentation is proposed for the application scenario of UAV auto-landing in complex electromagnetic combat environments. Firstly, a lightweight and efficient end-to-end runway detection neural network named RunwayNet is designed. In the feature extraction part, ShuffleNet V2 is reworked with dilated convolution to obtain a backbone network with adjustable output feature map resolution. A self-attention module based on the self-attention mechanism is designed so that the network has global runway feature extraction capability. Secondly, a decoder module is designed by fusing the rich detail and spatial location information of the low-level layers with the coarse, abstract semantic segmentation information of the high-level layers to obtain a fine runway detection output. Finally, an edge-line extraction and pose estimation algorithm based on the segmented runway area is proposed to realize relative pose calculation. The results of simulations and airborne experiments show that precise segmentation and recognition of the runway area during UAV landing can be realized on an embedded real-time computing platform; the operating distance reaches 3 km and the success rate is close to 90%. The problems of runway identification blind zones and real-time performance during landing are solved, and the robustness of UAV landing in complex environments is significantly improved.
AB - A UAV auto-landing navigation method based on deep convolutional neural network image semantic segmentation is proposed for the application scenario of UAV auto-landing in complex electromagnetic combat environments. Firstly, a lightweight and efficient end-to-end runway detection neural network named RunwayNet is designed. In the feature extraction part, ShuffleNet V2 is reworked with dilated convolution to obtain a backbone network with adjustable output feature map resolution. A self-attention module based on the self-attention mechanism is designed so that the network has global runway feature extraction capability. Secondly, a decoder module is designed by fusing the rich detail and spatial location information of the low-level layers with the coarse, abstract semantic segmentation information of the high-level layers to obtain a fine runway detection output. Finally, an edge-line extraction and pose estimation algorithm based on the segmented runway area is proposed to realize relative pose calculation. The results of simulations and airborne experiments show that precise segmentation and recognition of the runway area during UAV landing can be realized on an embedded real-time computing platform; the operating distance reaches 3 km and the success rate is close to 90%. The problems of runway identification blind zones and real-time performance during landing are solved, and the robustness of UAV landing in complex environments is significantly improved.
KW - Image semantic segmentation
KW - Pose estimation
KW - Runway detection
KW - Self-attention module
UR - http://www.scopus.com/inward/record.url?scp=85101185691&partnerID=8YFLogxK
U2 - 10.13695/j.cnki.12-1222/o3.2020.05.004
DO - 10.13695/j.cnki.12-1222/o3.2020.05.004
M3 - Article
AN - SCOPUS:85101185691
SN - 1005-6734
VL - 28
SP - 586
EP - 594
JO - Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology
JF - Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology
IS - 5
ER -