Abstract
A UAV auto-landing navigation method based on image semantic segmentation with a deep convolutional neural network is proposed for UAV auto-landing in complex electromagnetic combat environments. First, a lightweight, efficient end-to-end runway detection network named RunwayNet is designed. In the feature extraction part, ShuffleNet V2 is modified with dilated (atrous) convolution to obtain a backbone whose output feature map resolution is adjustable. A self-attention module is designed so that the network can extract global runway features. Second, a decoder module is designed that fuses the rich detail and spatial location information of the low-level layers with the coarse, abstract semantic information of the high-level layers to produce a fine-grained runway segmentation output. Finally, an edge line extraction and pose estimation algorithm based on the segmented runway area is proposed to compute the relative pose. Simulation and airborne experiment results show that precise segmentation and recognition of the runway area during landing can be achieved on an embedded real-time computing platform, with an operating distance of up to 3 km and a success rate close to 90%. The method eliminates the runway identification blind zone, meets the real-time requirements of the landing process, and significantly improves the robustness of UAV landing in complex environments.
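The resolution-adjustment idea in the abstract, replacing late downsampling strides with dilated convolutions so the backbone's output feature map stays larger, can be illustrated with a small arithmetic sketch. The stage strides below mimic a generic ShuffleNet V2-style backbone; the values and the `output_resolution` helper are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: effect of swapping stride-2 stages for dilated
# (stride-1) stages on the final feature map size. Stage strides are
# illustrative, not the paper's actual configuration.

def output_resolution(input_size, stage_strides):
    """Spatial side length of the final feature map after each stage stride."""
    size = input_size
    for s in stage_strides:
        size //= s
    return size

# Standard backbone: stem stride 4, then three stride-2 stages (output stride 32).
standard = output_resolution(512, [4, 2, 2, 2])  # 16 x 16 feature map

# Dilated variant: the last two stages keep stride 1 and instead use
# dilation to preserve their receptive field, giving output stride 8.
dilated = output_resolution(512, [4, 2, 1, 1])   # 64 x 64 feature map
```

Keeping a larger (e.g. 1/8-resolution) feature map is what lets the decoder recover fine runway boundaries without the backbone losing receptive field.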
| Translated title of the contribution | Image semantic segmentation-based navigation method for UAV auto-landing |
|---|---|
| Original language | Chinese (Traditional) |
| Pages (from-to) | 586-594 |
| Number of pages | 9 |
| Journal | Zhongguo Guanxing Jishu Xuebao/Journal of Chinese Inertial Technology |
| Volume | 28 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - Oct 2020 |