Visual End-to-End Autonomous Navigation System for UAV

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper presents a deep reinforcement learning navigation framework for unknown indoor scenes that takes visual information and UAV motion information as inputs. By extracting and fusing visual, motion, and temporal features, the framework improves the adaptability of UAVs to complex environments and their transferability across different environments. Based on the AirSim simulation environment, a discrete action set for the UAV was designed and experimentally validated on target-point navigation tasks in different indoor environments. Experiments show that the proposed navigation network effectively completes various navigation tasks and exhibits a degree of generalization.
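The abstract mentions a discrete action set that the reinforcement learning policy selects from. The paper's actual action definitions are not given in this record; the following is a minimal illustrative sketch, assuming a typical quadrotor action set of body-frame velocity and yaw-rate commands (all names and values here are hypothetical):

```python
# Hypothetical discrete action set for a quadrotor RL agent.
# The actual actions used in the paper are not specified in this record.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    vx: float        # forward velocity, m/s (body frame)
    vz: float        # vertical velocity, m/s (positive down, NED convention)
    yaw_rate: float  # yaw rate, deg/s

# Six discrete actions the policy network could index into.
ACTIONS = [
    Action("forward",   1.0,  0.0,   0.0),
    Action("yaw_left",  0.0,  0.0, -30.0),
    Action("yaw_right", 0.0,  0.0,  30.0),
    Action("ascend",    0.0, -0.5,   0.0),
    Action("descend",   0.0,  0.5,   0.0),
    Action("hover",     0.0,  0.0,   0.0),
]

def decode(action_index: int) -> Action:
    """Map the policy's discrete output index to a velocity command."""
    return ACTIONS[action_index]
```

In an AirSim-based setup, each decoded command would typically be forwarded to the simulator's velocity-control interface for a fixed time step before the next observation is taken.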

Original language: English
Title of host publication: Proceedings of 2024 12th China Conference on Command and Control
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 309-320
Number of pages: 12
ISBN (Print): 9789819777730
Publication status: Published - 2024
Event: 12th China Conference on Command and Control, C2 2024 - Beijing, China
Duration: 17 May 2024 – 18 May 2024

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 1267 LNEE
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119

Conference

Conference: 12th China Conference on Command and Control, C2 2024
Country/Territory: China
City: Beijing
Period: 17/05/24 – 18/05/24

Keywords

  • Autonomous navigation
  • Deep reinforcement learning
  • UAV
