Abstract
The unmanned lunar rover is essential for lunar exploration and construction. Because communication between Earth and the Moon incurs a delay, the environment in which a plan executes can differ from the one human operators observed when the plan was made. Given the possible discrepancies between the environment assumed by the planner and the real environment during sampling tasks on the Moon, a planner that generates short plans quickly is needed. This paper therefore presents a planner, based on deep reinforcement learning (DRL), for both standard and emergency planning. The planner can create a full-range plan 13.5 times faster than a traditional planner on complex problems, or 10.1 times faster while controlling the rover step by step in the state space. Based on a specific lunar sampling scenario, we propose a tracking reward that guides the rover's search through the state space. The DRL architecture is built from a matrix representation of the state space, randomly sampled training state pairs, and reference plans generated by a custom breadth-first search (BFS) planner for the tracking reward. The BFS planner uses a custom state hashing algorithm and prepares the training state pairs for safety and flexibility. Training and planning tests validate the effectiveness, robustness and customizability of the proposed method in a planning domain with multiple rovers. Our model can handle three kinds of emergencies, even when they occur frequently, and its success rate exceeds that of the state-of-the-art model. When facing emergencies, the average response time of our model is 324 times faster than that of the classical planner.
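The reference-plan idea in the abstract — a BFS planner over a hashed state space whose shortest plans feed a tracking reward — can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: the grid, the move set, and the `state_hash` stand-in are all hypothetical, and the paper's actual state space, hash algorithm, and multi-rover handling are richer than a single rover on a 2-D occupancy grid.

```python
from collections import deque

# Hypothetical single-rover sketch of a BFS reference planner.
# States are (row, col) grid positions; a custom hash function makes
# them dict-key friendly so visited states are deduplicated.
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def state_hash(state):
    # Trivial stand-in for the paper's custom state hash:
    # tuples are already hashable, so return the state itself.
    return state

def bfs_plan(start, goal, grid):
    """Return a shortest action sequence from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [])])
    visited = {state_hash(start)}
    while frontier:
        (r, c), plan = frontier.popleft()
        if (r, c) == goal:
            return plan
        for action, (dr, dc) in MOVES.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                h = state_hash((nr, nc))
                if h not in visited:
                    visited.add(h)
                    frontier.append(((nr, nc), plan + [action]))
    return None  # goal unreachable

# 0 = free cell, 1 = obstacle; the middle row forces a detour.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
plan = bfs_plan((0, 0), (2, 0), grid)
print(plan)  # shortest detour around the obstacle row
```

In a tracking-reward setup, plans like this would serve as references: the DRL agent could be rewarded for how closely its step-by-step trajectory tracks the BFS plan between randomly sampled start/goal state pairs.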
Original language | English
---|---
Article number | 107287
Journal | Engineering Applications of Artificial Intelligence
Volume | 127
DOIs |
Publication status | Published - Jan 2024
Keywords
- Adaptive planner
- Automated planning
- Deep reinforcement learning
- Lunar rover