TY - JOUR
T1 - Active Perception for Visual-Language Navigation
AU - Wang, Hanqing
AU - Wang, Wenguan
AU - Liang, Wei
AU - Hoi, Steven C.H.
AU - Shen, Jianbing
AU - Van Gool, Luc
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2023/3
Y1 - 2023/3
N2 - Visual-language navigation (VLN) is the task in which an agent is required to carry out navigational instructions inside photo-realistic environments. One of the key challenges in VLN is how to conduct robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment. Agents trained with current approaches typically suffer from this uncertainty and consequently struggle to take navigation actions at every step. In contrast, when humans face such a challenge, they can still maintain robust navigation by actively exploring their surroundings to gather more information and thus make more confident navigation decisions. This work draws inspiration from human navigation behavior and endows an agent with an active perception ability for more intelligent navigation. To achieve this, we propose an end-to-end framework for learning an exploration policy that decides (i) when and where to explore, (ii) what information is worth gathering during exploration, and (iii) how to adjust the navigation decision after exploration. In this way, the agent is able to turn its past experiences, as well as newly explored knowledge, into context for more confident navigation decision making. In addition, an external memory explicitly stores the visited visual environments, allowing the agent to adopt a late action-taking strategy that avoids duplicate exploration and navigation movements. Our experimental results on two standard benchmark datasets show that promising exploration strategies emerge from training, leading to a significant boost in navigation performance.
AB - Visual-language navigation (VLN) is the task in which an agent is required to carry out navigational instructions inside photo-realistic environments. One of the key challenges in VLN is how to conduct robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment. Agents trained with current approaches typically suffer from this uncertainty and consequently struggle to take navigation actions at every step. In contrast, when humans face such a challenge, they can still maintain robust navigation by actively exploring their surroundings to gather more information and thus make more confident navigation decisions. This work draws inspiration from human navigation behavior and endows an agent with an active perception ability for more intelligent navigation. To achieve this, we propose an end-to-end framework for learning an exploration policy that decides (i) when and where to explore, (ii) what information is worth gathering during exploration, and (iii) how to adjust the navigation decision after exploration. In this way, the agent is able to turn its past experiences, as well as newly explored knowledge, into context for more confident navigation decision making. In addition, an external memory explicitly stores the visited visual environments, allowing the agent to adopt a late action-taking strategy that avoids duplicate exploration and navigation movements. Our experimental results on two standard benchmark datasets show that promising exploration strategies emerge from training, leading to a significant boost in navigation performance.
KW - Active perception
KW - Curriculum reinforcement learning
KW - Visual-language navigation
UR - http://www.scopus.com/inward/record.url?scp=85143213809&partnerID=8YFLogxK
U2 - 10.1007/s11263-022-01721-6
DO - 10.1007/s11263-022-01721-6
M3 - Article
AN - SCOPUS:85143213809
SN - 0920-5691
VL - 131
SP - 607
EP - 625
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
IS - 3
ER -