TY - GEN
T1 - Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-Language Navigation
AU - Wang, Hanqing
AU - Liang, Wei
AU - Shen, Jianbing
AU - Van Gool, Luc
AU - Wang, Wenguan
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Since the rise of vision-language navigation (VLN), great progress has been made in instruction following - building a follower to navigate environments under the guidance of instructions. However, far less attention has been paid to the inverse task: instruction generation - learning a speaker to generate grounded descriptions for navigation routes. Existing VLN methods train a speaker independently and often treat it as a data augmentation tool to strengthen the follower, while ignoring rich cross-task relations. Here we describe an approach that learns the two tasks simultaneously and exploits their intrinsic correlations to boost the training of each: the follower judges whether the speaker-created instruction explains the original navigation route correctly, and vice versa. Without the need for aligned instruction-path pairs, such a cycle-consistent learning scheme is complementary to task-specific training targets defined on labeled data, and can also be applied to unlabeled paths (sampled without paired instructions). Another agent, called the creator, is added to generate counterfactual environments. It greatly changes current scenes yet leaves novel items - which are vital for the execution of original instructions - unchanged. Thus more informative training scenes are synthesized, and the three agents compose a powerful VLN learning system. Extensive experiments on a standard benchmark show that our approach improves the performance of various follower models and produces accurate navigation instructions.
AB - Since the rise of vision-language navigation (VLN), great progress has been made in instruction following - building a follower to navigate environments under the guidance of instructions. However, far less attention has been paid to the inverse task: instruction generation - learning a speaker to generate grounded descriptions for navigation routes. Existing VLN methods train a speaker independently and often treat it as a data augmentation tool to strengthen the follower, while ignoring rich cross-task relations. Here we describe an approach that learns the two tasks simultaneously and exploits their intrinsic correlations to boost the training of each: the follower judges whether the speaker-created instruction explains the original navigation route correctly, and vice versa. Without the need for aligned instruction-path pairs, such a cycle-consistent learning scheme is complementary to task-specific training targets defined on labeled data, and can also be applied to unlabeled paths (sampled without paired instructions). Another agent, called the creator, is added to generate counterfactual environments. It greatly changes current scenes yet leaves novel items - which are vital for the execution of original instructions - unchanged. Thus more informative training scenes are synthesized, and the three agents compose a powerful VLN learning system. Extensive experiments on a standard benchmark show that our approach improves the performance of various follower models and produces accurate navigation instructions.
KW - Vision + language
UR - http://www.scopus.com/inward/record.url?scp=85135456456&partnerID=8YFLogxK
U2 - 10.1109/CVPR52688.2022.01503
DO - 10.1109/CVPR52688.2022.01503
M3 - Conference contribution
AN - SCOPUS:85135456456
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 15450
EP - 15460
BT - Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
PB - IEEE Computer Society
T2 - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
Y2 - 19 June 2022 through 24 June 2022
ER -