Y-Net: Learning Domain Robust Feature Representation for ground camera image and large-scale image-based point cloud registration

Weiquan Liu, Cheng Wang*, Shuting Chen, Xuesheng Bian, Baiqi Lai, Xuelun Shen, Ming Cheng, Shang Hong Lai, Dongdong Weng, Jonathan Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Registering 2D images (2D space) with a 3D model of the environment (3D space) provides a promising solution to outdoor Augmented Reality (AR) virtual-real registration. In this work, we use the position and orientation of a ground camera image to synthesize a corresponding rendered image from an outdoor large-scale image-based point cloud. To achieve virtual-real registration, we indirectly establish the spatial relationship between 2D and 3D space by matching these two kinds of cross-domain images (2D/3D space). However, matching cross-domain images is beyond the capability of handcrafted descriptors and existing deep neural networks. To address this issue, we propose an end-to-end network, Y-Net, to learn Domain Robust Feature Representations (DRFRs) for cross-domain images. In addition, we introduce a cross-domain-constrained loss function that balances the loss in image content against the cross-domain consistency of the feature representations. Experimental results show that the DRFRs simultaneously preserve the representation of image content and suppress the influence of the independent domains. Furthermore, Y-Net outperforms existing algorithms at extracting feature representations and achieves state-of-the-art performance in cross-domain image retrieval. Finally, we validate the Y-Net-based registration approach on a campus scene to demonstrate its applicability.
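The cross-domain-constrained loss described above can be pictured as a weighted sum of an image-content term and a feature-consistency term. Below is a minimal PyTorch-style sketch of that idea; the function name, the L1/MSE choices, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_domain_constrained_loss(feat_real, feat_render,
                                  recon_real, recon_render,
                                  img_real, img_render,
                                  lam=0.5):
    """Illustrative sketch: balance an image-content term against a
    cross-domain consistency term. Terms and weighting are assumptions."""
    # Content term: reconstructions should preserve each input image.
    content = (F.l1_loss(recon_real, img_real)
               + F.l1_loss(recon_render, img_render))
    # Consistency term: features of a matched real/rendered pair should
    # agree, suppressing domain-specific signals.
    consistency = F.mse_loss(feat_real, feat_render)
    return content + lam * consistency
```

Under this reading, the consistency term pulls matched camera/rendered feature pairs together, which is what allows retrieval across the domain gap, while the content term keeps the encoder from collapsing to trivial, content-free features.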

Original language: English
Pages (from-to): 655-677
Number of pages: 23
Journal: Information Sciences
Volume: 581
DOI
Publication status: Published - Dec 2021

