Weakly-Supervised Single-view Dense 3D Point Cloud Reconstruction via Differentiable Renderer

Peng Jin, Shaoli Liu, Jianhua Liu*, Hao Huang, Linlin Yang, Michael Weinmann, Reinhard Klein

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

In recent years, addressing ill-posed problems by leveraging prior knowledge contained in databases through learning techniques has gained much attention. In this paper, we focus on complete three-dimensional (3D) point cloud reconstruction from a single red-green-blue (RGB) image, a task that cannot be approached with classical reconstruction techniques. For this purpose, we use an encoder-decoder framework to encode the RGB information in a latent space and to predict the 3D structure of the considered object from different viewpoints. The individual predictions are combined into a common representation that is passed to a module combining camera pose estimation and rendering, thereby achieving differentiability with respect to the imaging process and the camera pose and enabling optimization of the two-dimensional (2D) prediction error for novel viewpoints. Thus, our method allows end-to-end training and does not require supervision from additional ground-truth (GT) mask or camera pose annotations. Our evaluation on synthetic and real-world data demonstrates the robustness of our approach to appearance changes and self-occlusions; it outperforms current state-of-the-art methods in terms of accuracy, density, and model completeness.
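To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea in the abstract: an encoder-decoder maps a single RGB image to a point cloud and a camera pose, the points are pushed through a simple differentiable (soft) projection into a 2D occupancy map, and a 2D re-projection loss drives end-to-end training. All module names, dimensions, the orthographic camera model, and the dummy 2D targets are illustrative assumptions, not the authors' implementation or the paper's renderer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SingleViewPointCloudNet(nn.Module):
    """Encoder-decoder sketch: one RGB image -> N x 3 point cloud + camera pose."""

    def __init__(self, num_points: int = 1024, latent_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(                      # RGB image -> latent code
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                      # latent code -> point cloud
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3), nn.Tanh(),     # points in [-1, 1]^3
        )
        self.pose_head = nn.Linear(latent_dim, 2)          # predicted azimuth, elevation
        self.num_points = num_points

    def forward(self, image):
        z = self.encoder(image)
        points = self.decoder(z).view(-1, self.num_points, 3)
        pose = self.pose_head(z)
        return points, pose


def soft_project(points, pose, image_size=64, sigma=1.5):
    """Differentiable 'renderer' stand-in: rotate points by the predicted pose and
    splat them with Gaussian footprints into a soft 2D occupancy map (orthographic)."""
    az, el = pose[:, 0:1], pose[:, 1:2]                    # (B, 1) each
    x, y, z = points.unbind(-1)                            # (B, N) each
    x1 = torch.cos(az) * x + torch.sin(az) * z             # rotate about y (azimuth)
    z1 = -torch.sin(az) * x + torch.cos(az) * z
    y1 = torch.cos(el) * y - torch.sin(el) * z1            # rotate about x (elevation)
    u = (x1 * 0.5 + 0.5) * (image_size - 1)                # pixel coordinates
    v = (y1 * 0.5 + 0.5) * (image_size - 1)
    grid = torch.arange(image_size, device=points.device, dtype=points.dtype)
    du = (u.unsqueeze(-1) - grid) ** 2                     # (B, N, W)
    dv = (v.unsqueeze(-1) - grid) ** 2                     # (B, N, H)
    splat = torch.exp(-(dv.unsqueeze(-1) + du.unsqueeze(-2)) / (2 * sigma ** 2))
    return splat.amax(dim=1)                               # (B, H, W) soft silhouette


if __name__ == "__main__":
    model = SingleViewPointCloudNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy batch: RGB inputs plus 2D targets standing in for the re-projection
    # supervision at novel viewpoints (random data, for shape checking only).
    images = torch.rand(4, 3, 128, 128)
    targets = (torch.rand(4, 64, 64) > 0.5).float()
    points, pose = model(images)
    rendered = soft_project(points, pose)
    loss = F.binary_cross_entropy(rendered.clamp(1e-6, 1 - 1e-6), targets)
    loss.backward()                                        # gradients reach points and pose
    optimizer.step()
    print(points.shape, rendered.shape, float(loss))
```

The point of the sketch is the gradient path: because the splatting in soft_project is a smooth function of both the point coordinates and the predicted pose, a purely 2D re-projection loss can update the 3D prediction and the pose jointly, which is what permits training without GT mask or GT camera pose annotations.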

Original language: English
Article number: 93
Journal: Chinese Journal of Mechanical Engineering (English Edition)
Volume: 34
Issue number: 1
DOI
Publication status: Published - Dec 2021
