Deformable 3D fusion: From partial dynamic 3D observations to complete 4D models

Weipeng Xu, Mathieu Salzmann, Yongtian Wang, Yue Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Citations (Scopus)

Abstract

Capturing the 3D motion of dynamic, non-rigid objects has attracted significant attention in computer vision. Existing methods typically require either complete 3D volumetric observations or a shape template. In this paper, we introduce a template-less 4D reconstruction method that incrementally fuses highly incomplete 3D observations of a deforming object and generates a complete, temporally coherent shape representation of the object. To this end, we design an online algorithm that alternately registers new observations to the current model estimate and updates the model. We demonstrate the effectiveness of our approach at reconstructing non-rigidly moving objects from highly incomplete measurements, on both sequences of partial 3D point clouds and Kinect videos.
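The alternating register-then-update loop described in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical simplification, not the paper's method: the paper performs non-rigid registration of partial observations, whereas this toy substitutes a single rigid ICP step (brute-force nearest-neighbour matching plus Kabsch alignment) and "fuses" by simply appending the aligned points; names such as fuse_observation are illustrative only.

```python
# Toy sketch of an online fuse loop: register each new partial observation
# to the current model estimate, then update the model. Rigid ICP stands in
# for the paper's non-rigid registration; concatenation stands in for fusion.
import numpy as np

def nearest_neighbours(src, dst):
    """For each point in src, return the closest point in dst (brute force)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return dst[d2.argmin(axis=1)]

def rigid_align(src, dst):
    """Kabsch: best-fit rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def fuse_observation(model, observation, icp_iters=10):
    """Registration step (iterated rigid ICP), then naive model update."""
    obs = observation.copy()
    for _ in range(icp_iters):
        matches = nearest_neighbours(obs, model)
        R, t = rigid_align(obs, matches)
        obs = obs @ R.T + t
    return np.vstack([model, obs])

# Usage: incrementally fuse a stream of noisy partial point clouds.
rng = np.random.default_rng(0)
model = rng.standard_normal((200, 3))            # initial partial scan
for _ in range(5):
    partial = model[rng.choice(len(model), 80)] \
        + 0.01 * rng.standard_normal((80, 3))
    model = fuse_observation(model, partial)
```

In the actual method, the registration step would have to be deformation-aware rather than rigid, and the update step would merge the new geometry into a single temporally coherent shape representation instead of concatenating points.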

Original language: English
Title of host publication: 2015 International Conference on Computer Vision, ICCV 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2183-2191
Number of pages: 9
ISBN (Electronic): 9781467383912
DOIs
Publication status: Published - 17 Feb 2016
Event: 15th IEEE International Conference on Computer Vision, ICCV 2015 - Santiago, Chile
Duration: 11 Dec 2015 - 18 Dec 2015

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2015 International Conference on Computer Vision, ICCV 2015
ISSN (Print): 1550-5499

Conference

Conference: 15th IEEE International Conference on Computer Vision, ICCV 2015
Country/Territory: Chile
City: Santiago
Period: 11/12/15 - 18/12/15
