CompleteDT: Point cloud completion with information-perception transformers

Jun Li, Shangwei Guo, Luhan Wang, Shaokun Han*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this work, we propose a novel point cloud completion network, called CompleteDT. To fully capture the 3D geometric structure of point clouds, we introduce an Information-Perception Transformer (IPT) that simultaneously captures local features and global geometric relations. CompleteDT comprises a Feature Encoder, a Query Generator, and a Query Decoder. The Feature Encoder extracts local features from multi-resolution point clouds to capture intricate geometric structures. The Query Generator uses the proposed IPT, built from the Point Local Attention (PLA) and Point Global Attention (PGA) modules, to learn local features and global correlations and to generate query features that represent the predicted point cloud. PLA captures local information by adaptively weighting each point's neighbors, while PGA adapts multi-head self-attention into a layer-by-layer form in which each head learns global features in a high-dimensional space of a different dimension. Through dense connections, the module allows direct information exchange between heads and facilitates the capture of long-range global correlations. By combining the strengths of PLA and PGA, the IPT fully leverages local and global features, enabling CompleteDT to complete point clouds. Finally, the query features are refined by the Query Decoder to generate a complete point cloud. Our experimental results demonstrate that CompleteDT outperforms current state-of-the-art methods, effectively learning from incomplete inputs and predicting complete outputs.
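To make the two attention mechanisms described above more concrete, the following PyTorch sketch shows one plausible way a local attention module (adaptive weighting of a point's k nearest neighbors) and a densely connected, layer-by-layer multi-head global attention module could be structured. All class names, tensor shapes, the neighbor count k, and the dense-connection scheme are assumptions made purely for illustration; this is not the authors' implementation of PLA or PGA.

```python
# Minimal, illustrative sketch of neighborhood-weighted local attention and
# layer-by-layer global attention with dense connections. All design details
# here are assumptions for illustration, not the CompleteDT reference code.
import torch
import torch.nn as nn


def knn_group(points, feats, k):
    """Gather features of the k nearest neighbors of every point.

    points: (B, N, 3) coordinates, feats: (B, N, C) per-point features.
    Returns neighbor features of shape (B, N, k, C). Note that each point's
    own feature is included, since its distance to itself is zero.
    """
    dist = torch.cdist(points, points)                   # (B, N, N)
    idx = dist.topk(k, dim=-1, largest=False).indices    # (B, N, k)
    b = torch.arange(points.size(0), device=points.device).view(-1, 1, 1)
    return feats[b, idx]                                 # (B, N, k, C)


class PointLocalAttention(nn.Module):
    """Hypothetical PLA-style module: adaptively weight each point's neighbors."""

    def __init__(self, dim, k=16):
        super().__init__()
        self.k = k
        self.to_weight = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, points, feats):
        neigh = knn_group(points, feats, self.k)          # (B, N, k, C)
        center = feats.unsqueeze(2).expand_as(neigh)      # (B, N, k, C)
        # Per-neighbor, per-channel weights learned from (center, offset) pairs.
        w = torch.softmax(
            self.to_weight(torch.cat([center, neigh - center], dim=-1)), dim=2)
        return (w * neigh).sum(dim=2)                     # (B, N, C)


class PointGlobalAttention(nn.Module):
    """Hypothetical PGA-style module: attention heads applied one after another,
    each densely connected to the input and to all previous heads' outputs."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.proj = nn.ModuleList(
            nn.Linear(dim * (i + 1), dim) for i in range(num_heads))
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
            for _ in range(num_heads))
        self.out = nn.Linear(dim * num_heads, dim)

    def forward(self, feats):
        collected = [feats]
        for proj, attn in zip(self.proj, self.attn):
            # Dense connection: concatenate the input and every earlier head.
            x = proj(torch.cat(collected, dim=-1))
            y, _ = attn(x, x, x)
            collected.append(y)
        return self.out(torch.cat(collected[1:], dim=-1))


if __name__ == "__main__":
    pts = torch.rand(2, 256, 3)
    f = torch.rand(2, 256, 64)
    f = PointLocalAttention(64)(pts, f)
    f = PointGlobalAttention(64)(f)
    print(f.shape)  # torch.Size([2, 256, 64])
```

In this reading, the dense connections give every later head direct access to all earlier heads' outputs, which is one simple way to realize the "direct information exchange between each head" described in the abstract; the paper itself should be consulted for the actual formulation.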

Original language: English
Article number: 127790
Journal: Neurocomputing
Volume: 592
DOI: 10.1016/j.neucom.2024.127790
Publication status: Published - 1 Aug 2024

Keywords

  • 3D point cloud
  • 3D reconstruction
  • Point cloud completion
  • Transformer

Cite this

Li, J., Guo, S., Wang, L., & Han, S. (2024). CompleteDT: Point cloud completion with information-perception transformers. Neurocomputing, 592, Article 127790. https://doi.org/10.1016/j.neucom.2024.127790