CompleteDT: Point cloud completion with information-perception transformers

Jun Li, Shangwei Guo, Luhan Wang, Shaokun Han*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

In this work, we propose a novel point cloud completion network, CompleteDT. To fully capture the 3D geometric structure of point clouds, we introduce an Information-Perception Transformer (IPT) that simultaneously captures local features and global geometric relations. CompleteDT comprises a Feature Encoder, a Query Generator, and a Query Decoder. The Feature Encoder extracts local features from multi-resolution point clouds to capture intricate geometric structures. The Query Generator uses the proposed IPT, built from the Point Local Attention (PLA) and Point Global Attention (PGA) modules, to learn local features and global correlations and to generate query features that represent the predicted point cloud. PLA captures local information within point neighborhoods by adaptively weighting neighboring points, while PGA adapts multi-head self-attention into a layer-by-layer form in which each head learns global features in a high-dimensional space of a different dimension. Through dense connections, the module allows direct information exchange between heads and facilitates the capture of long-range global correlations. By combining the strengths of PLA and PGA, the IPT fully leverages local and global features, enabling CompleteDT to complete point clouds. Finally, the query features are refined by the Query Decoder to generate a complete point cloud. Our experimental results demonstrate that CompleteDT outperforms current state-of-the-art methods, effectively learning from incomplete inputs and predicting complete outputs.
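The paper's implementation is not reproduced on this page, but the layer-by-layer, densely connected arrangement of attention heads described for PGA can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the class name PGASketch, the per-head projection widths, and the final fusion layer are hypothetical and not the authors' code; the sketch only shows how sequential heads with dense connections let each head see the outputs of all earlier heads.

```python
import torch
import torch.nn as nn

class PGASketch(nn.Module):
    """Illustrative sketch (not the authors' implementation) of the
    Point Global Attention idea: attention heads run sequentially
    rather than in parallel, and each head receives the input plus the
    concatenated outputs of all previous heads (dense connections),
    enabling direct information exchange between heads."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.heads = nn.ModuleList()
        for i in range(num_heads):
            # Head i sees the input features plus i earlier head outputs,
            # projected to a common width before global self-attention.
            in_dim = dim + i * dim
            self.heads.append(nn.ModuleDict({
                "proj": nn.Linear(in_dim, dim),
                "attn": nn.MultiheadAttention(dim, num_heads=1,
                                              batch_first=True),
            }))
        # Fuse the outputs of all heads back to the feature width.
        self.fuse = nn.Linear(num_heads * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) global per-point features
        outputs = []
        feats = x
        for head in self.heads:
            h = head["proj"](feats)
            h, _ = head["attn"](h, h, h)  # global self-attention over points
            outputs.append(h)
            # Dense connection: next head sees input + all head outputs so far.
            feats = torch.cat([x] + outputs, dim=-1)
        return self.fuse(torch.cat(outputs, dim=-1))

if __name__ == "__main__":
    pts = torch.randn(2, 128, 64)      # 2 clouds, 128 points, 64-dim features
    print(PGASketch(64)(pts).shape)    # torch.Size([2, 128, 64])
```

The sequential, densely connected layout is what distinguishes this scheme from standard parallel multi-head attention: later heads can condition on correlations already found by earlier heads, which is how the abstract motivates the capture of long-range global structure.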

Original language: English
Article number: 127790
Journal: Neurocomputing
Volume: 592
DOI
Publication status: Published - 1 Aug 2024

