Double-Shot 3D Shape Measurement with a Dual-Branch Network for Structured Light Projection Profilometry

Mingyang Lei, Jingfan Fan*, Long Shao*, Hong Song, Deqiang Xiao, Danni Ai, Tianyu Fu, Yucong Lin, Ying Gu, Jian Yang*

*Corresponding authors for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Structured light (SL)-based three-dimensional (3D) measurement techniques combined with deep learning have been widely studied to improve measurement efficiency, among which fringe projection profilometry (FPP) and speckle projection profilometry (SPP) are two popular methods. However, they generally use a single projection pattern for reconstruction, resulting in fringe order ambiguity or poor reconstruction accuracy. To alleviate these problems, we propose a parallel dual-branch Convolutional Neural Network (CNN)-Transformer network (PDCNet) that exploits both convolutional operations and self-attention mechanisms to process different SL modalities. Within PDCNet, a Transformer branch captures global context in the fringe images, while a CNN branch collects local details in the speckle images. To fully integrate these complementary features, we design a double-stream attention aggregation module (DAAM), a parallel attention subnetwork that aggregates multi-scale spatial structure information and dynamically retains both local and global representations. Moreover, an adaptive mixture density head with a bimodal Gaussian distribution is proposed to learn a representation that remains precise near discontinuities. Compared with the standard disparity regression strategy, this adaptive mixture head effectively improves performance at object boundaries. Extensive experiments demonstrate that our method reduces fringe order ambiguity while producing high-accuracy results on our self-built datasets.
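The abstract describes PDCNet only at a high level. The following is a minimal, hypothetical PyTorch sketch of the dual-branch idea: a Transformer branch that extracts global context from the fringe image, a CNN branch that extracts local detail from the speckle image, and a simple per-pixel attention gate standing in for the fusion step. All module names, channel sizes, and the fusion rule are illustrative assumptions, not the authors' PDCNet/DAAM implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNNBranch(nn.Module):
    """Local-detail encoder for the speckle image (assumed structure)."""
    def __init__(self, in_ch=1, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class TransformerBranch(nn.Module):
    """Global-context encoder for the fringe image (assumed structure)."""
    def __init__(self, in_ch=1, ch=64, patch=8):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(in_ch, ch, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=ch, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        tok = self.embed(x)                                  # B, C, H/p, W/p
        b, c, h, w = tok.shape
        seq = self.encoder(tok.flatten(2).transpose(1, 2))   # B, N, C
        tok = seq.transpose(1, 2).reshape(b, c, h, w)
        # Upsample tokens back to full resolution for pixel-wise fusion.
        return F.interpolate(tok, scale_factor=self.patch, mode="bilinear",
                             align_corners=False)


class AttentionFusion(nn.Module):
    """Toy stand-in for the paper's DAAM: a per-pixel gate that mixes the
    local (CNN) and global (Transformer) feature maps."""
    def __init__(self, ch=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, f_local, f_global):
        a = self.gate(torch.cat([f_local, f_global], dim=1))
        return a * f_local + (1 - a) * f_global


# Usage: fringe and speckle images of the same scene, e.g. 1 x 256 x 256.
fringe = torch.randn(2, 1, 256, 256)
speckle = torch.randn(2, 1, 256, 256)
fused = AttentionFusion()(CNNBranch()(speckle), TransformerBranch()(fringe))
print(fused.shape)  # torch.Size([2, 64, 256, 256])
```

Similarly, the adaptive mixture density head can be pictured as a layer that predicts, for every pixel, two Gaussian modes and a mixing weight, trained with the negative log-likelihood of the ground truth under the two-mode mixture; keeping the modes separate, rather than regressing a single value, is what preserves sharp discontinuities at object boundaries. The parameterization below is again an assumption, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class BimodalGaussianHead(nn.Module):
    """Hypothetical mixture-density output head: per pixel, two Gaussian
    modes (mu, sigma) plus a mixing weight, trained by negative
    log-likelihood."""
    def __init__(self, in_ch=64):
        super().__init__()
        # 5 channels per pixel: mu1, mu2, log_sigma1, log_sigma2, mix logit
        self.proj = nn.Conv2d(in_ch, 5, kernel_size=1)

    def forward(self, feat):
        p = self.proj(feat)
        mu = p[:, 0:2]                                # B, 2, H, W
        sigma = torch.exp(p[:, 2:4]).clamp(min=1e-3)  # positive scales
        w1 = torch.sigmoid(p[:, 4:5])
        w = torch.cat([w1, 1.0 - w1], dim=1)          # weights sum to 1
        return mu, sigma, w

    @staticmethod
    def nll(mu, sigma, w, target):
        """Negative log-likelihood of target (B, H, W) under the mixture."""
        log_prob = torch.distributions.Normal(mu, sigma).log_prob(
            target.unsqueeze(1))                      # B, 2, H, W
        log_mix = torch.logsumexp(torch.log(w + 1e-8) + log_prob, dim=1)
        return -log_mix.mean()

    @staticmethod
    def point_estimate(mu, w):
        """Take the mean of the dominant mode; unlike averaging the modes,
        this stays sharp at depth/disparity discontinuities."""
        idx = w.argmax(dim=1, keepdim=True)
        return mu.gather(1, idx).squeeze(1)


# Usage with fused features from the sketch above (assumed 64 channels).
head = BimodalGaussianHead(in_ch=64)
feat = torch.randn(2, 64, 256, 256)
gt = torch.randn(2, 256, 256)
mu, sigma, w = head(feat)
loss = head.nll(mu, sigma, w, gt)
pred = head.point_estimate(mu, w)
print(loss.item(), pred.shape)  # scalar loss, torch.Size([2, 256, 256])
```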
