Splatting-Based View Synthesis for Self-supervised Monocular Depth Estimation

Jiahao Liu, Jianghao Leng, Bo Liu, Wenyi Huang, Chao Sun*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Self-supervised methods have shown great potential in monocular depth estimation, since they do not need expensive ground-truth depth labels and instead use only the photometric error of synthesized images as the supervision signal. However, although many methods have been proposed to improve performance, the occlusion problem has not been handled explicitly. This paper introduces a novel view synthesis module to deal with occluded pixels during image reconstruction. Specifically, we use bilinear splatting to forward-warp the source image and average pixels projected to the same location by the predicted depth. In addition, a valid-pixel mask is generated during projection to ignore invalid pixels. The proposed approach explicitly handles overlapping pixels and invalid areas of the synthesized image, thus improving the performance of self-supervised learning. We conduct extensive experiments, and the results show that our model generates clear and complete depth maps and achieves state-of-the-art performance.
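The abstract describes forward warping via bilinear splatting, averaging of pixels that land on the same target location, and a valid-pixel mask for areas that receive no projection. Below is a minimal sketch of that idea in PyTorch. It is not the authors' released code: the function name `splat_forward_warp`, the `coords` input (target-view pixel coordinates assumed to come from the predicted depth and relative camera pose), and all internals are illustrative assumptions.

```python
import torch

def splat_forward_warp(src, coords, eps=1e-6):
    """Forward-warp src (B,C,H,W) into the target view via bilinear splatting.

    coords: (B,2,H,W) target-view (x, y) coordinates of each source pixel,
            e.g. obtained by projecting source pixels with the predicted
            depth and the relative camera pose (assumed given here).
    Returns the synthesized image and a boolean valid-pixel mask.
    """
    B, C, H, W = src.shape
    x, y = coords[:, 0], coords[:, 1]            # (B,H,W) target coordinates
    x0, y0 = torch.floor(x), torch.floor(y)      # lower integer corner

    out = src.new_zeros(B, C, H * W)             # accumulated splatted colors
    wsum = src.new_zeros(B, 1, H * W)            # accumulated bilinear weights
    flat = src.reshape(B, C, H * W)

    # Splat each source pixel onto its four neighboring target pixels.
    for xi in (x0, x0 + 1):
        for yi in (y0, y0 + 1):
            # Bilinear weight of this corner, zeroed outside the image.
            w = (1 - (x - xi).abs()) * (1 - (y - yi).abs())
            inside = (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
            w = (w * inside).reshape(B, 1, H * W)
            idx = (yi.clamp(0, H - 1) * W + xi.clamp(0, W - 1)).long()
            idx = idx.reshape(B, 1, H * W)
            # Scatter-add weighted colors and weights into the target grid.
            out.scatter_add_(2, idx.expand(B, C, -1), flat * w)
            wsum.scatter_add_(2, idx, w)

    valid = (wsum > eps).view(B, 1, H, W)        # pixels hit by at least one splat
    warped = (out / wsum.clamp(min=eps)).view(B, C, H, W)  # average overlaps
    return warped, valid
```

In a self-supervised pipeline such as the one the abstract outlines, `warped` would be compared against the target frame with a photometric loss, while `valid` would exclude pixels that received no splat (e.g. disoccluded or out-of-view regions) from that loss.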

Original language: English
Title of host publication: 2023 9th International Conference on Electrical Engineering, Control and Robotics, EECR 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 274-279
Number of pages: 6
ISBN (electronic): 9781665491204
DOI
Publication status: Published - 2023
Event: 9th International Conference on Electrical Engineering, Control and Robotics, EECR 2023 - Wuhan, China
Duration: 24 Feb 2023 → 26 Feb 2023

Publication series

Name: 2023 9th International Conference on Electrical Engineering, Control and Robotics, EECR 2023

Conference

Conference: 9th International Conference on Electrical Engineering, Control and Robotics, EECR 2023
Country/Territory: China
City: Wuhan
Period: 24/02/23 → 26/02/23
