UDSH: An Unsupervised Deep Image Stitching and De-Occlusion Method for Heavy Occlusion Scene

Kaixin Chen, Hao Li, Rundong Sun, Yi Yang*, Mengyin Fu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Image stitching in heavy occlusion scenarios faces the dual challenges of accurate alignment and occlusion removal. On one hand, occlusion destroys key texture and structural information in the image; on the other hand, it compromises the completeness of the stitched result. Existing stitching methods perform well when occlusion coverage is small, but they often fail under heavy occlusion. This failure is mainly due to three reasons: 1) they cannot identify occluded regions, 2) they cannot suppress interference from the occluded regions, and 3) they cannot remove the occluded content. To address these issues, we propose an unsupervised deep image stitching and de-occlusion method. First, to identify occluded regions, we design an Occlusion-Aware Feature Weighted module (OAFW) that explicitly distinguishes between occluded and non-occluded regions by learning the occlusion masks of the images. Second, to suppress interference from occlusion, we use the learned occlusion masks to filter out features from the occluded regions. To further reduce occlusion-induced errors, we design a Mask-Guided Dual-Granularity Alignment loss function (MGDGA) that calculates alignment errors only for non-occluded regions, effectively reducing occlusion error interference during network training. Finally, to fill the content gap in the occluded regions, we replace the pixels in the occluded areas with those from the aligned overlapping regions and incorporate a Progressive Content Inpainting module (PCI) to recover the content that occlusion removed from the non-overlapping regions, ultimately producing a complete and natural de-occluded stitched image. Experimental results show that our method reduces mean squared error by 17.45% compared to the state-of-the-art stitching method.
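The core of the mask-guided loss described above is computing alignment error only over non-occluded pixels. A minimal NumPy sketch of that idea follows; the function name, the single-granularity formulation, and the mask convention (1 = occluded) are illustrative assumptions, not the paper's actual MGDGA implementation:

```python
import numpy as np

def masked_alignment_loss(warped, target, occ_mask):
    """Mean squared alignment error over non-occluded pixels only.

    occ_mask: 1.0 where a pixel is occluded, 0.0 where it is visible.
    Illustrative sketch; the paper's MGDGA loss also works at a second
    granularity, which this single-level version omits.
    """
    valid = 1.0 - occ_mask                     # keep only non-occluded pixels
    err = (warped - target) ** 2 * valid       # zero out occluded contributions
    return err.sum() / np.maximum(valid.sum(), 1.0)  # normalize by valid count

# Toy example: two 4x4 "images" that agree everywhere except one pixel.
# Masking that pixel as occluded removes its error from the loss entirely.
a = np.ones((4, 4))
b = np.ones((4, 4))
b[0, 0] = 5.0                    # large disagreement at one pixel
mask = np.zeros((4, 4))
mask[0, 0] = 1.0                 # mark that pixel as occluded

print(masked_alignment_loss(a, b, mask))                 # occluded pixel ignored -> 0.0
print(masked_alignment_loss(a, b, np.zeros((4, 4))))     # no mask -> 16/16 = 1.0
```

Normalizing by the count of valid pixels (rather than the full image size) keeps the loss scale stable as the occluded fraction varies between training samples.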

Original language: English
Title of host publication: IROS 2025 - 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, Conference Proceedings
Editors: Christian Laugier, Alessandro Renzaglia, Nikolay Atanasov, Stan Birchfield, Grzegorz Cielniak, Leonardo De Mattos, Laura Fiorini, Philippe Giguere, Kenji Hashimoto, Javier Ibanez-Guzman, Tetsushi Kamegawa, Jinoh Lee, Giuseppe Loianno, Kevin Luck, Hisataka Maruyama, Philippe Martinet, Hadi Moradi, Urbano Nunes, Julien Pettre, Alberto Pretto, Tommaso Ranzani, Arne Ronnau, Silvia Rossi, Elliott Rouse, Fabio Ruggiero, Olivier Simonin, Danwei Wang, Ming Yang, Eiichi Yoshida, Huijing Zhao
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6149-6155
Number of pages: 7
ISBN (Electronic): 9798331543938
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2025 - Hangzhou, China
Duration: 19 Oct 2025 - 25 Oct 2025

Publication series

Name: IEEE International Conference on Intelligent Robots and Systems
ISSN (Print): 2153-0858
ISSN (Electronic): 2153-0866

Conference

Conference: 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2025
Country/Territory: China
City: Hangzhou
Period: 19/10/25 - 25/10/25
