Abstract
Large field-of-view images are increasingly used in many applications, and image stitching can compensate for the limited field of view imposed by hardware design. However, previous stitching methods struggle in challenging environments. In this paper, we propose a method that combines the powerful feature extraction of the SuperPoint algorithm and the accurate feature matching of the LightGlue algorithm with the image fusion algorithm of Unsupervised Deep Image Stitching (UDIS). Our method effectively alleviates the distorted linear structures and low resolution seen in UDIS stitching results. Building on this, we address the shortcomings of the UDIS fusion algorithm: to reduce the stitching fractures UDIS produces in some complex scenes, we optimize its loss function, replacing the first-order differences in the horizontal and vertical directions with a second-order Laplacian operator to emphasize the continuity of structural edges during training. Combining these improvements yields the Super Unsupervised Deep Image Stitching (SuperUDIS) algorithm. SuperUDIS outperforms UDIS in both qualitative and quantitative evaluations, with PSNR higher by 0.5 dB on average and SSIM higher by 0.02 on average. Moreover, the proposed method is more robust in complex environments with large color differences or multiple linear structures.
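As a sketch of how the front end of such a pipeline fits together, the snippet below matches two overlapping images with SuperPoint and LightGlue via the reference implementation at github.com/cvg/LightGlue; the image paths and keypoint budget are illustrative, and the resulting correspondences would feed the alignment/warp stage of a UDIS-style pipeline. This is an assumption about the wiring, not the paper's exact code.

```python
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"

# SuperPoint extracts keypoints and descriptors; LightGlue matches them.
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features="superpoint").eval().to(device)

image0 = load_image("left.jpg").to(device)   # illustrative paths
image1 = load_image("right.jpg").to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]

matches = matches01["matches"]               # (K, 2) index pairs
pts0 = feats0["keypoints"][matches[:, 0]]    # matched coordinates in image 0
pts1 = feats1["keypoints"][matches[:, 1]]    # matched coordinates in image 1
```

The loss-function change described above, a second-order Laplacian response in place of first-order horizontal/vertical differences, can be sketched as below. The abstract does not give the exact formulation, so the function name, the L1 comparison, and the depthwise kernel are our assumptions; it is a minimal illustration of penalizing broken structural edges, not SuperUDIS's actual loss.

```python
import torch
import torch.nn.functional as F

# 4-neighbour Laplacian kernel (second-order differential operator).
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def laplacian_edge_loss(warped, target):
    """L1 distance between Laplacian responses of the warped and target
    images (shape N,C,H,W), emphasising edge continuity rather than raw
    horizontal/vertical intensity differences. Hypothetical helper."""
    c = warped.shape[1]
    kernel = LAPLACIAN.to(warped).repeat(c, 1, 1, 1)  # one kernel per channel
    lap_w = F.conv2d(warped, kernel, padding=1, groups=c)
    lap_t = F.conv2d(target, kernel, padding=1, groups=c)
    return F.l1_loss(lap_w, lap_t)

# Assumed usage: added to the existing reconstruction loss with some weight.
# total_loss = photometric_loss + lam * laplacian_edge_loss(warped, target)
```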
| Original language | English |
| --- | --- |
| Article number | 5352 |
| Journal | Sensors |
| Volume | 24 |
| Issue number | 16 |
| DOIs | |
| Publication status | Published - Aug 2024 |
Keywords
- chroma balance
- deep learning
- image stitching
- unsupervised stitching