Iterative Network for Disparity Prediction with Infrared and Visible Light Images Based on Common Features

Ziang Zhang, Li Li*, Weiqi Jin, Zanxi Qu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, the range of applications utilizing multiband imaging has expanded significantly. However, traditional systems struggle to exploit the spectral complementarity of multichannel heterogeneous images and to obtain accurate depth predictions from them. In this study, we investigate CFNet, an iterative prediction network for disparity prediction with infrared and visible light images based on common features. CFNet consists of several components: a common feature extraction subnetwork, a context subnetwork, a multimodal information acquisition subnetwork, and a cascaded convolutional gated recurrent subnetwork. It leverages the advantages of dual-band (infrared and visible light) imaging, considering semantic information, geometric structure, and local matching details within the images, to accurately predict the disparity between heterogeneous image pairs. Compared with other publicly available networks, CFNet demonstrates superior performance on recognized evaluation metrics and in visual comparisons, offering an effective technical approach for practical heterogeneous-image disparity prediction.
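The abstract's "cascaded convolutional gated recurrent subnetwork" refers to the general idea of refining a disparity map over several recurrent iterations. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that idea, in which a convolutional GRU repeatedly updates a disparity estimate from context and matching features. All module names, channel sizes, and iteration counts here are illustrative assumptions, not details taken from CFNet.

import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell operating on 2D feature maps."""
    def __init__(self, hidden_ch: int, input_ch: int):
        super().__init__()
        self.convz = nn.Conv2d(hidden_ch + input_ch, hidden_ch, 3, padding=1)
        self.convr = nn.Conv2d(hidden_ch + input_ch, hidden_ch, 3, padding=1)
        self.convq = nn.Conv2d(hidden_ch + input_ch, hidden_ch, 3, padding=1)

    def forward(self, h, x):
        hx = torch.cat([h, x], dim=1)
        z = torch.sigmoid(self.convz(hx))          # update gate
        r = torch.sigmoid(self.convr(hx))          # reset gate
        q = torch.tanh(self.convq(torch.cat([r * h, x], dim=1)))
        return (1 - z) * h + z * q

class IterativeDisparityHead(nn.Module):
    """Runs N GRU updates, each predicting a residual disparity (illustrative)."""
    def __init__(self, hidden_ch=64, input_ch=96):
        super().__init__()
        self.gru = ConvGRUCell(hidden_ch, input_ch)
        self.head = nn.Sequential(
            nn.Conv2d(hidden_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))        # residual disparity

    def forward(self, hidden, context, matching_feat, disp, iters=8):
        preds = []
        for _ in range(iters):
            x = torch.cat([context, matching_feat, disp], dim=1)
            hidden = self.gru(hidden, x)
            disp = disp + self.head(hidden)        # refine the current estimate
            preds.append(disp)
        return preds                               # predictions from all iterations

# Toy usage with random tensors standing in for extracted features.
b, h, w = 1, 64, 128
head = IterativeDisparityHead()
hidden = torch.zeros(b, 64, h, w)                  # recurrent state
context = torch.randn(b, 64, h, w)                 # stand-in for context features
matching = torch.randn(b, 31, h, w)                # stand-in for matching cues
disp = torch.zeros(b, 1, h, w)                     # initial disparity
out = head(hidden, context, matching, disp, iters=4)
print(out[-1].shape)                               # torch.Size([1, 1, 64, 128])

In practice, such recurrent refinement schemes supervise the disparity prediction at every iteration so that early updates make coarse corrections and later ones recover fine detail; how CFNet itself combines its subnetworks is described in the full paper.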

Original language: English
Article number: 196
Journal: Sensors
Volume: 24
Issue number: 1
DOIs
Publication status: Published - Jan 2024

Keywords

  • binocular stereo vision
  • common features
  • disparity prediction
  • multiband imaging
