Abstract
Polarization image fusion is a crucial component of polarization imaging applications. Most existing polarization fusion algorithms concentrate on fusing the intensity and the degree of linear polarization (DoLP). The information encoded in the angle of linear polarization (AoLP), such as surface orientation and illumination, is not exploited in existing fusion frameworks because of its sensitivity to noise, the π-ambiguity, and the diffuse/specular ambiguity. To address this problem, we adopt a new polarization mapping paradigm as an alternative to improve feature utilization and information interpretability. A learning-based polarization image fusion network is proposed to learn the latent features and reconstruct intuitively understandable images. Four public polarization datasets are used in the experiments. The proposed method fuses the linear polarization information effectively while suppressing the noise and distortion introduced by DoLP and AoLP. The evaluation and analysis show that the fused images produced by the proposed method outperform those of state-of-the-art methods in target surface orientation representation, low-illumination object recognition, and texture enhancement.
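Since the abstract hinges on how DoLP and AoLP are defined, the following is a minimal NumPy sketch (not the paper's code; function names are illustrative assumptions) of the standard Stokes-parameter computation from a four-channel polarization capture at 0°, 45°, 90°, and 135°. The half-angle in the AoLP formula is the source of the π-ambiguity mentioned above.

```python
import numpy as np

def stokes_from_polarizer_images(i0, i45, i90, i135):
    """Linear Stokes parameters from intensity images captured behind
    polarizers oriented at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-8):
    """Degree and angle of linear polarization.

    DoLP = sqrt(S1^2 + S2^2) / S0   in [0, 1]
    AoLP = 0.5 * arctan2(S2, S1)    in (-pi/2, pi/2]

    The half-angle makes AoLP periodic with period pi (the
    "pi-ambiguity"): theta and theta + pi describe the same state,
    which is one reason raw AoLP is hard to fuse directly.
    """
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # eps avoids division by zero
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

# Example usage on random data standing in for the four channels:
i0, i45, i90, i135 = np.random.rand(4, 128, 128)
dolp, aolp = dolp_aolp(*stokes_from_polarizer_images(i0, i45, i90, i135))
```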
Original language | English |
---|---|
Article number | 109969 |
Journal | Optics and Laser Technology |
Volume | 168 |
DOIs | |
Publication status | Published - Jan 2024 |
Keywords
- Angle of linear polarization
- Deep learning
- Image fusion
- Polarization image