Abstract
The visualization of synthetic aperture radar (SAR) images involves mapping high dynamic range (HDR) amplitude values to gray levels for lower dynamic range (LDR) display devices. This dynamic range compression determines the visibility of details in the displayed result and therefore plays a critical role in remote sensing applications. Existing methods suffer from poor adaptability, loss of detail, and an imbalance between contrast improvement and noise suppression. To obtain images suitable for human observation and subsequent interpretation, we introduce a novel self-adaptive SAR image dynamic range compression method based on deep learning. Its design objective is to present the maximal amount of information content in the displayed image and to resolve the contradiction between contrast and noise. To this end, we propose a decomposition-fusion framework. The input SAR image is rescaled to a fixed size and fed into a bilateral feature enhancement module that remaps high- and low-frequency features to achieve noise suppression and contrast enhancement. A feature fusion module then integrates and optimizes the bilateral features to produce a more precise reconstruction. Visual and quantitative experiments on synthesized and real-world SAR images show that the proposed method notably outperforms several statistical methods in visualization quality. It adapts well and improves the contrast of SAR images for interpretation.
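To make the dynamic range compression problem concrete, the sketch below shows the kind of statistical baseline the abstract compares against: HDR SAR amplitudes are log-scaled, clipped at percentile thresholds, and linearly mapped to 8-bit gray levels. This is a generic illustration, not the paper's deep learning method; the function name, percentile values, and synthetic data are assumptions for demonstration.

```python
import numpy as np

def compress_sar_amplitude(amplitude, low_pct=1.0, high_pct=99.0):
    """Map HDR SAR amplitude to 8-bit gray levels via log scaling and
    percentile clipping -- a common statistical baseline, not the
    paper's learned method."""
    log_amp = np.log1p(amplitude.astype(np.float64))
    lo, hi = np.percentile(log_amp, [low_pct, high_pct])
    clipped = np.clip(log_amp, lo, hi)
    return np.round(255.0 * (clipped - lo) / (hi - lo)).astype(np.uint8)

rng = np.random.default_rng(0)
# Synthetic speckle-like amplitudes with a few bright scatterers
# to mimic SAR's high dynamic range (illustrative data only).
amp = rng.rayleigh(scale=50.0, size=(64, 64))
amp += 1e4 * (rng.random((64, 64)) < 0.01)
gray = compress_sar_amplitude(amp)
```

Such fixed statistical mappings use the same percentiles regardless of scene content, which is the adaptability limitation the proposed learned decomposition-fusion framework targets.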
| Original language | English |
|---|---|
| Article number | 2338 |
| Journal | Remote Sensing |
| Volume | 14 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 1 May 2022 |
Keywords
- deep learning
- dynamic range compression
- synthetic aperture radar (SAR)
- visualization