TY - JOUR
T1 - Deep Learning in Remote Sensing Image Fusion
T2 - Methods, protocols, data, and future perspectives
AU - Vivone, Gemine
AU - Deng, Liang-Jian
AU - Deng, Shangqi
AU - Hong, Danfeng
AU - Jiang, Menghui
AU - Li, Chenyu
AU - Li, Wei
AU - Shen, Huanfeng
AU - Wu, Xiao
AU - Xiao, Jin-Liang
AU - Yao, Jing
AU - Zhang, Mengmeng
AU - Chanussot, Jocelyn
AU - García, Salvador
AU - Plaza, Antonio
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025
Y1 - 2025
N2 - Image fusion can be conducted at different levels, with pixel-level image fusion involving the direct combination of original information from source images. The objective of methods in this category is to generate a fused image that enhances both visual perception and subsequent processing tasks. This survey article draws upon research findings in pixel-level image fusion for remote sensing, outlining the primary research directions, such as image sharpening, multimodal image fusion, and spatiotemporal image fusion. For each area, state-of-the-art deep learning (DL) solutions are reviewed in depth. Furthermore, this article discusses open issues and potential future directions. It also examines common downstream image fusion tasks to underscore how they can benefit from image fusion techniques to achieve improved performance. This article aims to extend beyond a conventional survey by not only reviewing existing methodologies but also providing practical insights, such as assessment protocols, available datasets for training and testing DL models, and guidelines for DL remote sensing image fusion. This article is geared toward students and professionals who want to approach pixel-level image fusion in remote sensing, offering valuable cues and tools for addressing specific challenges. The authors hope this work will help reduce barriers to entry for interested scientists in adjacent research fields and support the growth of a new generation of image fusion researchers.
AB - Image fusion can be conducted at different levels, with pixel-level image fusion involving the direct combination of original information from source images. The objective of methods in this category is to generate a fused image that enhances both visual perception and subsequent processing tasks. This survey article draws upon research findings in pixel-level image fusion for remote sensing, outlining the primary research directions, such as image sharpening, multimodal image fusion, and spatiotemporal image fusion. For each area, state-of-the-art deep learning (DL) solutions are reviewed in depth. Furthermore, this article discusses open issues and potential future directions. It also examines common downstream image fusion tasks to underscore how they can benefit from image fusion techniques to achieve improved performance. This article aims to extend beyond a conventional survey by not only reviewing existing methodologies but also providing practical insights, such as assessment protocols, available datasets for training and testing DL models, and guidelines for DL remote sensing image fusion. This article is geared toward students and professionals who want to approach pixel-level image fusion in remote sensing, offering valuable cues and tools for addressing specific challenges. The authors hope this work will help reduce barriers to entry for interested scientists in adjacent research fields and support the growth of a new generation of image fusion researchers.
UR - http://www.scopus.com/inward/record.url?scp=105003026737&partnerID=8YFLogxK
U2 - 10.1109/MGRS.2024.3495516
DO - 10.1109/MGRS.2024.3495516
M3 - Article
AN - SCOPUS:105003026737
SN - 2473-2397
VL - 13
SP - 269
EP - 310
JO - IEEE Geoscience and Remote Sensing Magazine
JF - IEEE Geoscience and Remote Sensing Magazine
IS - 1
ER -