TY - GEN
T1 - Multimodal-Temporal Fusion
T2 - 39th IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2019
AU - Liu, Xun
AU - Deng, Chenwei
AU - Zhao, Baojun
AU - Chanussot, Jocelyn
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
AB - This paper aims to tackle a general but interesting cross-modality problem in the remote sensing community: can multimodal images help to generate synthetic images in a time series and thereby improve temporal resolution? To this end, we explore multimodal-temporal fusion, in which we leverage the availability of additional cross-modality images to simulate the missing images in a time series. We propose a multimodal-temporal fusion framework and focus on two kinds of information for the simulation: inter-modal cross-modality information and intra-modal temporal information. To exploit the cross-modality information, we use available paired images and learn a mapping between images of different modalities with a deep neural network. To account for the temporal dependency among time-series images, we formulate a temporal constraint in the learning that encourages temporally consistent results. Experiments are conducted on two cross-modality image simulation applications (SAR to visible and visible to SWIR), and both visual and quantitative results demonstrate that the proposed model can successfully simulate missing images from cross-modality data.
KW - Cross-modality Image Translation
KW - Deep Neural Networks
KW - Image Time Series
KW - Multimodal-Temporal Fusion
KW - Temporal Resolution
UR - http://www.scopus.com/inward/record.url?scp=85077690431&partnerID=8YFLogxK
U2 - 10.1109/IGARSS.2019.8898453
DO - 10.1109/IGARSS.2019.8898453
M3 - Conference contribution
AN - SCOPUS:85077690431
T3 - International Geoscience and Remote Sensing Symposium (IGARSS)
SP - 10083
EP - 10086
BT - 2019 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 28 July 2019 through 2 August 2019
ER -
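The abstract above describes learning a cross-modality mapping with a deep neural network, regularized by a temporal constraint, but this record carries no implementation details. Below is a minimal, hypothetical PyTorch sketch of such an objective: the toy network SimpleTranslator, the use of L1 losses, and the weight lam are all assumptions for illustration, not the paper's actual design.

# Hypothetical sketch of the multimodal-temporal fusion objective described
# in the abstract: a generator G maps a source-modality image (e.g. SAR) to
# the target modality (e.g. visible), trained with (1) a cross-modality
# reconstruction loss on available paired images and (2) a temporal
# consistency constraint on consecutive time steps. All names and the L1
# losses are assumptions; the paper's exact formulation may differ.
import torch
import torch.nn as nn

class SimpleTranslator(nn.Module):
    """Toy encoder-decoder standing in for the paper's deep network."""
    def __init__(self, in_ch=1, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def fusion_loss(G, src_t, src_t1, tgt_t, tgt_t1, lam=0.1):
    """Paired reconstruction loss plus a temporal-consistency term.

    src_t, src_t1: source-modality images at consecutive dates t, t+1
    tgt_t, tgt_t1: corresponding target-modality images
    lam: weight of the temporal constraint (assumed hyperparameter)
    """
    pred_t, pred_t1 = G(src_t), G(src_t1)
    # Cross-modality term: match each simulated image to its paired target.
    recon = (nn.functional.l1_loss(pred_t, tgt_t)
             + nn.functional.l1_loss(pred_t1, tgt_t1))
    # Temporal term: make the simulated change between dates match the real
    # change, so the generated series is temporally consistent.
    temporal = nn.functional.l1_loss(pred_t1 - pred_t, tgt_t1 - tgt_t)
    return recon + lam * temporal

# Usage on random tensors (batch of 4, 64x64 patches):
G = SimpleTranslator()
src_t, src_t1 = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
tgt_t, tgt_t1 = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
loss = fusion_loss(G, src_t, src_t1, tgt_t, tgt_t1)
loss.backward()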