Abstract
As more remotely sensed data sources become available, efficiently exploiting useful information from multisource data for Earth observation is an interesting but challenging problem. In this book chapter, classification fusion of hyperspectral imagery (HSI) and light detection and ranging (LiDAR) data is investigated. Specifically, recent deep-learning-based joint classification methods are discussed, such as the two-branch convolutional neural network (CNN), the hierarchical random walk network (HRWN), and residual network-based probability reconstruction fusion (RNPRF). In the two-branch CNN, a two-tunnel CNN is employed to extract spectral-spatial features from the HSI, and a CNN with a cascade block is designed for feature extraction from the LiDAR data; in the feature fusion stage, the spectral and spatial features of the HSI are integrated by the dual-tunnel branch and then combined with the LiDAR features extracted by the cascade network for classification. In the HRWN, a hierarchical random walk layer is designed to incorporate spatial constraints and local seed guidance into the deeper layers of the CNN. In the RNPRF, the deep features of each source are fed into a softmax classifier to obtain class probability matrices, which are then fused by weighted summation to produce the final label assignment. Experimental results on the widely used Houston scene, which provides both HSI and LiDAR data, demonstrate the effectiveness of these deep-learning-based methods.
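To make the RNPRF decision-fusion step concrete, the following is a minimal sketch of weighted probability fusion, assuming two already-trained source-specific branches whose per-pixel class logits are available. The function names, array shapes, and fusion weights are illustrative assumptions, not the chapter's exact implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_probabilities(hsi_logits, lidar_logits, w_hsi=0.7, w_lidar=0.3):
    """RNPRF-style decision fusion (sketch).

    hsi_logits, lidar_logits: arrays of shape (n_pixels, n_classes) produced
    by each source-specific branch. The weights are hypothetical values; the
    abstract only states that the probability matrices are fused by weighted
    summation.
    """
    p_hsi = softmax(hsi_logits)        # probability matrix from the HSI branch
    p_lidar = softmax(lidar_logits)    # probability matrix from the LiDAR branch
    p_fused = w_hsi * p_hsi + w_lidar * p_lidar   # weighted summation
    return p_fused.argmax(axis=1)                 # final label assignment

# Toy usage: 4 pixels, 3 classes, random logits standing in for deep features.
rng = np.random.default_rng(0)
labels = fuse_probabilities(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))
print(labels)  # predicted class index for each pixel
```

The same pattern extends to more than two sources by adding further weighted terms, with the weights chosen (e.g. by validation) to reflect the reliability of each modality.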
Original language | English |
---|---|
Title of host publication | Advances in Hyperspectral Image Processing Techniques |
Publisher | Wiley-Blackwell |
Pages | 281-292 |
Number of pages | 12 |
ISBN (Print) | 9781119687788 |
DOIs | |
Publication status | Published - 11 Nov 2022 |
Keywords
- Data fusion
- Deep learning
- Feature extraction
- Hyperspectral imagery
- LiDAR data