Scene classification using local and global features with collaborative representation fusion

Jinyi Zou, Wei Li*, Chen Chen, Qian Du

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

166 Citations (Scopus)

Abstract

This paper presents an effective scene classification approach based on collaborative representation fusion of local and global spatial features. First, a visual word codebook is constructed by partitioning an image into dense regions, followed by k-means clustering. Locality-constrained linear coding is applied to the dense regions via the visual codebook, and a spatial pyramid matching strategy is then used to aggregate the local features over the entire image. For global feature extraction, multiscale completed local binary patterns (MS-CLBP) are computed on both the original grayscale image and its Gabor feature images. Finally, kernel collaborative representation-based classification (KCRC) is applied to the extracted local and global features, and the class label of the test image is assigned according to the minimal approximation residual after fusion. The proposed method is evaluated on four commonly used datasets: two remote sensing image datasets, an indoor and outdoor scene dataset, and a sports action dataset. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art methods.
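As a rough illustration of the fusion step described above, the sketch below applies a standard kernel collaborative representation classifier to precomputed kernel matrices for the local (LLC + SPM) and global (MS-CLBP) feature streams, fuses the per-class residuals with a weighted sum, and assigns the class with the minimal fused residual. The function names, the weight w, and the regularization parameter lam are illustrative assumptions, not the paper's exact KCRC formulation or fusion rule.

```python
import numpy as np

def kcrc_residuals(K_train, k_test, k_tt, labels, lam=1e-3):
    """Kernel collaborative representation (standard ridge form, assumed here):
    solve (K + lam*I) alpha = k_test, then compute per-class reconstruction
    residuals in the kernel-induced feature space.

    K_train : (n, n) kernel matrix of the training samples
    k_test  : (n,)   kernel vector between training samples and the test sample
    k_tt    : scalar kernel value k(test, test)
    labels  : (n,)   class labels of the training samples
    """
    labels = np.asarray(labels)
    n = K_train.shape[0]
    alpha = np.linalg.solve(K_train + lam * np.eye(n), k_test)
    residuals = {}
    for c in np.unique(labels):
        a_c = np.zeros(n)
        idx = labels == c
        a_c[idx] = alpha[idx]          # keep only this class's coefficients
        # ||phi(y) - Phi a_c||^2 expanded in terms of kernel values
        residuals[c] = k_tt - 2.0 * a_c @ k_test + a_c @ K_train @ a_c
    return residuals

def fuse_and_classify(res_local, res_global, w=0.5):
    """Fuse the per-class residuals of the two feature streams with a
    weighted sum (the weight is an assumption) and pick the minimum."""
    fused = {c: w * res_local[c] + (1.0 - w) * res_global[c]
             for c in res_local}
    return min(fused, key=fused.get)

# Illustrative usage: with kernels computed separately for each stream,
# e.g. an RBF kernel on LLC+SPM histograms and on MS-CLBP features:
# label = fuse_and_classify(kcrc_residuals(K_loc, k_loc, ktt_loc, y_train),
#                           kcrc_residuals(K_glo, k_glo, ktt_glo, y_train))
```

Because collaborative representation uses an l2 regularizer, the coding vector has the closed-form ridge solution used above, which is the usual efficiency argument for CRC/KCRC over sparse-coding-based classifiers.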

Original language: English
Pages (from-to): 209-226
Number of pages: 18
Journal: Information Sciences
Volume: 348
DOIs
Publication status: Published - 20 Jun 2016
Externally published: Yes

Keywords

  • Collaborative representation-based classification
  • Locality-constrained linear coding
  • Scene classification
  • Spatial pyramid matching
