Large-scale image retrieval based on a fusion of gravity aware orientation information

Yun Chao Zhang, Jing Chen*, Yong Tian Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Large-scale image retrieval has focused on effective feature coding and efficient searching. The vector of locally aggregated descriptors (VLAD) achieves strong retrieval performance owing to its exact coding method and relatively low dimensionality. However, the orientation information of features is ignored in the coding step, and the feature dimension is not well suited to large-scale image retrieval. In this paper, a gravity-aware oriented coding and oriented product quantization method built on the traditional VLAD framework is proposed, which is both efficient and effective. In the feature coding step, the gravity sensors built into mobile devices provide orientation information that is fused into the coding. In the vector indexing step, oriented product quantization, which combines orientation bins with product quantization bins, is used for approximate nearest neighbor search. The method can be adapted to any popular retrieval framework, including bag-of-words and its variants. Experimental results on a collected GPS- and gravity-tagged Beijing landmark dataset, the Holidays dataset, and the SUN397 dataset demonstrate that the approach makes full use of the similarity of matching pairs in both descriptor space and geometric space, and substantially improves mobile visual search accuracy compared with VLAD and CVLAD.
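To illustrate the coding step described in the abstract, the following is a minimal sketch (not the authors' implementation) of how gravity-aware oriented VLAD coding might look: local descriptors are assigned to visual words as in standard VLAD, but residuals are additionally accumulated per orientation bin, where the keypoint orientation is measured against the gravity direction reported by the device sensor. The function name `oriented_vlad`, the number of orientation bins, and the normalisation choices are assumptions made for illustration.

```python
import numpy as np

def oriented_vlad(descriptors, orientations, codebook, n_orient_bins=4):
    """Sketch of gravity-aware oriented VLAD coding (illustrative, not the paper's code).

    descriptors  : (N, D) local descriptors (e.g. SIFT) from one image
    orientations : (N,) keypoint orientations in radians, measured against
                   the gravity direction reported by the device sensor
    codebook     : (K, D) visual-word centroids learned offline
    Returns a flattened, normalised (n_orient_bins * K * D,) vector.
    """
    K, D = codebook.shape
    vlad = np.zeros((n_orient_bins, K, D), dtype=np.float32)

    # Nearest visual word for each descriptor, as in standard VLAD
    dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = dists.argmin(axis=1)

    # Quantise the gravity-aligned orientation into a bin
    bins = np.floor((orientations % (2 * np.pi)) / (2 * np.pi) * n_orient_bins).astype(int)
    bins = np.clip(bins, 0, n_orient_bins - 1)

    # Accumulate residuals per (orientation bin, visual word)
    for d, w, b in zip(descriptors, words, bins):
        vlad[b, w] += d - codebook[w]

    # Signed square root and L2 normalisation, as is common for VLAD
    v = vlad.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

For the indexing step, one plausible reading of "oriented product quantization" is that a standard product quantizer encodes the aggregated vector while the gravity-derived orientation bin is stored alongside the PQ code, so that approximate nearest neighbor search can be restricted to candidates with a compatible orientation. The sub-codebook layout and code format below are assumptions, not the paper's specification.

```python
def oriented_pq_encode(vector, orient_bin, pq_codebooks):
    """Encode a vector as an orientation bin plus a product-quantization code.

    vector       : (D,) aggregated descriptor to index
    orient_bin   : dominant gravity-derived orientation bin of the image
    pq_codebooks : list of M numpy arrays, each of shape (256, D // M),
                   learned offline on training vectors (assumed available)
    """
    M = len(pq_codebooks)
    sub_dim = vector.shape[0] // M
    code = [orient_bin]
    for m, sub_codebook in enumerate(pq_codebooks):
        sub = vector[m * sub_dim:(m + 1) * sub_dim]
        # Nearest sub-centroid index for this sub-vector
        code.append(int(((sub_codebook - sub) ** 2).sum(axis=1).argmin()))
    return code
```

At query time, database entries whose stored orientation bin is incompatible with the query's can be skipped before distance computation, which is one way such a combination of orientation bins and product quantization bins could speed up the search.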

Original language: English
Pages (from-to): 1501-1511
Number of pages: 11
Journal: Zidonghua Xuebao/Acta Automatica Sinica
Volume: 42
Issue number: 10
DOIs
Publication status: Published - 1 Oct 2016

Keywords

  • Gravity information
  • Large scale image retrieval
  • Oriented coding
  • Oriented product quantization
