Abstract
Large-scale image retrieval research has focused on effective feature coding and efficient searching. The vector of locally aggregated descriptors (VLAD) achieves strong retrieval performance owing to its exact coding method and relatively low dimensionality. However, the orientation information of local features is ignored during coding, and the resulting feature dimension is not well suited to large-scale image retrieval. In this paper, we propose an efficient and effective gravity-aware oriented coding and oriented product quantization method based on the traditional VLAD framework. In the feature coding step, the gravity sensor built into a mobile device provides the orientation information used for coding. In the vector indexing step, oriented product quantization, which combines orientation bins with product quantization bins, performs approximate nearest neighbor search. The method can be adapted to any popular retrieval framework, including bag-of-words and its variants. Experimental results on a collected GPS- and gravity-tagged Beijing landmark dataset, the Holidays dataset, and the SUN397 dataset demonstrate that the approach makes full use of the similarity of matching pairs in descriptor space as well as in geometric space, and substantially improves mobile visual search accuracy compared with VLAD and CVLAD.
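The abstract only outlines the coding step, so the following minimal Python sketch shows one plausible reading of gravity-aware oriented coding: VLAD residuals aggregated separately per gravity-aligned orientation bin, in the spirit of CVLAD. The function name `oriented_vlad`, the bin count, and the single in-plane roll angle taken from the gravity sensor are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def oriented_vlad(desc, angles, gravity_angle, codebook, n_bins=4):
    """VLAD residuals aggregated per gravity-aligned orientation bin.

    Hypothetical sketch: desc is (N, D) local descriptors, angles is
    the (N,) keypoint orientations in radians, gravity_angle is the
    in-plane camera roll reported by the device's gravity sensor, and
    codebook is the (K, D) visual vocabulary.
    """
    K, D = codebook.shape
    # Cancel camera roll so orientations are measured relative to gravity.
    aligned = (angles - gravity_angle) % (2 * np.pi)
    obin = (aligned * n_bins / (2 * np.pi)).astype(int) % n_bins
    # Hard-assign each descriptor to its nearest visual word.
    word = ((desc[:, None] - codebook[None]) ** 2).sum(-1).argmin(1)
    v = np.zeros((n_bins, K, D))
    for x, w, b in zip(desc, word, obin):
        v[b, w] += x - codebook[w]          # per-bin residual aggregation
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))     # power normalization
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

Binning before aggregation means two descriptors only reinforce each other when both their appearance and their gravity-aligned orientation agree, which is the geometric-space similarity the abstract refers to.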
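For the indexing step, the sketch below pairs an orientation bin with standard product quantization codes, so the asymmetric-distance scan only touches database items filed under the query's orientation bin. The index layout and the names `pq_encode` and `oriented_search` are again assumptions for illustration.

```python
import numpy as np

def pq_encode(x, sub_cb):
    """Product-quantize x: one code per subvector.

    sub_cb is a list of M sub-codebooks, each of shape (256, D/M);
    D must be divisible by M. Illustrative, not the authors' scheme.
    """
    subs = np.split(x, len(sub_cb))
    return np.array([((cb - s) ** 2).sum(1).argmin()
                     for s, cb in zip(subs, sub_cb)], dtype=np.int32)

def oriented_search(query, q_bin, index, sub_cb, topk=10):
    """Asymmetric-distance search restricted to one orientation bin.

    index maps an orientation bin to (ids, codes), where codes is an
    (n, M) matrix of PQ codes for the items stored under that bin.
    """
    ids, codes = index[q_bin]
    subs = np.split(query, len(sub_cb))
    # Lookup tables: distance from each query subvector to every
    # centroid of the corresponding sub-codebook.
    tables = [((cb - s) ** 2).sum(1) for s, cb in zip(subs, sub_cb)]
    dist = sum(t[codes[:, m]] for m, t in enumerate(tables))
    order = np.argsort(dist)[:topk]
    return [ids[i] for i in order], dist[order]
```

Restricting the scan to one bin shrinks the candidate list by roughly the number of orientation bins, while the PQ codes keep per-item memory small; combining the two is what keeps the search efficient at large scale.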
| Original language | English |
|---|---|
| Pages (from-to) | 1501-1511 |
| Number of pages | 11 |
| Journal | Zidonghua Xuebao/Acta Automatica Sinica |
| Volume | 42 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 1 Oct 2016 |
Keywords
- Gravity information
- Large scale image retrieval
- Oriented coding
- Oriented product quantization