TY - JOUR
T1 - Minimizing Reconstruction Bias Hashing via Joint Projection Learning and Quantization
AU - Duan, Ling Yu
AU - Wu, Yuwei
AU - Huang, Yicheng
AU - Wang, Zhe
AU - Yuan, Junsong
AU - Gao, Wen
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2018/6
Y1 - 2018/6
N2 - Hashing, a widely studied solution to approximate nearest neighbor search, aims to map data points from the high-dimensional Euclidean space to the low-dimensional Hamming space while preserving the similarity between the original points. Because directly learning binary codes can be NP-hard due to the discrete constraints, a two-stage scheme, namely 'projection and quantization', has become a standard paradigm for learning similarity-preserving hash codes. However, most existing hashing methods treat these two stages separately and thus fail to investigate their complementary effects. In this paper, we systematically study the relationship between projection and quantization, and propose a novel minimal reconstruction bias hashing (MRH) method to learn compact binary codes, in which projection learning and quantization optimization are jointly performed. By introducing a lower bound analysis, we design an effective ternary search algorithm to solve the corresponding optimization problem. Furthermore, we provide insightful discussions on the proposed MRH approach, including a theoretical proof and a computational complexity analysis. Distinct from previous works, MRH can adaptively adjust the projection dimensionality to balance the information loss between projection and quantization. The proposed framework not only provides a unique perspective from which to view traditional hashing methods, but also motivates further research, e.g., guiding the design of loss functions in deep networks. Extensive experimental results show that the proposed MRH significantly outperforms a variety of state-of-the-art methods on eight widely used benchmarks.
AB - Hashing, a widely studied solution to approximate nearest neighbor search, aims to map data points from the high-dimensional Euclidean space to the low-dimensional Hamming space while preserving the similarity between the original points. Because directly learning binary codes can be NP-hard due to the discrete constraints, a two-stage scheme, namely 'projection and quantization', has become a standard paradigm for learning similarity-preserving hash codes. However, most existing hashing methods treat these two stages separately and thus fail to investigate their complementary effects. In this paper, we systematically study the relationship between projection and quantization, and propose a novel minimal reconstruction bias hashing (MRH) method to learn compact binary codes, in which projection learning and quantization optimization are jointly performed. By introducing a lower bound analysis, we design an effective ternary search algorithm to solve the corresponding optimization problem. Furthermore, we provide insightful discussions on the proposed MRH approach, including a theoretical proof and a computational complexity analysis. Distinct from previous works, MRH can adaptively adjust the projection dimensionality to balance the information loss between projection and quantization. The proposed framework not only provides a unique perspective from which to view traditional hashing methods, but also motivates further research, e.g., guiding the design of loss functions in deep networks. Extensive experimental results show that the proposed MRH significantly outperforms a variety of state-of-the-art methods on eight widely used benchmarks.
KW - Bias hashing
KW - image retrieval
KW - joint optimization
KW - quantization error
UR - https://www.scopus.com/pages/publications/85044333138
U2 - 10.1109/TIP.2018.2818008
DO - 10.1109/TIP.2018.2818008
M3 - Article
AN - SCOPUS:85044333138
SN - 1057-7149
VL - 27
SP - 3127
EP - 3141
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
IS - 6
ER -