Deep Cross-Modal Proxy Hashing

Rong Cheng Tu, Xian Ling Mao*, Rong Xin Tu, Binbin Bian, Chengfei Cai, Hongfa Wang, Wei Wei, Heyan Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

17 Citations (Scopus)

Abstract

Due to their high retrieval efficiency and low storage cost in cross-modal search tasks, cross-modal hashing methods have attracted considerable attention from researchers. For supervised cross-modal hashing methods, the key to further enhancing retrieval performance is making the learned hash codes sufficiently preserve the semantic information contained in the labels of data points. Hence, almost all supervised cross-modal hashing methods depend, fully or partly, on similarities between data points defined from label information to guide the learning of the hashing model. However, such defined similarities capture the label information of data points only partially and miss abundant semantic information, which hinders further improvement of retrieval performance. Thus, in this paper, different from previous works, we propose a novel cross-modal hashing method that does not define similarities between data points, called Deep Cross-modal Proxy Hashing (DCPH). Specifically, DCPH first trains a proxy hashing network to transform the category information of a dataset into semantically discriminative hash codes, called proxy hash codes. Each proxy hash code preserves the semantic information of its corresponding category well. Next, instead of defining similarities between data points to supervise the training of the modality-specific hashing networks, we propose a novel margin-dynamic-softmax loss that directly utilizes the proxy hash codes as supervisory information. Finally, by minimizing the margin-dynamic-softmax loss, the modality-specific hashing networks are trained to generate hash codes that simultaneously preserve cross-modal similarity and abundant semantic information. Extensive experiments on three benchmark datasets show that the proposed method outperforms state-of-the-art baselines on cross-modal retrieval tasks.
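The abstract does not give the exact form of the proxy hashing network or of the margin-dynamic-softmax loss, so the following is only a minimal PyTorch-style sketch of the two-stage idea: a proxy network maps each category to a code, and each item's code is pulled toward the proxy of its category through a margin-adjusted softmax over proxy similarities. All names (ProxyHashNet, proxy_softmax_loss) and hyperparameters (margin, scale) are illustrative assumptions, and the fixed margin here merely stands in for the paper's dynamic margin.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ProxyHashNet(nn.Module):
        # Hypothetical proxy hashing network: maps a one-hot (or multi-hot)
        # category vector to a continuous code in (-1, 1); sign() of the
        # output gives the binary proxy hash code for that category.
        def __init__(self, num_classes: int, code_len: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_classes, 512),
                nn.ReLU(),
                nn.Linear(512, code_len),
                nn.Tanh(),
            )

        def forward(self, labels: torch.Tensor) -> torch.Tensor:
            return self.net(labels)

    def proxy_softmax_loss(codes, proxies, targets, margin=0.2, scale=10.0):
        # Stand-in for the paper's margin-dynamic-softmax loss (exact form
        # not given in the abstract): a proxy-based softmax in which a fixed
        # margin is subtracted from the true-class similarity before the
        # cross-entropy, tightening the decision boundary.
        sim = F.normalize(codes) @ F.normalize(proxies).t()   # cosine sims
        sim = sim - margin * F.one_hot(targets, proxies.size(0))
        return F.cross_entropy(scale * sim, targets)

Under these assumptions, either modality-specific network (image or text) can be trained against the same frozen proxies, which is what ties the two modalities together without pairwise similarity labels:

    proxy_net = ProxyHashNet(num_classes=24, code_len=64)
    proxies = proxy_net(torch.eye(24))          # one proxy code per category
    img_codes = torch.tanh(torch.randn(8, 64))  # stand-in for an image network's output
    loss = proxy_softmax_loss(img_codes, proxies, torch.randint(0, 24, (8,)))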

Original language: English
Pages (from-to): 6798-6810
Number of pages: 13
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 35
Issue number: 7
DOI
Publication status: Published - 1 Jul 2023
