TY - GEN
T1 - A joint model for text and image semantic feature extraction
AU - Cao, Jiarun
AU - Wang, Chongwen
AU - Gao, Liming
N1 - Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/12/21
Y1 - 2018/12/21
N2 - Most current information retrieval relies on keywords appearing in the text or on statistical information derived from vocabulary counts. Additional semantic information, such as synonyms and polysemous words, can also be incorporated to improve the accuracy of similarity measurement and filtering. However, on today's web, besides the large number of new words generated every day, pictures, audio, video, and other media also appear constantly. Hand-crafted features therefore struggle to describe this newly emerging data, and low-dimensional feature abstractions have difficulty representing the overall semantics of text and images. In this paper, we propose a semantic feature extraction algorithm based on a deep network, which applies a local attention mechanism to the feature generation models for pictures and text. The retrieval of text and image information is converted into a vector similarity calculation, which improves retrieval speed and ensures the semantic relevance of the results. We train and test the text and image feature extraction models on a multi-year collection of news text and image data, and the results show that the deep feature model has clear advantages in semantic expression and feature extraction. In addition, incorporating the similarity calculation into the training process further improves retrieval accuracy.
AB - Most current information retrieval relies on keywords appearing in the text or on statistical information derived from vocabulary counts. Additional semantic information, such as synonyms and polysemous words, can also be incorporated to improve the accuracy of similarity measurement and filtering. However, on today's web, besides the large number of new words generated every day, pictures, audio, video, and other media also appear constantly. Hand-crafted features therefore struggle to describe this newly emerging data, and low-dimensional feature abstractions have difficulty representing the overall semantics of text and images. In this paper, we propose a semantic feature extraction algorithm based on a deep network, which applies a local attention mechanism to the feature generation models for pictures and text. The retrieval of text and image information is converted into a vector similarity calculation, which improves retrieval speed and ensures the semantic relevance of the results. We train and test the text and image feature extraction models on a multi-year collection of news text and image data, and the results show that the deep feature model has clear advantages in semantic expression and feature extraction. In addition, incorporating the similarity calculation into the training process further improves retrieval accuracy.
KW - Information retrieval
KW - Natural language processing
KW - Similarity Calculation
UR - http://www.scopus.com/inward/record.url?scp=85061894642&partnerID=8YFLogxK
U2 - 10.1145/3302425.3302437
DO - 10.1145/3302425.3302437
M3 - Conference contribution
AN - SCOPUS:85061894642
T3 - ACM International Conference Proceeding Series
BT - ACAI 2018 Conference Proceeding - 2018 International Conference on Algorithms, Computing and Artificial Intelligence
PB - Association for Computing Machinery
T2 - 2018 International Conference on Algorithms, Computing and Artificial Intelligence, ACAI 2018
Y2 - 21 December 2018 through 23 December 2018
ER -