Improving the Generalization of Visual Classification Models Across IoT Cameras via Cross-Modal Inference and Fusion

Qing Ling Guan, Yuze Zheng, Lei Meng*, Li Quan Dong, Qun Hao

* Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

The performance of visual classification models across Internet of Things (IoT) devices is usually limited by changes in local environments, resulting from the diverse appearances of the target objects and differences in lighting conditions and background scenes. To alleviate these problems, existing studies usually introduce multimodal information to guide the learning process of the visual classification models, making the models extract visual features from the discriminative image regions. In particular, cross-modal alignment between visual and textual features has been considered an effective approach to this task, as it learns a domain-consistent latent feature space for the visual and semantic features. However, this approach may suffer from the heterogeneity between modalities, such as mismatches in the distributions of the multimodal features and differences in the learned feature values. To alleviate this problem, this article first presents a comparative analysis of the functionality of various alignment strategies and their impact on improving visual classification. Subsequently, a cross-modal inference and fusion framework (termed CRIF) is proposed to align the heterogeneous features in both the feature distributions and the feature values. More importantly, CRIF includes a cross-modal information enrichment module to improve the final classification and learn the mappings from the visual to the semantic space. We conduct experiments on four benchmark datasets, i.e., the Vireo-Food172, NUS-WIDE, MSR-VTT, and ActivityNet Captions datasets. We report state-of-the-art results for basic classification tasks on the four datasets and conduct further experiments on feature alignment and fusion. The experimental results verify that CRIF can effectively improve the learning ability of visual classification models, and that it is a model-agnostic framework that consistently improves the performance of state-of-the-art visual classification models.
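The abstract distinguishes two alignment targets: matching the *distributions* of visual and textual features and matching their *values*. A minimal sketch of this distinction is given below, using moment matching (in the spirit of CORAL-style distribution alignment) and a paired mean-squared error for value alignment. All function names are illustrative; the actual CRIF losses are defined in the paper itself and may differ.

```python
import numpy as np

def alignment_losses(visual, textual):
    """Toy illustration of two alignment objectives.

    visual, textual: arrays of shape (batch, dim) holding paired
    features from the two modalities.
    Returns (distribution_loss, value_loss):
      - distribution_loss matches per-dimension means and variances
        (a simple moment-matching proxy for distribution alignment);
      - value_loss is the element-wise MSE between paired features
        (value alignment).
    """
    dist_loss = (np.mean((visual.mean(axis=0) - textual.mean(axis=0)) ** 2)
                 + np.mean((visual.var(axis=0) - textual.var(axis=0)) ** 2))
    value_loss = np.mean((visual - textual) ** 2)
    return dist_loss, value_loss

rng = np.random.default_rng(0)
v = rng.normal(size=(32, 8))             # visual features (batch x dim)
t = v + 0.1 * rng.normal(size=(32, 8))   # nearly aligned textual features
d, val = alignment_losses(v, t)
```

Note that two feature sets can have near-identical distributions while their paired values disagree, which is why the abstract treats the two objectives separately.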

Original language: English
Pages (from-to): 15835-15846
Number of pages: 12
Journal: IEEE Internet of Things Journal
Volume: 10
Issue number: 18
DOI
Publication status: Published - 15 Sep 2023

