Improving the Generalization of Visual Classification Models Across IoT Cameras via Cross-Modal Inference and Fusion

Qing Ling Guan, Yuze Zheng, Lei Meng*, Li Quan Dong, Qun Hao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)

Abstract

The performance of visual classification models across Internet of Things devices is usually limited by changes in local environments, resulting from the diverse appearances of the target objects and differences in lighting conditions and background scenes. To alleviate these problems, existing studies usually introduce multimodal information to guide the learning process of the visual classification models, so that the models extract visual features from the discriminative image regions. In particular, cross-modal alignment between visual and textual features has been considered an effective approach to this task, learning a domain-consistent latent feature space for the visual and semantic features. However, this approach may suffer from the heterogeneity between modalities, such as mismatches in the multimodal feature distributions and differences in the learned feature values. To alleviate this problem, this article first presents a comparative analysis of the functionality of various alignment strategies and their impact on improving visual classification. Subsequently, a cross-modal inference and fusion framework (termed CRIF) is proposed to align the heterogeneous features in both feature distributions and values. More importantly, CRIF includes a cross-modal information enrichment module to improve the final classification and learn the mappings from the visual to the semantic space. We conduct experiments on four benchmark data sets, i.e., the Vireo-Food172, NUS-WIDE, MSR-VTT, and ActivityNet Captions data sets. We report state-of-the-art results for basic classification tasks on the four data sets and conduct subsequent experiments on feature alignment and fusion. The experimental results verify that CRIF can effectively improve the learning ability of visual classification models, and that it is a model-agnostic framework that consistently improves the performance of state-of-the-art visual classification models.
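To illustrate the two kinds of alignment the abstract distinguishes, below is a minimal NumPy sketch: a distribution-level term that matches the first and second moments of the visual and textual feature sets, a value-level term that penalizes element-wise differences between paired features, and a simple convex-combination fusion. These are generic stand-ins chosen for illustration; the abstract does not specify CRIF's actual loss functions or fusion module, so none of the function names or formulas here should be read as the paper's method.

```python
import numpy as np

def distribution_alignment_loss(v, t):
    """Penalize gaps between the per-dimension means and variances of the
    visual (v) and textual (t) feature matrices, each of shape (N, D).
    A simple moment-matching stand-in for distribution-level alignment."""
    mean_gap = np.mean((v.mean(axis=0) - t.mean(axis=0)) ** 2)
    var_gap = np.mean((v.var(axis=0) - t.var(axis=0)) ** 2)
    return mean_gap + var_gap

def value_alignment_loss(v, t):
    """Element-wise MSE between paired visual and textual features,
    a stand-in for aligning the learned feature values themselves."""
    return np.mean((v - t) ** 2)

def fuse(v, t, alpha=0.5):
    """Fuse aligned features by convex combination (weight alpha on visual)."""
    return alpha * v + (1 - alpha) * t

# Toy features: textual features drawn with a shifted mean and larger
# variance, so both alignment losses are nonzero.
rng = np.random.default_rng(0)
visual = rng.normal(0.0, 1.0, size=(8, 16))
textual = rng.normal(0.5, 2.0, size=(8, 16))

dist_loss = distribution_alignment_loss(visual, textual)
val_loss = value_alignment_loss(visual, textual)
fused = fuse(visual, textual)
```

In a training loop, the two losses would be weighted and added to the classification objective, while `fuse` would feed the fused representation to the classifier head.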

Original language: English
Pages (from-to): 15835-15846
Number of pages: 12
Journal: IEEE Internet of Things Journal
Volume: 10
Issue number: 18
DOIs
Publication status: Published - 15 Sept 2023

Keywords

  • Feature alignment
  • heterogeneous domain
  • image classification
  • semantic inference
