Doubled coupling for image emotion distribution learning

Huiyan Wu, Yonggang Huang*, Guoshun Nan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Image emotion prediction has a great impact on a wide range of applications, such as social network analysis, advertising, and human–computer interaction. Recently, image emotion distribution learning (IEDL) has attracted increasing attention, as it holds the potential to tackle the challenging emotion ambiguity problem in image emotion prediction. Existing efforts focus mostly on emotion distribution learning under the assumption of independent and identically distributed samples. However, we observe that connections between objects in an image (e.g., a butterfly and a flower) and connections between different images (e.g., images taken in the same place) commonly exist in real-world datasets. Coupling information has proven highly helpful for many tasks and is also crucial for image emotion analysis. These observations motivate us to exploit the above two coupling relations for better IEDL. With this in mind, we propose DoubledIEDL, a novel IEDL approach that consists of two sub-modules for object and image coupling learning, respectively. Specifically, our approach relies on a unified framework equipped with densely connected graph convolutional networks (DCGCN) for both kinds of coupling learning. The learning of the proposed framework proceeds in two stages: a static stage and a dynamic stage. In the first stage, a static graph is constructed to extract shallow coupling information with DCGCN. In the second stage, deep coupling information is further mined via DCGCN on dynamically updated graphs in an iterative manner. The sub-modules for object and image coupling learning share this framework but differ in the static graph construction strategy. Extensive experiments on two public benchmarks, FlickrLDL and TwitterLDL, demonstrate the effectiveness of the proposed DoubledIEDL, yielding significant improvements over previous state-of-the-art models. On FlickrLDL, DoubledIEDL achieves 0.8596 in Cosine and 0.4356 in Kullback–Leibler divergence (K–L); on TwitterLDL, it achieves 0.8717 in Cosine and 0.4705 in K–L.
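The two-stage scheme described in the abstract can be illustrated with a minimal sketch: a standard GCN layer is first applied on a fixed, a-priori graph (static stage), and then on graphs rebuilt from the current node features in each iteration (dynamic stage). This is not the authors' implementation; the k-nearest-neighbour graph update, the ReLU activation, and all function names here are illustrative assumptions standing in for the paper's DCGCN details.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2}, as in standard GCNs."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(A_norm, H, W):
    """One graph-convolution layer with a ReLU activation."""
    return np.maximum(A_norm @ H @ W, 0.0)

def knn_graph(H, k):
    """Hypothetical dynamic-graph update: connect each node to its k most
    cosine-similar neighbours, then symmetrize the adjacency matrix."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)
    sims = Hn @ Hn.T
    np.fill_diagonal(sims, -np.inf)          # exclude self-loops from kNN
    A = np.zeros_like(sims)
    for i in range(sims.shape[0]):
        A[i, np.argsort(sims[i])[-k:]] = 1.0
    return np.maximum(A, A.T)

def two_stage_coupling(X, A_static, weights, k=2):
    # Stage 1 (static): shallow coupling on a graph built a priori
    H = gcn_layer(normalize_adj(A_static), X, weights[0])
    # Stage 2 (dynamic): deep coupling on graphs rebuilt from features
    # at each iteration, one weight matrix per iteration
    for W in weights[1:]:
        H = gcn_layer(normalize_adj(knn_graph(H, k)), H, W)
    return H
```

In this sketch the static graph would encode prior structure (e.g., object co-occurrence or shared image metadata), while the dynamic iterations let the graph follow the learned representations.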

Original language: English
Article number: 110107
Journal: Knowledge-Based Systems
Volume: 260
DOIs
Publication status: Published - 25 Jan 2023

Keywords

  • DCGCN
  • Dynamic iteration
  • Image coupling
  • Image emotion distribution
  • Object coupling

