Adaptive relevance feedback for fusion of text and visual features

Leszek Kaliciak, Hans Myrhaug, Ayse Goker, Dawei Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

It has been shown that a query can be correlated with its context, in this case the feedback images, to varying extents. We introduce an adaptive weighting scheme in which the respective weights are automatically adjusted according to the strength of the relationship between the visual query and its visual context, and between the textual query and its textual context, measured as the number of terms or visual terms (mid-level visual features) co-occurring between the current query and its context. A user simulation experiment has shown that this kind of adaptation can indeed further improve the effectiveness of hybrid CBIR models.
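
To make the weighting scheme concrete, the sketch below illustrates one plausible reading of the abstract in Python: each modality's fusion weight grows with the number of terms or visual terms that the query shares with its feedback context. The function names, the overlap measure, and the normalisation are assumptions made for illustration, not the paper's exact formulation.

```python
# Minimal sketch of the adaptive weighting idea described in the abstract.
# NOT the authors' exact method: the overlap measure and normalisation
# below are illustrative assumptions.

def overlap(query_terms, context_terms):
    """Count terms (textual words or mid-level visual terms) that co-occur
    between the current query and its feedback context."""
    return len(set(query_terms) & set(context_terms))


def adaptive_weights(text_query, text_context, visual_query, visual_context):
    """Return (w_text, w_visual) that grow with the co-occurrence strength
    between each query and its context; the weights sum to 1."""
    t = overlap(text_query, text_context)
    v = overlap(visual_query, visual_context)
    if t + v == 0:
        return 0.5, 0.5  # no overlap in either modality: fall back to equal weights
    return t / (t + v), v / (t + v)


def fused_score(text_score, visual_score, w_text, w_visual):
    """Weighted late fusion of per-image retrieval scores."""
    return w_text * text_score + w_visual * visual_score


# Example: the textual query shares more terms with the feedback images'
# annotations than the visual query shares visual terms, so text is up-weighted.
w_t, w_v = adaptive_weights(
    text_query={"beach", "sunset"},
    text_context={"beach", "sunset", "sea"},
    visual_query={"vw_12", "vw_87"},
    visual_context={"vw_87", "vw_301"},
)
print(w_t, w_v, fused_score(0.7, 0.4, w_t, w_v))
```

In this reading, re-ranking with the fused score automatically leans on whichever modality is more strongly tied to the feedback images, rather than using fixed fusion weights.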

Original language: English
Title of host publication: 2015 18th International Conference on Information Fusion, Fusion 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1322-1329
Number of pages: 8
ISBN (Electronic): 9780982443866
Publication status: Published - 14 Sept 2015
Externally published: Yes
Event: 18th International Conference on Information Fusion, Fusion 2015 - Washington, United States
Duration: 6 Jul 2015 - 9 Jul 2015

Publication series

Name: 2015 18th International Conference on Information Fusion, Fusion 2015

Conference

Conference: 18th International Conference on Information Fusion, Fusion 2015
Country/Territory: United States
City: Washington
Period: 6/07/15 - 9/07/15

Keywords

  • Adaptive Weighting Scheme
  • Early Fusion
  • Hybrid Relevance Feedback
  • Late Fusion
  • Re-Ranking
  • Textual Features
  • Visual Features
