Study on comfort prediction of stereoscopic images based on improved saliency detection

Minghan Du, Guangyu Nie, Yue Liu*, Yongtian Wang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper proposes a saliency-dependent measure to predict the visual comfort of stereoscopic images. To address the drawbacks of traditional visual comfort assessment for stereoscopic displays, a more accurate prediction method based on an improved stereoscopic saliency detection algorithm is presented. The proposed approach consists of three steps. First, region contrast, background prior, surface orientation prior, and depth prior are computed to generate a stereoscopic saliency map. Second, visual comfort perception features are extracted. Finally, prediction performance is evaluated using support vector regression (SVR). Experimental results demonstrate that the proposed method substantially improves prediction accuracy compared with related work.
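The first step fuses four cues (region contrast, background prior, surface orientation prior, and depth prior) into one stereoscopic saliency map. A minimal sketch of such a fusion is shown below; the equal weights and the linear-combination rule are assumptions for illustration, since the abstract does not specify how the cues are combined.

```python
import numpy as np

def fuse_saliency_maps(region_contrast, background_prior,
                       surface_orientation_prior, depth_prior,
                       weights=(0.25, 0.25, 0.25, 0.25)):
    """Fuse four saliency cues into a single stereoscopic saliency map.

    Each input is an H x W array normalized to [0, 1]. The weights and
    the linear fusion rule are hypothetical placeholders, not the
    paper's actual method.
    """
    cues = [region_contrast, background_prior,
            surface_orientation_prior, depth_prior]
    fused = sum(w * c for w, c in zip(weights, cues))
    # Re-normalize to [0, 1] so the map can later weight comfort features.
    fused = fused - fused.min()
    rng = fused.max()
    return fused / rng if rng > 0 else fused
```

The resulting map could then weight disparity-based comfort features before they are fed to an SVR predictor, as in the third step.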

Original language: English
Title of host publication: Image and Graphics Technologies and Applications - 13th Conference on Image and Graphics Technologies and Applications, IGTA 2018, Revised Selected Papers
Editors: Yongtian Wang, Yuxin Peng, Zhiguo Jiang
Publisher: Springer Verlag
Pages: 451-460
Number of pages: 10
ISBN (Print): 9789811317019
DOIs
Publication status: Published - 2018
Event: 13th Conference on Image and Graphics Technologies and Applications, IGTA 2018 - Beijing, China
Duration: 8 Apr 2018 - 10 Apr 2018

Publication series

Name: Communications in Computer and Information Science
Volume: 875
ISSN (Print): 1865-0929

Conference

Conference: 13th Conference on Image and Graphics Technologies and Applications, IGTA 2018
Country/Territory: China
City: Beijing
Period: 8/04/18 - 10/04/18

Keywords

  • Assessment system
  • Saliency detection
  • Stereoscopic display
  • Visual comfort
