Robust Stereoscopic Crosstalk Prediction

Jianbing Shen, Yan Zhang, Zhiyuan Liang, Chang Liu, Hanqiu Sun, Xiaopeng Hao, Jianhong Liu, Jian Yang, Ling Shao

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

We propose a new metric to predict perceived crosstalk using the original images rather than both the original and ghosted images. The proposed metrics are based on color information. First, we extract a disparity map, a color difference map, and a color contrast map from the original image pairs. Then, we use these maps to construct two new metrics, $V_{\mathrm{dispc}}$ and $V_{\mathrm{dlogc}}$. Metric $V_{\mathrm{dispc}}$ considers the effect of the disparity map and the color difference map, while $V_{\mathrm{dlogc}}$ addresses the influence of the color contrast map. The prediction performance is evaluated using various types of stereoscopic crosstalk images. By incorporating $V_{\mathrm{dispc}}$ and $V_{\mathrm{dlogc}}$, the new metric $V_{\mathrm{pdlc}}$ is proposed to achieve a higher correlation with the perceived subjective crosstalk scores. Experimental results show that the new metrics achieve better performance than previous methods, which indicates that color information is a key factor in crosstalk visibility prediction. Furthermore, we construct a new data set to evaluate the new metrics.
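
The abstract only outlines the pipeline, so the sketch below is purely illustrative: it shows how a disparity map, a color difference map, and a color contrast map could be combined into scalar scores. The disparity estimation, the exact color-difference and contrast formulas, the weighting parameter alpha, and the helper names (color_difference_map, color_contrast_map, crosstalk_scores) are assumptions for this sketch, not the authors' method.

# Minimal sketch of a crosstalk-prediction pipeline of this kind; all formulas
# below are placeholder stand-ins, not the metrics defined in the paper.
import numpy as np

def color_difference_map(left, right, disparity):
    # Per-pixel color difference between the left view and the
    # disparity-shifted right view (Euclidean distance in RGB).
    h, w, _ = left.shape
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    shifted_cols = np.clip(cols - disparity.astype(int), 0, w - 1)
    warped_right = right[rows, shifted_cols]
    return np.linalg.norm(left.astype(float) - warped_right.astype(float), axis=2)

def color_contrast_map(image, eps=1.0, k=5):
    # Crude local contrast: log-ratio of each pixel's luminance to the mean
    # of a k-by-k neighborhood (placeholder for the paper's contrast map).
    lum = image.astype(float).mean(axis=2)
    pad = np.pad(lum, k // 2, mode='edge')
    local_mean = np.zeros_like(lum)
    for dy in range(k):
        for dx in range(k):
            local_mean += pad[dy:dy + lum.shape[0], dx:dx + lum.shape[1]]
    local_mean /= k * k
    return np.abs(np.log((lum + eps) / (local_mean + eps)))

def crosstalk_scores(left, right, disparity, alpha=0.5):
    # Toy scalar scores in the spirit of V_dispc (disparity and color
    # difference), V_dlogc (color contrast), and a weighted combination
    # standing in for V_pdlc.
    cdiff = color_difference_map(left, right, disparity)
    ccontrast = color_contrast_map(left)
    v_dispc = float(np.mean(np.abs(disparity) * cdiff))
    v_dlogc = float(np.mean(ccontrast * cdiff))
    v_pdlc = alpha * v_dispc + (1 - alpha) * v_dlogc
    return v_dispc, v_dlogc, v_pdlc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    right = np.roll(left, 2, axis=1)          # synthetic stereo pair
    disparity = np.full((64, 64), 2.0)        # constant disparity for the demo
    print(crosstalk_scores(left, right, disparity))

Under these assumptions, larger disparities, larger inter-view color differences, and stronger local contrast all push the scores upward, which is the qualitative behavior the abstract attributes to its metrics.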

Original language: English
Pages (from-to): 1158-1168
Number of pages: 11
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 28
Issue number: 5
DOIs
Publication status: Published - May 2018

Keywords

  • Color contrast information
  • Crosstalk perception
  • Disparity map
  • Objective metric
