Video quality assessment using space–time slice mappings

Lixiong Liu, Tianshu Wang, Hua Huang*, Alan Conrad Bovik

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

We develop a full-reference (FR) video quality assessment framework that integrates analysis of space–time slices (STSs) with frame-based image quality assessment (IQA) to form a high-performance video quality predictor. The approach first arranges the reference and test video sequences into a space–time slice representation. To more comprehensively characterize space–time distortions, a collection of distortion-aware maps is computed on each reference–test video pair. These reference-distorted maps are then processed using a standard image quality model, such as peak signal-to-noise ratio (PSNR) or Structural Similarity (SSIM). A simple learned pooling strategy combines the multiple IQA outputs to generate a final video quality score. This leads to an algorithm called Space–Time Slice PSNR (STS-PSNR), which we thoroughly tested on three publicly available video quality assessment databases and found that it delivers significantly elevated performance relative to state-of-the-art video quality models. Source code for STS-PSNR is freely available at: http://live.ece.utexas.edu/research/Quality/STS-PSNR_release.zip.
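The core idea from the abstract can be sketched in a few lines: rearrange a video volume so that each slice spans one spatial axis and time, apply a standard IQA model (here PSNR) to each slice stack, and pool the scores. This is a minimal illustration only; the function names are hypothetical, the authors' distortion-aware maps are omitted, and their learned pooling is replaced by a plain mean. For the actual method, see the linked source code release.

```python
import numpy as np

def psnr(ref, dis, peak=255.0):
    """Peak signal-to-noise ratio between two equal-shaped arrays."""
    mse = np.mean((ref.astype(np.float64) - dis.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def space_time_slices(video):
    """Rearrange a (T, H, W) video volume into space-time slice stacks.

    Returns the horizontal stack (H slices, each a (T, W) x-t plane)
    and the vertical stack (W slices, each a (T, H) y-t plane).
    """
    sts_h = video.transpose(1, 0, 2)  # (H, T, W)
    sts_v = video.transpose(2, 0, 1)  # (W, T, H)
    return sts_h, sts_v

def sts_psnr(ref, dis):
    """Toy STS-based score: mean PSNR over frames plus both STS stacks.

    Simplified stand-in for STS-PSNR; the paper's learned pooling and
    distortion-aware maps are not reproduced here (assumption).
    """
    scores = [np.mean([psnr(r, d) for r, d in zip(ref, dis)])]  # frame-wise
    for ref_sts, dis_sts in zip(space_time_slices(ref), space_time_slices(dis)):
        scores.append(np.mean([psnr(r, d) for r, d in zip(ref_sts, dis_sts)]))
    return float(np.mean(scores))
```

Note that the slice stacks are simple axis permutations of the same volume, so temporal distortions (e.g., flicker or jitter) that are invisible in any single frame appear as structured artifacts along the time axis of each slice.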

Original language: English
Article number: 115749
Journal: Signal Processing: Image Communication
Volume: 82
DOIs
Publication status: Published - Mar 2020

Keywords

  • Image quality assessment
  • Learning based pooling
  • Space–time stability
  • Spatial temporal slice
  • Video quality assessment

