Variable support-weight approach for correspondence search based on modified census transform

Huajian Zhu*, Junzheng Wang, Jing Li

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

The computation of stereo depth is an important field of computer vision. To address the low accuracy of the traditional Census-based stereo matching algorithm, this paper proposes a variable support-weight approach for visual correspondence search based on a modified Census transform. After analyzing the defects of the traditional Census transform, a modified Census transform is introduced that uses the average value of the minimum-evenness sub-area as the reference instead of the center pixel intensity, which enhances the robustness of the algorithm. Matching accuracy is further improved by weighting the average value and the standard deviation of the Hamming distances within a block. Experimental results indicate that the proposed approach performs better than traditional ones, and accurate disparities are obtained even in depth-discontinuity regions.
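
The abstract only outlines the method, so the following Python sketch illustrates the core idea under stated assumptions: a Census-style transform whose reference value is the mean of the lowest-variance (most even) sub-area of the window rather than the center pixel intensity, with Hamming distance as the matching cost. The 5x5 window, the four overlapping 3x3 corner sub-areas, and all function names are illustrative assumptions, not details taken from the paper; the block-wise weighting of the mean and standard deviation of Hamming distances is omitted.

```python
# Minimal sketch of a modified Census transform for stereo matching.
# Assumptions (not from the paper): 5x5 windows, four overlapping 3x3 corner
# sub-areas as candidates for the "minimum evenness" region, variance as the
# evenness measure.
import numpy as np

def census_reference(window):
    """Mean of the 3x3 sub-area with the smallest variance in a 5x5 window."""
    subs = [window[:3, :3], window[:3, 2:], window[2:, :3], window[2:, 2:]]
    return min(subs, key=np.var).mean()

def modified_census(img, half=2):
    """Per-pixel Census code: neighbour >= reference value (centre bit dropped)."""
    h, w = img.shape
    n_bits = (2 * half + 1) ** 2 - 1
    codes = np.zeros((h, w, n_bits), dtype=bool)
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = img[y - half:y + half + 1, x - half:x + half + 1]
            ref = census_reference(win)
            bits = (win >= ref).ravel()
            codes[y, x] = np.delete(bits, bits.size // 2)  # drop centre position
    return codes

def matching_cost(code_left, code_right):
    """Hamming distance between two Census codes."""
    return np.count_nonzero(code_left != code_right)
```

In a full matcher, these per-pixel codes would be compared across candidate disparities and the resulting Hamming costs aggregated with the paper's variable support weights before disparity selection.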

Original language: English
Title of host publication: ICSP 2012 - 2012 11th International Conference on Signal Processing, Proceedings
Pages: 972-975
Number of pages: 4
DOIs
Publication status: Published - 2012
Event: 2012 11th International Conference on Signal Processing, ICSP 2012 - Beijing, China
Duration: 21 Oct 2012 - 25 Oct 2012

Publication series

Name: International Conference on Signal Processing Proceedings, ICSP
Volume: 2

Conference

Conference: 2012 11th International Conference on Signal Processing, ICSP 2012
Country/Territory: China
City: Beijing
Period: 21/10/12 - 25/10/12

Keywords

  • minimum evenness sub-area
  • modified Census transform
  • stereo matching
  • variable support-weight
