A wavelet image coding algorithm based on human visual system characteristics

Li Xiong Liu*, Wei Wei Wang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

In this paper, a novel wavelet image coding algorithm based on human visual system characteristics is proposed. After dividing the original image into sub-blocks classified as smooth, edge, or texture blocks, different perceptual weights and compression algorithms are applied to each sub-block. Optimal coding of the whole image is achieved by choosing the perceptual weights and compression algorithms according to the human visual system characteristics of each sub-block. An innovative image coding process, in which different signal transforms are combined to provide an efficient and effective image representation, is put forward here. Compared with the traditional single-wavelet image coding algorithm, experimental results show substantial improvements in both the visual quality of the reconstructed image and coding speed.
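The paper itself provides no code, but the block-classification step it describes can be illustrated with a minimal Python sketch. The variance threshold, edge-ratio threshold, gradient cut-off, and perceptual weight values below are all hypothetical assumptions for illustration; the paper does not specify them, and only the three-way smooth/edge/texture classification with per-class perceptual weights comes from the abstract.

```python
# Minimal sketch of HVS-based block classification (assumptions marked).
import numpy as np

T_SMOOTH = 50.0   # hypothetical variance threshold for smooth blocks
T_EDGE = 0.15     # hypothetical edge-pixel-ratio threshold for edge blocks

# Perceptual weights: the HVS is typically most sensitive to distortion in
# smooth regions, less on edges, least in textures. The exact values are
# assumptions, not values from the paper.
WEIGHTS = {"smooth": 1.0, "edge": 0.8, "texture": 0.6}

def classify_block(block: np.ndarray) -> str:
    """Label an image block as smooth, edge, or texture."""
    if block.var() < T_SMOOTH:
        return "smooth"
    # Simple gradient-magnitude edge detector.
    gy, gx = np.gradient(block.astype(float))
    edge_ratio = np.mean(np.hypot(gx, gy) > 32)  # 32: hypothetical cut-off
    return "edge" if edge_ratio > T_EDGE else "texture"

def perceptual_weight_map(image: np.ndarray, bs: int = 8) -> np.ndarray:
    """Per-block perceptual weights; image sides must be divisible by bs."""
    h, w = image.shape
    weights = np.empty((h // bs, w // bs))
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            label = classify_block(image[i:i + bs, j:j + bs])
            weights[i // bs, j // bs] = WEIGHTS[label]
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    print(perceptual_weight_map(img))
```

In a coder along the lines the abstract suggests, such a weight map would scale the wavelet coefficients (or the bit allocation) of each sub-block before a coder such as SPIHT or SPECK is applied per block type.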

Original language: English
Title of host publication: Proceedings of the 2008 International Conference on Wavelet Analysis and Pattern Recognition, ICWAPR
Pages: 113-117
Number of pages: 5
Publication status: Published - 2008
Event: 2008 International Conference on Wavelet Analysis and Pattern Recognition, ICWAPR - Hong Kong, China
Duration: 30 Aug 2008 - 31 Aug 2008

Publication series

Name: Proceedings of the 2008 International Conference on Wavelet Analysis and Pattern Recognition, ICWAPR
Volume: 1

Conference

Conference: 2008 International Conference on Wavelet Analysis and Pattern Recognition, ICWAPR
Country/Territory: China
City: Hong Kong
Period: 30/08/08 - 31/08/08

Keywords

  • Human visual system
  • Image coding
  • SPECK
  • SPIHT
  • Wavelet transform
