Refining kernel matching pursuit

Jianwu Li*, Yao Lu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Kernel matching pursuit (KMP) is a greedy machine learning algorithm that iteratively appends functions from a kernel-based dictionary to its solution. An obvious problem is that all kernel functions in the dictionary remain unchanged throughout the appending process, yet without sufficient prior knowledge it is difficult to determine an optimal dictionary of kernel functions before training. This paper proposes to further refine the solutions obtained by KMP by adjusting all of their parameters simultaneously. Three optimization methods, gradient descent (GD), simulated annealing (SA), and particle swarm optimization (PSO), are used to perform the refinement. Their performance is analyzed and evaluated through experiments on UCI benchmark datasets.
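To make the two stages concrete, the sketch below implements a minimal version of the idea for 1-D regression: a greedy matching-pursuit loop over a Gaussian kernel dictionary centered at the training points, followed by a gradient-descent refinement that updates all centers, widths, and coefficients of the selected terms simultaneously. This is an illustrative reconstruction, not the authors' code; the kernel choice, learning rate, and dictionary construction are assumptions for the example (the paper also considers SA and PSO for the refinement step, which are not shown here).

```python
import numpy as np

def kmp(x, y, n_terms, sigma=0.3):
    """Greedy kernel matching pursuit with a Gaussian dictionary
    centered at the training points (an illustrative setup)."""
    # G[i, j] = k(x_i, x_j): value of the j-th dictionary atom at x_i.
    G = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    norms = (G ** 2).sum(axis=0)
    residual = y.copy()
    centers, coefs = [], []
    for _ in range(n_terms):
        # Choose the atom whose optimal 1-D fit removes the most residual energy.
        proj = G.T @ residual
        j = int(np.argmax(proj ** 2 / norms))
        a = proj[j] / norms[j]
        centers.append(x[j])
        coefs.append(a)
        residual = residual - a * G[:, j]
    return np.array(centers), np.array(coefs), np.full(n_terms, sigma)

def refine_gd(x, y, centers, coefs, sigmas, lr=0.001, steps=200):
    """Refinement stage: adjust all centers, widths, and coefficients
    jointly by gradient descent on the squared error."""
    c, a, s = centers.copy(), coefs.copy(), sigmas.copy()
    for _ in range(steps):
        d = x[:, None] - c[None, :]
        Phi = np.exp(-d ** 2 / (2 * s[None, :] ** 2))
        err = Phi @ a - y
        # Gradients of 0.5 * ||err||^2 w.r.t. coefficients, centers, widths.
        ga = Phi.T @ err
        gc = (err[:, None] * Phi * d / s[None, :] ** 2 * a[None, :]).sum(axis=0)
        gs = (err[:, None] * Phi * d ** 2 / s[None, :] ** 3 * a[None, :]).sum(axis=0)
        a -= lr * ga
        c -= lr * gc
        s -= lr * gs
    return c, a, s
```

A typical use fits the greedy expansion first, then refines it; with a small enough learning rate the refinement can only tighten the fit, since the greedy solution is its starting point.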

Original language: English
Title of host publication: Advances in Neural Networks - ISNN 2010 - 7th International Symposium on Neural Networks, ISNN 2010, Proceedings
Pages: 25-32
Number of pages: 8
Edition: PART 2
Publication status: Published - 2010
Event: 7th International Symposium on Neural Networks, ISNN 2010 - Shanghai, China
Duration: 6 Jun 2010 - 9 Jun 2010

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 6064 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 7th International Symposium on Neural Networks, ISNN 2010
Country/Territory: China
City: Shanghai
Period: 6/06/10 - 9/06/10

Keywords

  • Gradient descent
  • Kernel matching pursuit
  • Particle swarm optimization
  • Simulated annealing
