TY - GEN
T1 - Object category recognition using boosting tree with heterogeneous features
AU - Lin, Liang
AU - Xiong, Caiming
AU - Liu, Yue
AU - Wang, Yongtian
PY - 2007
Y1 - 2007
N2 - The problem of object category recognition has long challenged the computer vision community. In this paper, we address this task by learning two-class and multi-class discriminative models. The proposed approach integrates the AdaBoost algorithm into a decision tree structure, called DB-Tree, in which each tree node combines a number of weak classifiers into a strong classifier (a conditional posterior probability). In the learning stage, the boosted classifier in each tree node is trained to split the training set into left and right sub-trees; the classifier thus does not return the class of a sample but rather assigns the sample to the left or right sub-tree. The DB-Tree can therefore be built up automatically and recursively. In the testing stage, the posterior probability of each node is computed as the weighted conditional probability of its left and right sub-trees, so the root node outputs the overall posterior probability. In addition, the multi-class and two-class learning procedures are unified by treating the multi-class classification problem as a special two-class classification problem, in which either a positive or negative label is assigned to each class to minimize the total entropy at each node.
KW - Boosting tree
KW - Discriminative model
KW - Object recognition
UR - http://www.scopus.com/inward/record.url?scp=42549164212&partnerID=8YFLogxK
U2 - 10.1117/12.749921
DO - 10.1117/12.749921
M3 - Conference contribution
AN - SCOPUS:42549164212
SN - 9780819469526
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - MIPPR 2007
T2 - MIPPR 2007: Pattern Recognition and Computer Vision
Y2 - 15 November 2007 through 17 November 2007
ER -