Recognising human interaction from videos by a discriminative model

Yu Kong, Wei Liang*, Zhen Dong, Yunde Jia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

22 Citations (Scopus)

Abstract

This study addresses the problem of recognising human interactions between two people. The main difficulties lie in the partial occlusion of body parts and the motion ambiguity in interactions. The authors observed that the interdependencies existing at both the action level and the body-part level can greatly help disambiguate similar individual movements and facilitate human interaction recognition. Accordingly, they propose a novel discriminative method, which models the action of each person by a large-scale global feature and local body-part features, to capture such interdependencies for recognising the interactions of two people. A variant of the multi-class AdaBoost method is proposed to automatically discover class-specific discriminative three-dimensional body parts. The proposed approach is tested on the authors' newly introduced BIT-Interaction dataset and the UT-Interaction dataset. The results show that the proposed model is quite effective in recognising human interactions.
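The abstract mentions a variant of multi-class AdaBoost for discovering class-specific discriminative body parts, but does not spell out the algorithm. As a rough illustration of the kind of boosting loop such a variant builds on, the sketch below implements a generic SAMME-style multi-class AdaBoost (Zhu et al.) over decision stumps, where each feature dimension stands in for a candidate body-part descriptor. This is not the authors' method: the stump learner, the toy data, and all names here are illustrative assumptions.

```python
import numpy as np

def fit_stump(X, y, w, n_classes):
    """Weighted decision stump: threshold one feature (a stand-in for a
    candidate body-part descriptor) and predict the weighted-majority
    class on each side. Returns (error, feature, threshold, c_left, c_right)."""
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            left = X[:, f] <= thr
            sides = []
            for mask in (left, ~left):
                if mask.any():
                    counts = np.bincount(y[mask], weights=w[mask],
                                         minlength=n_classes)
                    sides.append(int(counts.argmax()))
                else:
                    sides.append(0)
            pred = np.where(left, sides[0], sides[1])
            err = float(w[pred != y].sum())
            if best is None or err < best[0]:
                best = (err, f, thr, sides[0], sides[1])
    return best

def samme(X, y, n_classes, n_rounds=20):
    """Generic SAMME multi-class AdaBoost loop (illustrative sketch)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                # uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        err, f, thr, cl, cr = fit_stump(X, y, w, n_classes)
        err = max(err, 1e-10)
        if err >= 1.0 - 1.0 / n_classes:   # no better than chance: stop
            break
        # SAMME weight: standard AdaBoost alpha plus log(K - 1) term
        alpha = np.log((1 - err) / err) + np.log(n_classes - 1)
        ensemble.append((alpha, f, thr, cl, cr))
        pred = np.where(X[:, f] <= thr, cl, cr)
        w *= np.exp(alpha * (pred != y))   # up-weight misclassified samples
        w /= w.sum()
    return ensemble

def predict(ensemble, X, n_classes):
    """Each weak learner casts an alpha-weighted vote for its class."""
    scores = np.zeros((len(X), n_classes))
    for alpha, f, thr, cl, cr in ensemble:
        pred = np.where(X[:, f] <= thr, cl, cr)
        scores[np.arange(len(X)), pred] += alpha
    return scores.argmax(axis=1)

# Toy 3-class data: each class separable along one feature axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 2))
y = np.zeros(90, dtype=int)
X[30:60, 0] += 4.0; y[30:60] = 1
X[60:, 1] += 4.0;   y[60:] = 2
ens = samme(X, y, n_classes=3)
acc = float((predict(ens, X, 3) == y).mean())
```

In a part-selection setting, the interesting output is which features the boosting rounds pick (the `f` indices in `ensemble`): the most discriminative "parts" accumulate the largest alpha weights.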

Original language: English
Pages (from-to): 277-286
Number of pages: 10
Journal: IET Computer Vision
Volume: 8
Issue number: 4
DOIs
Publication status: Published - 1 Aug 2014

