Prediction of Impulsive Aggression Based on Video Images

Borui Zhang, Liquan Dong*, Lingqin Kong, Ming Liu, Yuejin Zhao, Mei Hui, Xuhong Chu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In response to the subjectivity, low accuracy, and high concealment of existing attack behavior prediction methods, a video-based impulsive aggression prediction method that integrates physiological parameters and facial expression information is proposed. The method uses imaging equipment to capture video containing the subject’s face and applies imaging photoplethysmography (IPPG) technology to obtain the subject’s heart rate variability parameters. Meanwhile, a ResNet-34 expression recognition model is constructed to obtain the subject’s facial expression information. A random forest classification model then uses the obtained physiological parameters and facial expression information to predict individual impulsive aggression. Finally, an impulsive aggression induction experiment was designed to verify the method. The experimental results show that the accuracy of this method in predicting the presence or absence of impulsive aggression was 89.39%. These results demonstrate the feasibility of using physiological parameters and facial expression information to predict impulsive aggression. The work has theoretical and practical value for exploring new impulsive aggression prediction methods, and it is also relevant to safety monitoring in special and public places such as prisons and rehabilitation centers.
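As a rough illustration of the fusion step described in the abstract, the sketch below concatenates hypothetical heart rate variability features (as might be derived from an IPPG pulse signal) with expression probabilities (as a ResNet-34 classifier might output) and trains a random forest to predict the presence or absence of impulsive aggression. All feature choices, shapes, and data here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: feature-level fusion of HRV parameters and
# facial expression probabilities with a random forest classifier.
# Feature names, shapes, and data are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects = 200

# Hypothetical HRV parameters per subject (e.g., mean RR interval, SDNN,
# RMSSD, LF/HF ratio) as might be computed from an IPPG pulse signal.
hrv_features = rng.normal(size=(n_subjects, 4))

# Hypothetical expression probabilities per subject (e.g., 7 basic-emotion
# scores averaged over video frames) as a ResNet-34 model might output.
expr_logits = rng.normal(size=(n_subjects, 7))
expr_probs = np.exp(expr_logits) / np.exp(expr_logits).sum(axis=1, keepdims=True)

# Feature-level fusion: concatenate physiological and expression features.
X = np.hstack([hrv_features, expr_probs])
y = rng.integers(0, 2, size=n_subjects)  # 1 = impulsive aggression present

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice the HRV parameters would come from the IPPG signal extracted from the facial video and the expression features from the trained ResNet-34 model; the random data above merely stand in for those inputs.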

Original language: English
Article number: 942
Journal: Bioengineering
Volume: 10
Issue number: 8
DOIs
Publication status: Published - Aug 2023

Keywords

  • facial expression
  • heart rate variability
  • imaging photoplethysmography technology
  • impulsive aggression

