Learning Multimodal Representations for Drowsiness Detection

Kun Qian*, Tomoya Koike, Toru Nakamura, Bjorn W. Schuller, Yoshiharu Yamamoto

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

Drowsiness detection is a crucial step for safe driving. A plethora of effort has been invested in using pervasive sensor data (e.g., video, physiology) empowered by machine learning to build automatic drowsiness detection systems. Nevertheless, most existing methods rely on complicated wearables (e.g., electroencephalogram) or computer vision algorithms (e.g., eye state analysis), which makes the resulting systems hardly applicable in the wild. Furthermore, the data underlying these methods are insufficient in nature due to limited simulation experiments. In this light, we propose a novel and easily implemented method based on fully non-invasive multimodal machine learning analysis for the driver drowsiness detection task. The drowsiness level was estimated by self-reported questionnaire under pre-designed protocols. First, we consider incorporating environmental data (e.g., temperature, humidity, illuminance, and so forth), which can be regarded as complementary information to the human activity data recorded via accelerometers or actigraphs. Second, we demonstrate that models trained on daily-life data can still make efficient predictions for a subject performing in a simulator, which may benefit future data collection methods. Finally, we conduct a comprehensive study investigating different machine learning methods, including classic 'shallow' models and recent deep models. Experimental results show that our proposed methods can reach 64.6% unweighted average recall for drowsiness detection in a subject-independent scenario.
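The abstract reports performance as unweighted average recall (UAR), i.e., the mean of per-class recalls, which weights each class equally and is therefore robust to class imbalance (here, "alert" samples typically far outnumber "drowsy" ones). A minimal sketch of the metric, not the authors' code:

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """UAR: mean of per-class recalls, so each class counts equally
    regardless of how many samples it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Toy imbalanced example (class names are illustrative):
y_true = ["alert"] * 8 + ["drowsy"] * 2
y_pred = ["alert"] * 8 + ["drowsy", "alert"]
print(unweighted_average_recall(y_true, y_pred))  # (1.0 + 0.5) / 2 = 0.75
```

Note that plain accuracy on this example would be 0.9, masking the poor recall on the minority "drowsy" class; UAR exposes it.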

Original language: English
Pages (from-to): 11539-11548
Number of pages: 10
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 23
Issue: 8
DOI
Publication status: Published - 1 Aug 2022
