Daily Mental Health Monitoring from Speech: A Real-World Japanese Dataset and Multitask Learning Analysis

Meishu Song, Andreas Triantafyllopoulos, Zijiang Yang, Hiroki Takeuchi, Toru Nakamura, Akifumi Kishi, Tetsuro Ishizawa, Kazuhiro Yoshiuchi, Xin Jing, Vincent Karas, Zhonghao Zhao, Kun Qian, Bin Hu, Björn W. Schuller, Yoshiharu Yamamoto*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Translating mental health recognition from clinical research into real-world applications requires extensive data, yet existing emotion datasets offer little support for daily mental health monitoring, especially for recognizing self-reported anxiety and depression. We introduce the Japanese Daily Speech Dataset (JDSD), a large in-the-wild daily speech emotion dataset consisting of 20,827 speech samples from 342 speakers, totaling 54 hours. The data are annotated on the Depression and Anxiety Mood Scale (DAMS): nine self-reported mood items for evaluating mood state, namely "vigorous", "gloomy", "concerned", "happy", "unpleasant", "anxious", "cheerful", "depressed", and "worried". The dataset is diverse in emotional states, activities, and recording times, making it well suited to training models that track daily emotional states for healthcare purposes. We partition the corpus and provide a multitask benchmark across the nine emotions, demonstrating that self-reported mental health states can be predicted reliably from speech, with an average Concordance Correlation Coefficient of .547. We hope that JDSD will become a valuable resource for advancing daily emotional healthcare tracking.
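For context, the Concordance Correlation Coefficient (CCC) reported above rewards both correlation and agreement in scale and location between predictions and ratings, which makes it stricter than Pearson correlation for regression benchmarks of this kind. A minimal NumPy sketch of Lin's CCC (illustrative only, not the authors' evaluation code) is:

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Lin's Concordance Correlation Coefficient for 1-D arrays."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()  # population covariance
    return float(2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2))

# Example: a constant offset preserves correlation but lowers CCC.
truth = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(ccc(truth, truth))        # 1.0 (perfect agreement)
print(ccc(truth, truth + 1.0))  # 0.8 (penalized for the mean shift)
```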
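The abstract does not specify the benchmark architecture. One common multitask design for this setting is a shared encoder feeding one regression head per DAMS item; a hypothetical PyTorch sketch follows, where all layer sizes, including the 768-dimensional input feature (e.g., a pretrained speech embedding), are assumptions for illustration:

```python
import torch
import torch.nn as nn

class MultitaskDAMSHead(nn.Module):
    """Hypothetical multitask regressor: a shared trunk feeds nine
    per-item regression heads, one per DAMS mood dimension."""
    def __init__(self, feat_dim: int = 768, n_tasks: int = 9):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(256, 1) for _ in range(n_tasks)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.shared(x)
        # Concatenate per-task scalar predictions into a (batch, 9) tensor.
        return torch.cat([head(h) for head in self.heads], dim=-1)

model = MultitaskDAMSHead()
scores = model(torch.randn(4, 768))  # (4, 9): one score per DAMS item
```

Sharing the trunk lets the nine correlated mood items regularize each other, which is the usual motivation for a multitask benchmark over training nine separate models.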

Keywords

  • Daily Speech
  • Mental Health
  • Multitask Learning
  • Speech Emotion Recognition
