Teaching machines on snoring: A benchmark on computer audition for snore sound excitation localisation

Kun Qian*, Christoph Janott, Zixing Zhang, Jun Deng, Alice Baird, Clemens Heiser, Winfried Hohenhorst, Michael Herzog, Werner Hemmert, Björn Schuller

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

This paper presents a comprehensive study on machine listening for the localisation of snore sound excitation. We investigate the effects of varied frame sizes and overlaps of the analysed audio chunks when extracting low-level descriptors. In addition, we explore the performance of each kind of feature when fed into varied classifier models, including support vector machines, k-nearest neighbours, linear discriminant analysis, random forests, extreme learning machines, kernel-based extreme learning machines, multilayer perceptrons, and deep neural networks. Experimental results demonstrate that wavelet packet transform energy can outperform most other features. A deep neural network trained with subband energy ratios reaches the highest performance, achieving an unweighted average recall of 72.8% for four types of snoring.
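The pipeline described in the abstract (frame-based feature extraction, a classifier, and evaluation by unweighted average recall) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the subband layout, frame parameters, and the synthetic four-class toy data below are all assumptions chosen for demonstration, and a support vector machine stands in for the full set of classifiers compared in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

def subband_energy_ratios(signal, frame_len=512, overlap=0.5, n_bands=8):
    """Frame a signal and return clip-level subband energy ratios.

    Hypothetical feature extractor: the paper's exact subband layout is
    not reproduced here, so equal-width FFT bands are used for illustration.
    """
    hop = int(frame_len * (1.0 - overlap))
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    feats = []
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * np.hanning(frame_len)
        spec = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum
        bands = np.array_split(spec, n_bands)           # equal-width bands
        energies = np.array([b.sum() for b in bands])
        feats.append(energies / (energies.sum() + 1e-12))  # ratios per frame
    return np.mean(feats, axis=0)                       # average over frames

# Synthetic toy data standing in for the four snore classes: each class is
# a tone at a different frequency plus noise (purely for demonstration).
rng = np.random.default_rng(0)
sr, X, y = 16000, [], []
t = np.arange(sr) / sr
for label in range(4):
    for _ in range(30):
        sig = np.sin(2 * np.pi * (500 + 2000 * label) * t)
        sig += 0.3 * rng.standard_normal(sr)
        X.append(subband_energy_ratios(sig))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])           # train on even indices
pred = clf.predict(X[1::2])                           # test on odd indices
uar = recall_score(y[1::2], pred, average="macro")    # unweighted avg. recall
```

Unweighted average recall (the paper's reported metric) is the mean of the per-class recalls, which `recall_score(..., average="macro")` computes; unlike accuracy, it is insensitive to class imbalance.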

Original language: English
Pages (from-to): 465-475
Number of pages: 11
Journal: Archives of Acoustics
Volume: 43
Issue number: 3
DOIs
Publication status: Published - 2018
Externally published: Yes

Keywords

  • Acoustic features
  • Machine learning
  • Obstructive sleep apnea
  • Snore sound
