TY - GEN
T1 - Multi-level attention model with deep scattering spectrum for acoustic scene classification
AU - Li, Zhitong
AU - Hou, Yuanbo
AU - Xie, Xiang
AU - Li, Shengchen
AU - Zhang, Liqiang
AU - Du, Shixuan
AU - Liu, Wei
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Acoustic scene classification (ASC) refers to the classification of audio into one of a set of predefined classes that characterize the environment. Log-mel filterbank features combined with convolutional neural networks (CNNs) are commonly used to build ASC systems. In this paper, we explore the use of deep scattering spectrum (DSS) features combined with a multi-level attention model based on a CNN for ASC tasks. First, the time-scatter and frequency-scatter coefficients of the DSS at different resolutions are explored as ASC features. Second, we incorporate a multi-level attention model into the CNN to build the classification system. We then evaluate the proposed approach on the dataset of the IEEE challenge on detection and classification of acoustic scenes and events 2018 (DCASE 2018). Results show that the DSS features provide an 11%-14% relative improvement in accuracy over log-mel features within a state-of-the-art framework. Applying the multi-level attention model to the CNN improves accuracy by nearly 5%. The highest accuracy of our proposed system is 78.3% on the development set.
AB - Acoustic scene classification (ASC) refers to the classification of audio into one of a set of predefined classes that characterize the environment. Log-mel filterbank features combined with convolutional neural networks (CNNs) are commonly used to build ASC systems. In this paper, we explore the use of deep scattering spectrum (DSS) features combined with a multi-level attention model based on a CNN for ASC tasks. First, the time-scatter and frequency-scatter coefficients of the DSS at different resolutions are explored as ASC features. Second, we incorporate a multi-level attention model into the CNN to build the classification system. We then evaluate the proposed approach on the dataset of the IEEE challenge on detection and classification of acoustic scenes and events 2018 (DCASE 2018). Results show that the DSS features provide an 11%-14% relative improvement in accuracy over log-mel features within a state-of-the-art framework. Applying the multi-level attention model to the CNN improves accuracy by nearly 5%. The highest accuracy of our proposed system is 78.3% on the development set.
KW - Acoustic scene classification
KW - DCASE 2018
KW - Deep scattering spectrum
KW - Multi-level attention mechanism
UR - http://www.scopus.com/inward/record.url?scp=85071455855&partnerID=8YFLogxK
U2 - 10.1109/ICMEW.2019.00074
DO - 10.1109/ICMEW.2019.00074
M3 - Conference contribution
AN - SCOPUS:85071455855
T3 - Proceedings - 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019
SP - 396
EP - 401
BT - Proceedings - 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2019
Y2 - 8 July 2019 through 12 July 2019
ER -