Environmental Sound Recognition Based on Residual Network and Stacking Algorithm

Haoyuan Wang, Xuemei Ren*, Zhen Zhao

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Environmental sound recognition is an important task in the field of audio research. Because acoustic environments are complex and contain a great deal of irrelevant sound information, traditional methods achieve low recognition accuracy and are gradually being replaced by deep-learning approaches. In this paper, building on the latest research in this field, we propose a recognition algorithm based on a residual network and the stacking method. The model is divided into two parts, a feature extractor and a classifier: the residual network is responsible for extracting highly discriminative features, and the stacking algorithm is responsible for accurate classification. The method is evaluated on the representative ESC-50 and UrbanSound8k datasets, where it achieves higher accuracy while keeping the model clear and simple.
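The abstract describes a two-stage pipeline: a residual network turns MFCC features into embeddings, and a stacking ensemble classifies them. The paper's exact architecture is not given here, so the following is only a minimal sketch of that idea; the layer sizes, the choice of PyTorch and scikit-learn, and the SVM/random-forest base learners are all assumptions, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC


class ResidualBlock(nn.Module):
    """Basic 2-D residual block: two convolutions with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection


class ResNetExtractor(nn.Module):
    """Maps an MFCC 'image' (1 x n_mfcc x frames) to a fixed-length embedding."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
        )
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, embedding_dim)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        return self.fc(self.pool(x).flatten(1))


# Stage 2: a stacking ensemble classifies the learned embeddings.
# Base learners and meta-learner are placeholders chosen for illustration.
stacker = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

# Toy usage with random tensors standing in for ESC-50 / UrbanSound8k MFCCs.
extractor = ResNetExtractor().eval()
mfcc_batch = torch.randn(50, 1, 40, 173)      # 50 clips, 40 MFCCs, 173 frames
with torch.no_grad():
    embeddings = extractor(mfcc_batch).numpy()
labels = np.arange(50) % 5                    # placeholder labels, 5 classes
stacker.fit(embeddings, labels)
print(stacker.predict(embeddings[:3]))
```

In practice the extractor would first be trained end to end on the labelled audio (e.g. with a temporary softmax head), after which its embeddings are fed to the stacking ensemble for the final classification.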

Original language: English
Title of host publication: Proceedings of 2020 Chinese Intelligent Systems Conference - Volume II
Editors: Yingmin Jia, Weicun Zhang, Yongling Fu
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 682-690
Number of pages: 9
ISBN (Print): 9789811584572
DOIs
Publication status: Published - 2021
Event: Chinese Intelligent Systems Conference, CISC 2020 - Shenzhen, China
Duration: 24 Oct 2020 - 25 Oct 2020

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 706 LNEE
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119

Conference

Conference: Chinese Intelligent Systems Conference, CISC 2020
Country/Territory: China
City: Shenzhen
Period: 24/10/20 - 25/10/20

Keywords

  • Environment sound recognition
  • MFCC
  • Residual network
  • Stacking algorithm
