Prompt-guided Precise Audio Editing with Diffusion Models

Manjie Xu, Chenxing Li*, Duzhen Zhang, Dan Su, Wei Liang*, Dong Yu*

*Corresponding authors of this work

Research output: Contribution to journal › Conference article › Peer-reviewed

Abstract

Audio editing involves the arbitrary manipulation of audio content through precise control. Although text-guided diffusion models have made significant advancements in text-to-audio generation, they still face challenges in finding a flexible and precise way to modify target events within an audio track. We present a novel approach, referred to as Prompt-guided Precise Audio Editing (PPAE), which serves as a general module for diffusion models and enables precise audio editing. The editing is based on the input textual prompt only and is entirely training-free. We exploit the cross-attention maps of diffusion models to facilitate accurate local editing and employ a hierarchical local-global pipeline to ensure a smoother editing process. Experimental results highlight the effectiveness of our method in various editing tasks.
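The abstract's core idea, exploiting cross-attention maps so that only the targeted event changes while the rest of the audio layout is preserved, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: `cross_attention` and `edit_with_injected_attention` are illustrative names, and the sketch assumes a single attention layer where the source prompt's attention map is injected for all tokens except the edited one.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    """Scaled dot-product cross-attention.
    q: (n_latent, d) latent queries; k, v: (n_tokens, d) text embeddings.
    Returns the attended values and the attention map."""
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))  # (n_latent, n_tokens)
    return attn @ v, attn

def edit_with_injected_attention(q, k_edit, v_edit, attn_src, target_idx):
    """Recompute attention for the edited prompt, but keep the source
    prompt's attention map for every token except the edited one, so
    unedited events retain their spatial/temporal layout (hypothetical
    simplification of attention-map injection)."""
    _, attn_edit = cross_attention(q, k_edit, v_edit)
    attn = attn_src.copy()
    attn[:, target_idx] = attn_edit[:, target_idx]  # swap only the target column
    attn = attn / attn.sum(axis=-1, keepdims=True)  # renormalize rows
    return attn @ v_edit
```

In a real diffusion model this injection would be applied at every cross-attention layer and denoising step; the sketch only shows the per-layer map manipulation.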

Original language: English
Pages (from-to): 55126-55143
Number of pages: 18
Journal: Proceedings of Machine Learning Research
Volume: 235
Publication status: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
