Audio to Deep Visual: Speaking Mouth Generation Based on 3D Sparse Landmarks

Hui Fang, Dongdong Weng*, Zeyu Tian, Zhen Song*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Having a system that automatically generates a talking mouth in sync with input speech would enhance speech communication and enable many novel applications. This article presents a new model that generates 3D talking-mouth landmarks from Chinese speech. We model mouth motion with sparse 3D landmarks, which are easy to capture and provide sufficient lip accuracy. The 4D mouth-motion dataset was collected with our self-developed facial capture device, filling the gap in Chinese speech-driven lip datasets. The experimental results show that the generated talking landmarks achieve accurate, smooth, and natural 3D mouth movements.
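The abstract does not specify the network architecture. As a rough illustration of the task framing only, the sketch below maps per-frame audio features to 3D coordinates for a sparse set of mouth landmarks; the module names, dimensions, landmark count, and LSTM backbone are all assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch: a generic audio-to-landmark regressor, NOT the
# architecture from the paper. It maps a sequence of audio features
# (e.g., 80-dim mel frames) to per-frame 3D mouth landmark coordinates.
import torch
import torch.nn as nn


class AudioToMouthLandmarks(nn.Module):
    def __init__(self, audio_dim: int = 80, hidden_dim: int = 256,
                 num_landmarks: int = 20):
        super().__init__()
        # Temporal encoder over the audio feature sequence.
        self.encoder = nn.LSTM(audio_dim, hidden_dim, num_layers=2,
                               batch_first=True, bidirectional=True)
        # Regress (x, y, z) for each landmark from the hidden state.
        self.head = nn.Linear(2 * hidden_dim, num_landmarks * 3)
        self.num_landmarks = num_landmarks

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim)
        h, _ = self.encoder(audio_feats)
        out = self.head(h)  # (batch, frames, num_landmarks * 3)
        return out.view(out.size(0), out.size(1), self.num_landmarks, 3)


model = AudioToMouthLandmarks()
mel = torch.randn(1, 100, 80)  # 100 frames of 80-dim mel features
landmarks = model(mel)         # (1, 100, 20, 3): a 3D mouth landmark track
```

In a setup like this, training would minimize a regression loss (e.g., L2) between predicted and captured landmark trajectories, with a smoothness term to encourage the "smooth and natural" motion the abstract reports.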

Original language: English
Title of host publication: Proceedings - 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 605-606
Number of pages: 2
ISBN (Electronic): 9798350348392
DOIs
Publication status: Published - 2023
Event: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023 - Shanghai, China
Duration: 25 Mar 2023 - 29 Mar 2023

Publication series

Name: Proceedings - 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023

Conference

Conference: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2023
Country/Territory: China
City: Shanghai
Period: 25/03/23 - 29/03/23

Keywords

  • Applications
  • Artificial intelligence
  • Computer graphics
  • Computing methodologies
  • Natural language processing
