Context-Aware Head-and-Eye Motion Generation with Diffusion Model

Yuxin Shen, Manjie Xu, Wei Liang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In humanity's ongoing quest to craft natural and realistic avatars within virtual environments, the generation of authentic eye gaze behaviors is paramount. Eye gaze not only serves as a primary non-verbal communication cue but also reflects cognitive processes, intent, and attentiveness, making it a crucial element in ensuring immersive interactions. However, automatically generating these intricate gaze behaviors presents significant challenges. Traditional methods can be time-consuming and often lack the precision to align gaze behaviors with the nuances of the environment in which the avatar resides. To overcome these challenges, we introduce a novel two-stage approach to generate context-aware head-and-eye motions across diverse scenes. By harnessing the capabilities of advanced diffusion models, our approach produces contextually appropriate eye gaze points, which in turn drive the generation of natural head-and-eye movements. Using Head-Mounted Display (HMD) eye-tracking technology, we also present a comprehensive dataset that captures human eye gaze behaviors together with associated scene features. We show that our approach consistently delivers intuitive and lifelike head-and-eye motions and demonstrates superior performance in motion fluidity, alignment with contextual cues, and overall user satisfaction.
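
To make the two-stage pipeline concrete, below is a minimal, self-contained PyTorch sketch of the idea, not the authors' implementation: a toy scene-conditioned denoising diffusion model samples 3D gaze points (stage one), and a simple aim-and-smooth rule turns them into head yaw/pitch angles (stage two). Every name and setting here (GazePointDenoiser, sample_gaze_points, gaze_to_head_angles, the 128-dimensional scene features, the 50-step noise schedule, the exponential smoothing) is an illustrative assumption rather than detail from the paper.

    import math
    import torch
    import torch.nn as nn

    class GazePointDenoiser(nn.Module):
        """Toy denoiser: predicts the noise added to a 3D gaze point,
        conditioned on a per-frame scene feature vector (hypothetical)."""
        def __init__(self, scene_dim=128, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + scene_dim + 1, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, x_t, scene_feat, t_frac):
            # t_frac: diffusion step normalized to [0, 1], one scalar per sample
            return self.net(torch.cat([x_t, scene_feat, t_frac], dim=-1))

    @torch.no_grad()
    def sample_gaze_points(model, scene_feat, steps=50):
        """Stage one: DDPM-style ancestral sampling of one gaze point per frame."""
        betas = torch.linspace(1e-4, 0.02, steps)
        alphas = 1.0 - betas
        alpha_bars = torch.cumprod(alphas, dim=0)
        x = torch.randn(scene_feat.shape[0], 3)        # start from pure noise
        for t in reversed(range(steps)):
            t_frac = torch.full((x.shape[0], 1), t / steps)
            eps = model(x, scene_feat, t_frac)
            # posterior mean of x_{t-1} given the predicted noise
            x = (x - betas[t] / math.sqrt(1.0 - alpha_bars[t]) * eps) / math.sqrt(alphas[t])
            if t > 0:                                  # add noise except at the last step
                x = x + math.sqrt(betas[t]) * torch.randn_like(x)
        return x

    def gaze_to_head_angles(gaze_points, smoothing=0.8):
        """Stage two stand-in: aim the head at each gaze point, then low-pass
        the yaw/pitch sequence so the motion stays fluid."""
        yaw = torch.atan2(gaze_points[:, 0], gaze_points[:, 2])
        pitch = torch.atan2(gaze_points[:, 1],
                            torch.linalg.norm(gaze_points[:, [0, 2]], dim=-1))
        angles = torch.stack([yaw, pitch], dim=-1)
        out = [angles[0]]
        for a in angles[1:]:                           # exponential smoothing
            out.append(smoothing * out[-1] + (1.0 - smoothing) * a)
        return torch.stack(out)

    if __name__ == "__main__":
        model = GazePointDenoiser()                    # untrained, for shape checks only
        scene_feat = torch.randn(16, 128)              # 16 frames of fake scene features
        gaze = sample_gaze_points(model, scene_feat)
        head = gaze_to_head_angles(gaze)
        print(gaze.shape, head.shape)                  # torch.Size([16, 3]), torch.Size([16, 2])

In the paper itself, the diffusion model is trained on the HMD eye-tracking dataset described above, and the second stage produces full head-and-eye motion rather than this two-angle stand-in.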

Original language: English
Title of host publication: Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 157-167
Number of pages: 11
ISBN (Electronic): 9798350374025
DOIs
Publication status: Published - 2024
Event: 31st IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024 - Orlando, United States
Duration: 16 Mar 2024 → 21 Mar 2024

Publication series

Name: Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024

Conference

Conference: 31st IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024
Country/Territory: United States
City: Orlando
Period: 16/03/24 → 21/03/24

Keywords

  • Human computer interaction (HCI)
  • Human-centered computing
  • Interaction paradigms
  • Virtual reality
