Virtual agent positioning driven by scene semantics in mixed reality

Yining Lang, Wei Liang, Lap-Fai Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

24 Citations (Scopus)

Abstract

When a user interacts with a virtual agent via a mixed reality device, such as a HoloLens or a Magic Leap headset, it is important to consider the semantics of the real-world scene in positioning the virtual agent, so that it interacts with the user and the objects in the real world naturally. Mixed reality aims to blend the virtual world with the real world seamlessly. In line with this goal, in this paper, we propose a novel approach to use scene semantics to guide the positioning of a virtual agent. Such considerations can avoid unnatural interaction experiences, e.g., interacting with a virtual human floating in the air. To obtain the semantics of a scene, we first reconstruct the 3D model of the scene by using the RGB-D cameras mounted on the mixed reality device (e.g., a HoloLens). Then, we employ the Mask R-CNN object detector to detect objects relevant to the interactions within the scene context. To evaluate the positions and orientations for placing a virtual agent in the scene, we define a cost function based on the scene semantics, which comprises a visibility term and a spatial term. We then apply a Markov chain Monte Carlo optimization technique to search for an optimized solution for placing the virtual agent. We carried out user study experiments to evaluate the results generated by our approach. The results show that our approach achieved higher user evaluation scores than the alternative approaches.
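The abstract's placement step, i.e., minimizing a cost combining a visibility term and a spatial term via Markov chain Monte Carlo search, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the cost terms, weights, and the 2D position-only search space are hypothetical stand-ins, since the actual terms are defined on the reconstructed scene semantics.

```python
import math
import random

# Hypothetical stand-in for the visibility term: favor placements
# near the user's view center (the paper defines this on the scene).
def visibility_cost(pos, user=(0.0, 0.0)):
    return math.hypot(pos[0] - user[0], pos[1] - user[1])

# Hypothetical stand-in for the spatial term: penalize placements
# that come within 0.5 m of a detected object.
def spatial_cost(pos, obstacles=((1.0, 1.0),)):
    return sum(max(0.0, 0.5 - math.hypot(pos[0] - ox, pos[1] - oy))
               for ox, oy in obstacles)

def total_cost(pos, w_vis=1.0, w_spa=5.0):
    # Weighted sum of the two terms; weights are illustrative.
    return w_vis * visibility_cost(pos) + w_spa * spatial_cost(pos)

def mcmc_place(iterations=2000, beta=10.0, step=0.2, seed=0):
    """Metropolis-style MCMC search for a low-cost agent position."""
    rng = random.Random(seed)
    pos = (rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0))
    cost = total_cost(pos)
    best_pos, best_cost = pos, cost
    for _ in range(iterations):
        # Propose a Gaussian perturbation of the current position.
        cand = (pos[0] + rng.gauss(0.0, step), pos[1] + rng.gauss(0.0, step))
        cand_cost = total_cost(cand)
        # Accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-beta * delta).
        if cand_cost < cost or rng.random() < math.exp(-beta * (cand_cost - cost)):
            pos, cost = cand, cand_cost
            if cost < best_cost:
                best_pos, best_cost = pos, cost
    return best_pos, best_cost
```

In this toy setup the sampler drifts toward the user's view center while the spatial term keeps it clear of the obstacle; the paper's method additionally optimizes orientation and evaluates the terms against the reconstructed, semantically labeled scene.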

Original language: English
Title of host publication: 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 767-775
Number of pages: 9
ISBN (Electronic): 9781728113777
DOIs
Publication status: Published - Mar 2019
Event: 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Osaka, Japan
Duration: 23 Mar 2019 – 27 Mar 2019

Publication series

Name: 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings

Conference

Conference: 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019
Country/Territory: Japan
City: Osaka
Period: 23/03/19 – 27/03/19

Keywords

  • Mixed reality
  • Scene understanding
  • Virtual agent positioning
