Scene-context-aware indoor object selection and movement in VR

Miao Wang, Zi-Ming Ye, Jin-Chuan Shi, Yongliang Yang

Research output: Chapter in a published conference proceeding


Abstract

Virtual reality (VR) applications such as interior design typically require accurate and efficient selection and movement of indoor objects. In this paper, we present an indoor object selection and movement approach by taking into account scene contexts such as object semantics and interrelations. This provides more intelligence and guidance to the interaction, and greatly enhances user experience. We evaluate our proposals by comparing them with traditional approaches in different interaction modes based on controller, head pose, and eye gaze. Extensive user studies on a variety of selection and movement tasks are conducted to validate the advantages of our approach. We demonstrate our findings via a furniture arrangement application.

Original language: English
Title of host publication: 2021 IEEE Virtual Reality and 3D User Interfaces (VR)
Publisher: IEEE
Pages: 235-244
Number of pages: 10
Volume: 2021
ISBN (Electronic): 9780738125565
DOIs
Publication status: Published - 10 May 2021
Event: 28th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2021 - Virtual, Lisboa, Portugal
Duration: 27 Mar 2021 - 3 Apr 2021

Publication series

Name: Proceedings - 2021 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2021
Publisher: IEEE
ISSN (Print): 2642-5256
ISSN (Electronic): 2642-5254

Conference

Conference: 28th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2021
Country/Territory: Portugal
City: Virtual, Lisboa
Period: 27/03/21 - 03/04/21

Keywords

  • Human-centered computing → Human computer interaction (HCI) → Interaction techniques → Pointing

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Media Technology
