VRProp-Net: Real-time Interaction with Virtual Props

Catherine Taylor, Robin McNicholas, Darren Cosker

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding)
Abstract

Virtual and Augmented Reality (VR and AR) are two fast-growing mediums, not only in the entertainment industry but also in health, education and engineering. A good VR or AR application seamlessly merges the real and virtual worlds, making the user feel fully immersed. Traditionally, a computer-generated object is interacted with using controllers or hand gestures [HTC 2019; Microsoft 2019; Oculus 2019]. However, these motions can feel unnatural and do not accurately represent the motion of interacting with a real object. On the other hand, a physical object can be used to control the motion of a virtual object. At present, this can be done by tracking purely rigid motion with an external sensor [HTC 2019]. Alternatively, a sparse set of markers can be tracked, for example with a motion capture system, and their positions used to drive the motion of an underlying non-rigid model. However, this approach is sensitive to changes in marker position and to occlusions, and often involves costly non-standard hardware [Vicon 2019]. In addition, these approaches often require a virtual model to be manually sculpted and rigged, which can be a time-consuming process. Neural networks have been shown to be successful tools in computer vision, with several key methods using networks to track rigid and non-rigid motion in RGB images [Andrychowicz et al. 2018; Kanazawa et al. 2018; Pumarola et al. 2018]. While these methods show potential, they are limited by their reliance on multiple RGB cameras or on large, costly amounts of labelled training data.
Original language: English
Title of host publication: ACM SIGGRAPH 2019 Posters, SIGGRAPH 2019
Subtitle of host publication: SIGGRAPH '19
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450363143
DOI: 10.1145/3306214.3338548
Publication status: Published - 28 Jul 2019

Publication series

Name: ACM SIGGRAPH 2019 Posters, SIGGRAPH 2019

Keywords

  • Neural Networks
  • Non-rigid Object Tracking
  • VR Props

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction

Cite this

Taylor, C., McNicholas, R., & Cosker, D. (2019). VRProp-Net: Real-time Interaction with Virtual Props. In ACM SIGGRAPH 2019 Posters, SIGGRAPH 2019: SIGGRAPH '19 [31] (ACM SIGGRAPH 2019 Posters, SIGGRAPH 2019). Association for Computing Machinery. https://doi.org/10.1145/3306214.3338548