Transporting Real Objects into Virtual and Augmented Environments

Catherine Taylor, Murray Evans, Darren Cosker

Research output: Chapter in a published conference proceeding


Abstract

Despite the growing interest in virtual and augmented reality (VR/AR), there are only a limited number of approaches for transporting a physical object into a virtual environment so that it can be used within a VR or AR experience. An external sensor can be attached to an object to capture its 3D position and orientation, but this offers no information about the object's non-rigid behaviour. Alternatively, sparse markers can be tracked to drive a rigged model; however, this approach is sensitive to changes in marker positions and to occlusions, and often involves costly non-standard hardware. To address these limitations, we propose an end-to-end pipeline for creating interactive virtual props from real-world physical objects. Within this pipeline we explore two methods for tracking our physical objects. The first is a multi-camera RGB system which tracks the 3D centroids of the coloured parts of an object and then uses a feed-forward neural network to infer deformations from these centroids. The second is a single-RGBD-camera approach using VRProp-Net, a custom convolutional neural network designed for tracking rigid and non-rigid objects in unlabelled RGB images. We find that both approaches have advantages and disadvantages. While frame rates are similar, the multi-view system offers a larger tracking volume. On the other hand, the single-camera approach is more portable, does not require calibration, and predicts the deformation parameters more accurately.
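
The abstract gives no implementation details, but a minimal sketch of the first tracking approach, the feed-forward network that regresses deformation parameters from tracked 3D centroids, might look as follows. This is an illustrative assumption rather than the authors' published architecture: the class name, layer sizes, activations, and the use of blendshape-style deformation weights are all hypothetical, and PyTorch is assumed.

```python
# Hypothetical sketch of the centroid-to-deformation regressor described in
# the abstract. Layer sizes, activations, and the blendshape-style output
# are assumptions, not the published VRProp pipeline architecture.
import torch
import torch.nn as nn

class CentroidDeformationNet(nn.Module):
    """Feed-forward network mapping the tracked 3D centroids of an object's
    coloured parts to deformation parameters for a rigged virtual prop."""

    def __init__(self, num_parts: int, num_deform_params: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_parts * 3, hidden),  # flattened (x, y, z) centroids
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_deform_params),  # e.g. blendshape weights
        )

    def forward(self, centroids: torch.Tensor) -> torch.Tensor:
        # centroids: (batch, num_parts, 3) -> (batch, num_deform_params)
        return self.net(centroids.flatten(start_dim=1))

# Example: 8 coloured parts tracked by the multi-camera rig, driving 16
# deformation parameters on the virtual prop (both counts hypothetical).
model = CentroidDeformationNet(num_parts=8, num_deform_params=16)
params = model(torch.randn(1, 8, 3))
```

A plain fully-connected regressor plausibly fits the described setup because the multi-camera tracker already reduces each frame to a small, fixed-length vector of 3D points, so no convolutional stages are needed on this path; the single-camera VRProp-Net variant, by contrast, operates on raw unlabelled RGB images and therefore requires a convolutional front end.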
Original language: English
Title of host publication: ACM Symposium on Computer Animation
Publication status: Unpublished - 2019
