Dog Code: Human to Quadruped Embodiment using Shared Codebooks

Donal Egan, Alberto Jovane, Jan Szkaradek, George Fletcher, Darren Cosker, Rachel McDonnell

Research output: Chapter in a published conference proceeding

Abstract

Many VR animal embodiment systems suffer from poor animation fidelity, typically animating the animal avatars using inverse kinematics. We address this issue, presenting a novel deep-learning method, centred around a shared codebook, for mapping human motion to quadruped motion. Rather than trying to directly bridge the gap from human motion to quadruped motion, a task which has proven difficult, we first use a rule-based retargeter, relying on inverse and forward kinematics, to retarget human motions to an intermediate motion domain in which the motions share the same skeleton as the quadruped. We then use finite scalar quantization to construct a shared latent space, or codebook, between this intermediate domain and the quadruped motion domain. We do this by first pre-defining a finite number of discrete latent codes and then teaching these codes, using unsupervised deep learning, to represent semantically similar motions in the two domains. We incorporate our real-time human-to-quadruped motion mapping into a VR quadruped embodiment system. The output quadruped animations are natural and realistic, while also preserving the semantics of users' actions. Moreover, there is a strong synchrony between the input human motions and retargeted quadruped motions, an important factor for inducing a strong sense of VR embodiment.
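The abstract's codebook is built with finite scalar quantization (FSQ), in which each latent dimension is bounded and rounded to one of a small, fixed number of levels, so the codebook is defined implicitly by the product of the per-dimension level counts rather than learned as explicit embedding vectors. The sketch below illustrates that core quantization step only; it is not the authors' implementation, and the function name, level choices, and use of NumPy are illustrative assumptions.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Finite scalar quantization of a latent vector (illustrative sketch).

    Each dimension i of z is squashed to [-1, 1] with tanh, then rounded
    to one of levels[i] evenly spaced values. The implicit codebook size
    is the product of the entries of `levels`.
    """
    levels = np.asarray(levels, dtype=np.float64)
    half = (levels - 1.0) / 2.0          # e.g. 3 levels -> grid {-1, 0, 1}
    z_bounded = np.tanh(np.asarray(z, dtype=np.float64))
    # Round each bounded coordinate onto its per-dimension grid.
    return np.round(z_bounded * half) / half

# Example: a 2-D latent with 3 and 5 levels -> 15 implicit codes.
codes = fsq_quantize([0.0, 10.0], [3, 5])
```

In training, the rounding step is typically paired with a straight-through gradient estimator so the encoder can be optimised end-to-end; that detail is omitted here for brevity.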

Original language: English
Title of host publication: Proceedings, MIG 2024 - 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games
Editors: Stephen N. Spencer
Place of Publication: U.S.A.
Publisher: Association for Computing Machinery
Pages: 1-11
ISBN (Electronic): 9798400710902
Publication status: Published - 21 Nov 2024
Event: 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG 2024 - Arlington, United States
Duration: 21 Nov 2024 - 23 Nov 2024

Publication series

Name: Proceedings, MIG 2024 - 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games

Conference

Conference: 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG 2024
Country/Territory: United States
City: Arlington
Period: 21/11/24 - 23/11/24

Keywords

  • Deep-learning
  • Motion retargeting
  • Quadruped embodiment
  • VR embodiment

ASJC Scopus subject areas

  • Computer Science Applications
  • Human-Computer Interaction
  • Education

