Abstract

Virtual Reality (VR) holds great potential for psychomotor training, with existing applications using almost exclusively a ‘learning-by-doing’ active learning approach, despite the possible benefits of incorporating observational learning. We compared active learning (n=26) with different variations of observational learning in VR for a manual assembly task. For observational learning, we considered three levels of visual similarity between the demonstrator avatar and the user: dissimilar (n=25), minimally similar (n=26), or a self-avatar (n=25), as similarity has been shown to improve learning. Our results suggest observational learning can be effective in VR when combined with ‘hands-on’ practice and can lead to better far skill transfer to real-world contexts that differ from the training context. Furthermore, we found self-similarity in observational learning can be counterproductive when focusing on a manual task, and skills decay quickly without further training. We discuss these findings and derive design recommendations for future VR training.
Original language: English
Title of host publication: CHI 2024 - Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
Subtitle of host publication: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
Editors: Florian Floyd Mueller, Penny Kyburz, Julie R. Williamson, Corina Sas, Max L. Wilson, Phoebe Toups Dugas, Irina Shklovski
Place of Publication: New York, USA
Number of pages: 19
ISBN (Electronic): 9798400703300
DOIs:
Publication status: Published - 11 May 2024
Event: CHI '24: CHI Conference on Human Factors in Computing Systems - Honolulu, United States
Duration: 11 May 2024 - 16 May 2024

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: CHI '24: CHI Conference on Human Factors in Computing Systems
Country/Territory: United States
City: Honolulu
Period: 11/05/24 - 16/05/24

Funding

Isabel Fitton’s research is funded by the UKRI EPSRC Centre for Doctoral Training in Digital Entertainment (CDE), EP/L016540/1 and industrial partner PwC. This work was also supported and partly funded by the Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA 2.0; EP/T022523/1) at the University of Bath. We thank Dr Lee Moore for his help in designing the study methodology. We thank Jed Ionov-Flint, Norika Kozato, Natasha David, and Charlotte Silverton for their assistance in creating the avatars for this study. We also thank Alvaro Farratto Santos for his role in producing the 3D-printed puzzles.

Funders / Funder number
University of Bath
Centre for the Analysis of Motion, Entertainment Research and Applications
UKRI EPSRC: EP/L016540/1
CAMERA: EP/T022523/1

Fingerprint

Dive into the research topics of 'Watch this! Observational Learning in VR Promotes Better Far Transfer than Active Learning for a Fine Psychomotor Task'.
