Virtual Reality (VR) holds great potential for psychomotor training, yet existing applications use almost exclusively a 'learning-by-doing' active learning approach, despite the possible benefits of incorporating observational learning. We compared active learning (n=26) with different variations of observational learning in VR for a manual assembly task. For observational learning, we considered three levels of visual similarity between the demonstrator avatar and the user: dissimilar (n=25), minimally similar (n=26), or a self-avatar (n=25), as similarity has been shown to improve learning. Our results suggest observational learning can be effective in VR when combined with 'hands-on' practice and can lead to better far transfer of skills to real-world contexts that differ from the training context. Furthermore, we found that self-similarity in observational learning can be counterproductive when focusing on a manual task, and that skills decay quickly without further training. We discuss these findings and derive design recommendations for future VR training.
Original language: English
Title of host publication: CHI 2024 - Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
Subtitle of host publication: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
Editors: Florian Floyd Mueller, Penny Kyburz, Julie R. Williamson, Corina Sas, Max L. Wilson, Phoebe Toups Dugas, Irina Shklovski
Place of publication: New York, U.S.A.
Number of pages: 19
ISBN (Electronic): 9798400703300
Publication status: Published - 11 May 2024
Event: CHI '24: CHI Conference on Human Factors in Computing Systems - Honolulu, United States
Duration: 11 May 2024 - 16 May 2024

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference: CHI '24: CHI Conference on Human Factors in Computing Systems
Country/Territory: United States