FakeForward: Using Deepfake Technology for Feedforward Learning

Christopher Clarke, Jingnan Xu, Ye Zhu, Karan Dharamshi, Harry McGill, Stephen Black, Christof Lutteroth

Research output: Chapter in a published conference proceeding

9 Citations (SciVal)
298 Downloads (Pure)

Abstract

Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward – a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.
Original language: English
Title of host publication: CHI 2023 - Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
Subtitle of host publication: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
Place of publication: New York, U.S.A.
Pages: 1-17
Number of pages: 17
ISBN (Electronic): 9781450394215
Publication status: Published - 19 Apr 2023
Event: 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023 - Hamburg, Germany
Duration: 23 Apr 2023 - 28 Apr 2023

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023
Country/Territory: Germany
City: Hamburg
Period: 23/04/23 - 28/04/23

Bibliographical note

Funding Information:
This work was supported and partly funded by the Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA 2.0; EP/T022523/1) at the University of Bath.

Publisher Copyright:
© 2023 ACM.

Keywords

  • Deepfake
  • Feedforward
  • Fitness
  • Physical Exercise
  • Public Speaking
  • Skill Acquisition
  • Training
  • Videos

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
