Performance Driven Facial Animation with Blendshapes

  • Shridhar Ravikumar

Student thesis: Doctoral Thesis (PhD)

Abstract

In this thesis, we address some of the open challenges in the area of Performance Driven Facial Capture and Animation, specifically with the goal of improving the fidelity of capture results and making both the Modeling and Capture stages of the animation pipeline robust, inexpensive, automated and consumer-friendly. We present an overview of the facial animation process, and of Performance Driven Facial Animation in particular, covering the Modeling, Capture and Retargeting stages. We then review the existing literature in the area in detail, weighing the pros and cons of the approaches presented over the last few decades and the differences between them.

Our first contribution targets the Modeling stage of the pipeline: we automate the generation of actor-specific Blendshape Models from a single scan of the actor’s face, or alternatively from a few images of it, resulting in a pipeline that is automated and inexpensive while still capturing actor-specific nuances.

We next present our marker-based Capture pipeline, which improves upon traditional marker-based systems by incorporating additional features in the form of makeup patterns. These patterns are used to train a FACS classifier that is integrated with our Blendshape weight optimization in a hybrid fashion, and we show that this leads to improved results, especially in areas that are otherwise challenging to capture with markers alone.

We then turn to the markerless Capture pipeline and present our approach to tracking an actor’s face with just a monocular RGB camera. We show that our method achieves realistic results despite the missing information inherent in monocular input by making use of static and dynamic prior information gleaned from existing animations produced by accurate 3D systems. We quantitatively evaluate our results against an approach that uses monocular input without our spatial constraints, and show that ours are closer to the ground-truth geometry.

Finally, we present our results and conclusions and discuss future directions of research.
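The core operation named above, Blendshape weight optimization, can be illustrated with a short sketch. In the standard blendshape representation the animated face is a linear combination b0 + Σ_k w_k (b_k − b0) of a neutral mesh b0 and target shapes b_k, so fitting reduces to a box-constrained least-squares solve for the weights w. The snippet below is a minimal illustration using NumPy/SciPy; the function and variable names are placeholders, not code from the thesis.

    import numpy as np
    from scipy.optimize import lsq_linear

    def solve_blendshape_weights(neutral, targets, observed):
        """Fit blendshape weights in [0, 1] by least squares.

        neutral:  (V, 3) rest-pose vertices
        targets:  (K, V, 3) blendshape target shapes
        observed: (V, 3) tracked vertex/marker positions
        """
        K = targets.shape[0]
        # Delta form: observed - neutral ~= sum_k w_k * (targets[k] - neutral)
        A = (targets - neutral[None]).reshape(K, -1).T  # (3V, K) basis matrix
        b = (observed - neutral).ravel()                # (3V,) residual
        return lsq_linear(A, b, bounds=(0.0, 1.0)).x    # box-constrained solve

    # Toy usage: recover known weights from a synthetic observation.
    rng = np.random.default_rng(0)
    b0 = rng.standard_normal((100, 3))
    B = b0[None] + 0.1 * rng.standard_normal((5, 100, 3))
    w_true = np.array([0.2, 0.0, 0.7, 0.1, 0.5])
    obs = b0 + np.einsum('k,kvd->vd', w_true, B - b0[None])
    print(solve_blendshape_weights(b0, B, obs))  # approximately w_true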
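For the monocular stage, where the input underdetermines the geometry, the abstract describes regularizing the fit with static and dynamic priors. One plausible realization, a sketch under assumed Tikhonov-style penalties rather than the thesis’s exact formulation, augments the same least-squares system with rows that pull the weights toward a static prior and toward the previous frame’s solution:

    import numpy as np
    from scipy.optimize import lsq_linear

    def solve_weights_with_priors(A, b, w_static, w_prev,
                                  lam_s=0.1, lam_t=0.5):
        """Prior-regularized weight solve for under-constrained input.

        A: (M, K) blendshape basis, b: (M,) observed residual,
        w_static / w_prev: (K,) static prior and previous-frame weights.
        lam_s and lam_t are illustrative penalty strengths, not thesis values.
        """
        K = A.shape[1]
        A_aug = np.vstack([A,
                           np.sqrt(lam_s) * np.eye(K),   # static prior rows
                           np.sqrt(lam_t) * np.eye(K)])  # temporal rows
        b_aug = np.concatenate([b,
                                np.sqrt(lam_s) * w_static,
                                np.sqrt(lam_t) * w_prev])
        return lsq_linear(A_aug, b_aug, bounds=(0.0, 1.0)).x

Minimizing ||Aw − b||² + λ_s||w − w_static||² + λ_t||w − w_prev||² in this stacked form keeps the solver identical to the unregularized case while damping degrees of freedom that the 2D data alone cannot disambiguate.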
Date of Award: 26 Mar 2018
Original language: English
Awarding Institution
  • University of Bath
Supervisor: Darren Cosker

Keywords

  • Performance Driven Facial Animation
  • Blendshapes
  • Facial Animation
  • Face
  • Animation
