Towards automatic performance-driven animation between multiple types of facial model

Darren Cosker, R. Borkett, David Marshall, Paul L. Rosin

Research output: Contribution to journal › Article › peer-review



The authors describe a method for re-mapping animation parameters between multiple types of facial model for performance-driven animation. A facial performance is analysed into a set of facial action parameter trajectories using a modified appearance model whose modes of variation encode specific, pre-defined facial actions. These parameters can then drive other modified appearance models or 3D morph-target-based facial models, so the animation parameters extracted from a video performance can be re-used to animate multiple types of facial model. The authors demonstrate the effectiveness of the proposed approach by measuring its ability to extract action parameters from performances and by showing frames from example animations, and they demonstrate its potential use in fully automatic performance-driven animation applications.
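The core idea of re-using one set of action parameter trajectories to drive differently parameterised models can be sketched as a range-normalisation between corresponding actions. This is only an illustrative assumption, not the authors' actual formulation (which works through appearance-model modes of variation); the function name, parameter ranges, and example values below are all hypothetical.

```python
import numpy as np

def remap_action_params(params, src_range, tgt_range):
    """Re-map facial action parameters between two models (illustrative sketch).

    params    : (n_frames, n_actions) action trajectories from the source model
    src_range : (n_actions, 2) per-action [min, max] on the source model
    tgt_range : (n_actions, 2) per-action [min, max] on the target model
    """
    params = np.asarray(params, dtype=float)
    src_min, src_max = src_range[:, 0], src_range[:, 1]
    tgt_min, tgt_max = tgt_range[:, 0], tgt_range[:, 1]
    # Normalise each action to [0, 1] on the source, then re-scale to the target.
    norm = (params - src_min) / (src_max - src_min)
    return tgt_min + norm * (tgt_max - tgt_min)

# Hypothetical example: two actions (say jaw-open and brow-raise) whose
# parameters live on different scales in the two models.
src = np.array([[0.0, 1.0], [-1.0, 1.0]])   # source appearance-model limits
tgt = np.array([[0.0, 100.0], [0.0, 1.0]])  # target morph-target weights
traj = np.array([[0.5, 0.0], [1.0, 1.0]])   # two frames of a performance
print(remap_action_params(traj, src, tgt))  # → [[ 50.   0.5] [100.   1. ]]
```

The same trajectory can thus drive a morph-target rig whose weights occupy a different numeric range, which is the spirit of re-using one analysed performance across several model types.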
Original language: English
Pages (from-to): 129-141
Number of pages: 13
Journal: IET Computer Vision
Issue number: 3
Publication status: Published - Sept 2008


