Automatic audio driven animation of non-verbal actions

D. Cosker, C. Holt, D. Mason, G. Whatling, D. Marshall, P.L. Rosin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

While speech-driven animation for lip-synching and facial expression synthesis has previously received much attention, there is no previous work on generating non-verbal actions such as laughing and crying automatically from an audio signal. In this article, initial results on a system designed to address this issue are presented. 3D facial data was recorded for a participant performing different actions (laughing, crying, yawning and sneezing) using a Qualysis (Sweden) optical motion-capture system while simultaneously recording audio data. 30 retro-reflective markers were placed on the participant's face to capture movement. Using this data, an analysis-and-synthesis machine was then trained, consisting of a dual-input Hidden Markov Model (HMM) and a trellis search algorithm that converts HMM visual states and new input audio into new 3D motion-capture data.
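To make the decoding step concrete, the sketch below shows one plausible reading of the approach: a dual-input HMM whose states carry both audio and visual (3D marker) statistics, with a Viterbi-style trellis search decoding new audio into a state path whose per-state mean marker configurations form the output animation. This is an illustrative assumption, not the authors' implementation; the Gaussian emission model, diagonal covariances, feature dimensions, and all variable names are hypothetical.

```python
import numpy as np

def viterbi_decode(log_pi, log_A, log_likelihood):
    """Trellis (Viterbi) search for the most likely HMM state path.

    log_pi:          (S,)    log initial-state probabilities
    log_A:           (S, S)  log transition matrix
    log_likelihood:  (T, S)  per-frame log p(audio_t | state)
    Returns the (T,) best state sequence.
    """
    T, S = log_likelihood.shape
    delta = log_pi + log_likelihood[0]
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # (S, S): prev -> next
        backptr[t] = np.argmax(scores, axis=0)     # best predecessor per state
        delta = scores[backptr[t], np.arange(S)] + log_likelihood[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):                 # backtrace
        path[t] = backptr[t + 1, path[t + 1]]
    return path

def audio_to_markers(audio_feats, model):
    """Map audio features (T, D) to 3D marker frames (T, 90).

    `model` bundles per-state diagonal-Gaussian audio statistics and a
    per-state mean visual vector (30 markers x 3 coords = 90 values).
    """
    mu, var = model["audio_mean"], model["audio_var"]   # each (S, D)
    # Diagonal-Gaussian log-likelihood of every frame under every state.
    diff = audio_feats[:, None, :] - mu[None, :, :]     # (T, S, D)
    ll = -0.5 * np.sum(diff**2 / var + np.log(2 * np.pi * var), axis=2)
    states = viterbi_decode(model["log_pi"], model["log_A"], ll)
    return model["visual_mean"][states]                 # (T, 90)
```

In this reading, smoothing or interpolating between consecutive decoded states would be needed to avoid discontinuities in the synthesised marker trajectories; the paper's trellis search over HMM visual states plausibly serves that role.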
Original language: English
Title of host publication: IET 4th European Conference on Visual Media Production (CVMP 2007)
Publisher: IET
Pages: 16
DOIs: https://doi.org/10.1049/cp:20070048
Publication status: Published - 2007
Event: IET 4th European Conference on Visual Media Production - London, United Kingdom
Duration: 27 Nov 2007 - 28 Nov 2007

Conference

Conference: IET 4th European Conference on Visual Media Production
Country: United Kingdom
City: London
Period: 27/11/07 - 28/11/07


Cite this

Cosker, D., Holt, C., Mason, D., Whatling, G., Marshall, D., & Rosin, P. L. (2007). Automatic audio driven animation of non-verbal actions. In IET 4th European Conference on Visual Media Production (CVMP 2007) (p. 16). IET. https://doi.org/10.1049/cp:20070048