Abstract
The ability to accurately capture and express emotions is a critical aspect of creating believable characters in video games and other forms of entertainment. Traditionally, such animation has been created by hand or through performance capture, both of which are costly in time and labor. More recently, audio-driven models have seen success; however, these often lack expressiveness in regions of the face not correlated with the audio signal. In this paper, we present a novel approach to facial animation that takes existing animations and allows their style characteristics to be modified. Our method preserves the lip-sync of the original animation through a novel viseme-preserving loss. We perform quantitative and qualitative experiments to demonstrate the effectiveness of our approach.
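The abstract does not specify the form of the viseme-preserving loss; one plausible reading is a reconstruction penalty restricted to lip-region geometry, so that style edits may alter the rest of the face while mouth shapes stay locked to the audio. The PyTorch sketch below illustrates that idea only; `viseme_preserving_loss`, `lip_vertex_idx`, and the tensor shapes are hypothetical assumptions, not the authors' formulation.

```python
# Illustrative sketch only: the paper does not publish its loss here,
# so every name below (lip_vertex_idx, shapes) is a hypothetical stand-in.
import torch

def viseme_preserving_loss(original: torch.Tensor,
                           stylized: torch.Tensor,
                           lip_vertex_idx: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of lip-region geometry between the input
    animation and its stylized output, leaving the rest of the face
    free to change.

    original, stylized: (frames, vertices, 3) mesh animations.
    lip_vertex_idx:     indices of vertices assumed to carry viseme shape.
    """
    lips_in = original[:, lip_vertex_idx, :]
    lips_out = stylized[:, lip_vertex_idx, :]
    # Mean squared error restricted to the lip region keeps mouth shapes
    # (visemes) aligned with the audio while style edits act elsewhere.
    return torch.mean((lips_in - lips_out) ** 2)

# Usage with random stand-in data:
frames, verts = 120, 5000
orig = torch.randn(frames, verts, 3)
styl = orig + 0.01 * torch.randn(frames, verts, 3)
lip_idx = torch.arange(400, 600)  # hypothetical lip-region vertex indices
print(viseme_preserving_loss(orig, styl, lip_idx).item())
```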
| Original language | English |
|---|---|
| Title of host publication | Eurographics 2024 |
| Publisher | Eurographics Association |
| Number of pages | 4 |
| Volume | 43 |
| Edition | 2 |
| DOIs | |
| Publication status | Published - 23 Apr 2024 |