Performance-driven cutout character animation. Actors perform customized expressions in (a), e.g., “disdainful” (top) and “daydreaming” (bottom), to animate the expressions of various cutout characters in (b). Note the large inter-person expression variations even within the same expression category.
Performance-driven character animation enables users to create expressive results by performing the desired motion of the character with their face and/or body. However, for cutout animations, where continuous motion is combined with discrete artwork replacements, supporting a performance-driven workflow has some unique requirements. To trigger the appropriate artwork replacements, the system must reliably detect a wide range of customized facial expressions that are challenging for existing recognition methods, which focus on a few canonical expressions (e.g., angry, disgusted, scared, happy, sad, and surprised). Moreover, real usage scenarios require the system to work in realtime with minimal training.
In this paper, we propose a novel customized expression recognition technique that meets all of these requirements. We first construct a set of handcrafted features that combine geometric features derived from facial landmarks with patch-based appearance features obtained through group sparsity-based facial component learning. To improve discrimination and generalization, these handcrafted features are integrated into a custom-designed Deep Convolutional Neural Network (CNN) structure trained on publicly available facial expression datasets. The combined features are fed to an online ensemble of SVMs designed for the few-training-sample setting, which runs in realtime. To improve temporal coherence, we also apply a Hidden Markov Model (HMM) to smooth the recognition results. Our system achieves state-of-the-art performance on canonical expression datasets and promising results on our collected dataset of customized expressions.
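The final smoothing step can be illustrated with a minimal sketch. The following is not the paper's implementation; it assumes the per-frame classifier emits class probabilities and uses a generic Viterbi decoding with a self-transition bias (the `stay_prob` parameter is a hypothetical knob) to suppress single-frame flickers in the recognized expression sequence:

```python
import numpy as np

def hmm_smooth(frame_probs, stay_prob=0.9):
    """Viterbi smoothing of per-frame expression probabilities.

    frame_probs: (T, K) array of per-frame class probabilities
                 (e.g., SVM ensemble outputs calibrated to [0, 1]).
    stay_prob:   assumed probability of keeping the same expression
                 label between consecutive frames.
    Returns the most likely label sequence of length T.
    """
    T, K = frame_probs.shape
    # Transition matrix biased toward staying in the same state.
    trans = np.full((K, K), (1.0 - stay_prob) / (K - 1))
    np.fill_diagonal(trans, stay_prob)

    log_probs = np.log(frame_probs + 1e-12)
    log_trans = np.log(trans)

    # Viterbi dynamic programming in log space.
    dp = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    dp[0] = log_probs[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans  # (K, K)
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_probs[t]

    # Backtrack the best path.
    path = np.zeros(T, dtype=int)
    path[-1] = dp[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```

For example, a sequence that is confidently one expression except for a single ambiguous frame is decoded as a constant label, whereas per-frame argmax would flip for that frame.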