Animated Talking Head with Personalized 3D Head Model
Abstract
A natural human-computer interface requires the integration of realistic audio and visual information for both perception and display. An example of such an interface is an animated talking head displayed on the computer screen as a human-like computer agent. This system converts text to acoustic speech with synchronized animation of mouth movements. The talking head is based on a generic 3D human head model; to improve realism, however, natural-looking personalized models are necessary. In this paper, we report a semi-automatic method for adapting a generic head model to 3D range data of a human head obtained from a 3D laser range scanner. The personalized model is incorporated into the talking head system. With texture mapping, the personalized model offers a more natural and realistic appearance than the generic model, and models created with the proposed method compare favorably with generic models.