Hidden Markov Model Inversion for Audio-to-Visual Conversion in an MPEG-4 Facial Animation System
Authors: Kyoungho Choi, Ying Luo, and Jenq-Neng Hwang
Affiliation: Information Processing Lab., Department of Electrical Engineering, University of Washington, Box #352500, Seattle, WA 98195-2500, USA
Abstract: The MPEG-4 standard allows the composition of natural or synthetic video with facial animation. Based on this standard, an animated face can be inserted into natural or synthetic video to create new virtual working environments such as virtual meetings or virtual collaborative environments. For these applications, audio-to-visual conversion techniques can be used to generate a talking face that is synchronized with the voice. In this paper, we address the audio-to-visual conversion problem by introducing a novel Hidden Markov Model Inversion (HMMI) method. When training audio-visual HMMs, the model parameters λ_av are chosen to optimize a criterion such as maximum likelihood. When inverting audio-visual HMMs, the visual parameters that optimize such a criterion are found from the given speech and the model parameters λ_av. With the proposed HMMI technique, an animated talking face can be synchronized with audio and driven realistically. A virtual conference system named VIRTUAL-FACE, which combines the HMMI technique with the MPEG-4 standard, is introduced to show the role of HMMI in MPEG-4 facial animation applications.
Keywords: HMMI; audio-to-visual conversion; MPEG-4; facial animation
This article is indexed in SpringerLink and other databases.
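To make the inversion idea from the abstract concrete, the following is a minimal sketch, assuming a trained HMM with diagonal-covariance Gaussian emissions over concatenated audio-visual feature vectors and an EM-style alternation between state posteriors and the visual frames that maximize the expected log-likelihood. It is not the authors' exact formulation; the function and field names (hmm_invert, mu_visual, etc.) and the per-frame precision-weighted update are illustrative assumptions only.

import numpy as np

def logsumexp(a, axis, keepdims=False):
    # Numerically stable log-sum-exp along the given axis.
    m = np.max(a, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)

def log_gauss_diag(x, mean, var):
    # Per-frame log-density of a diagonal-covariance Gaussian.
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def state_posteriors(log_b, log_pi, log_A):
    # Forward-backward in the log domain; returns per-frame state posteriors gamma.
    T, N = log_b.shape
    log_alpha = np.zeros((T, N))
    log_alpha[0] = log_pi + log_b[0]
    for t in range(1, T):
        log_alpha[t] = log_b[t] + logsumexp(log_alpha[t - 1][:, None] + log_A, axis=0)
    log_beta = np.zeros((T, N))
    for t in range(T - 2, -1, -1):
        log_beta[t] = logsumexp(log_A + (log_b[t + 1] + log_beta[t + 1])[None, :], axis=1)
    log_gamma = log_alpha + log_beta
    log_gamma -= logsumexp(log_gamma, axis=1, keepdims=True)
    return np.exp(log_gamma)

def hmm_invert(audio, model, n_iter=10):
    # Estimate a visual trajectory from audio by alternating between
    # (1) state posteriors under the joint audio-visual emission densities and
    # (2) the visual frames that maximize the expected log-likelihood.
    mu_a, var_a = model["mu_audio"], model["var_audio"]      # (N, Da)
    mu_v, var_v = model["mu_visual"], model["var_visual"]    # (N, Dv)
    log_pi, log_A = np.log(model["pi"]), np.log(model["A"])
    T = audio.shape[0]
    visual = np.tile(mu_v.mean(axis=0), (T, 1))              # neutral initial guess
    for _ in range(n_iter):
        log_b = (log_gauss_diag(audio[:, None, :], mu_a[None], var_a[None])
                 + log_gauss_diag(visual[:, None, :], mu_v[None], var_v[None]))
        gamma = state_posteriors(log_b, log_pi, log_A)        # (T, N)
        prec = 1.0 / var_v
        visual = (gamma @ (prec * mu_v)) / (gamma @ prec)     # precision-weighted state means
    return visual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, Da, Dv, T = 3, 13, 6, 50          # states, audio dim (e.g. MFCCs), visual dim, frames
    model = {
        "pi": np.full(N, 1.0 / N),
        "A": np.full((N, N), 1.0 / N),
        "mu_audio": rng.normal(size=(N, Da)), "var_audio": np.ones((N, Da)),
        "mu_visual": rng.normal(size=(N, Dv)), "var_visual": np.ones((N, Dv)),
    }
    audio = rng.normal(size=(T, Da))
    print(hmm_invert(audio, model).shape)  # (50, 6): one visual vector per audio frame

Under these illustrative assumptions, each iteration re-scores the HMM states using the current visual estimate and then replaces every frame's visual vector with a precision-weighted average of the states' visual means; in an MPEG-4 pipeline the recovered trajectory would then drive facial animation parameters for the talking face.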