Similar Literature
20 similar documents retrieved (search time: 29 ms)
1.
The object-based coding format introduced by MPEG-4 treats the human face as a special object, laying the foundation for research on face modeling and animation. Based on an analysis of the MPEG-4 facial animation standard, this paper presents design ideas for an MPEG-4-based facial animation system and the key problems that must be solved.

2.
A Machine-Learning-Based Method for Speech-Driven Facial Animation (cited 19 times: 0 self-citations, 19 by others)
Synchronizing speech with lip motion and facial expression is one of the difficulties in facial animation. This work combines clustering and machine learning to learn the synchronization relationship between the speech signal and lip/facial motion, and applies it in a speech-driven facial animation system based on the MPEG-4 standard. On top of a large-scale synchronized audio-visual database, unsupervised clustering is used to discover basic patterns that effectively characterize facial motion, and a neural network is trained to realize a direct mapping from prosody-bearing speech features to these basic facial motion patterns. This not only sidesteps the limited robustness of speech recognition, but also allows the learned results to drive the face mesh directly. Finally, quantitative and qualitative evaluation methods for the speech-driven facial animation system are given. Experimental results show that machine-learning-based speech-driven facial animation effectively solves the difficult problem of audio-visual synchronization and enhances the realism and lifelikeness of the animation; moreover, because the learned results are based on MPEG-4, they are independent of the face model and can be used to drive a variety of face models, including real video, 2D cartoon characters, and 3D virtual faces.
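A minimal sketch of the pipeline this abstract describes (unsupervised clustering of facial motion, then a network mapping speech features to the discovered patterns), assuming NumPy and scikit-learn; the feature dimensions, cluster count, network size, and the random placeholder corpus are illustrative assumptions, not the authors' configuration.

    # Sketch: learn basic facial-motion patterns by unsupervised clustering,
    # then train a network to map prosodic speech features to those patterns.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    # Placeholder synchronized corpus: one row per video frame.
    rng = np.random.default_rng(0)
    speech_feats = rng.normal(size=(2000, 39))   # stand-in prosodic/spectral features
    face_motion  = rng.normal(size=(2000, 68))   # stand-in MPEG-4 FAP values per frame

    # 1) Discover basic facial-motion patterns (cluster centers) without labels.
    kmeans = KMeans(n_clusters=16, random_state=0).fit(face_motion)
    patterns = kmeans.cluster_centers_           # reusable motion modes, (16, 68)

    # 2) Learn a direct mapping from speech features to soft pattern weights.
    targets = np.eye(len(patterns))[kmeans.labels_]
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500).fit(speech_feats, targets)

    # 3) At synthesis time, blend patterns by predicted weights -> one FAP frame
    #    that can drive any MPEG-4-compliant face model.
    def speech_to_fap(frame_feats):
        w = np.clip(net.predict(frame_feats[None]), 0, None)
        w = w / (w.sum() + 1e-8)
        return (w @ patterns).ravel()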

3.
This paper proposes a new 3D facial animation model that is based on a 3D morphable face model and is compatible with MPEG-4. Alignment between the prototype 3D faces is established by uniform mesh resampling, and the 3D facial animation rules defined in MPEG-4 are applied to drive the 3D model and automatically generate realistic facial animation. Given a single face image, the model automatically reconstructs a realistic 3D face and, driven by FAP parameters, automatically generates facial animation.
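For context, a toy illustration of how MPEG-4 FAP values are commonly turned into vertex displacements via FAPU scaling; the tables and mesh below are placeholders, not the paper's model, and the neighborhood propagation a real system would use is omitted.

    # Toy FAP-driven deformation: each FAP displaces its feature point along
    # one axis by an amount expressed in face-specific FAPU units.
    import numpy as np

    vertices = np.zeros((5, 3))                      # placeholder neutral face mesh
    # Placeholder FAPU sizes for this face (real FAPUs are fractions of measured
    # face distances such as the mouth-nose separation, as defined by MPEG-4).
    fapu = {"MNS": 0.0006, "MW": 0.0009}

    # FAP id -> (vertex index, axis, FAPU name): a two-entry stand-in for the table.
    fap_table = {3: (0, 1, "MNS"),    # open_jaw: jaw vertex moves vertically
                 51: (1, 1, "MNS")}   # lower_t_midlip

    def apply_faps(verts, fap_values):
        out = verts.copy()
        for fap_id, amplitude in fap_values.items():
            vidx, axis, unit = fap_table[fap_id]
            out[vidx, axis] += amplitude * fapu[unit]   # displacement = FAP value * FAPU
        return out

    deformed = apply_faps(vertices, {3: 512, 51: -120})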

4.
5.
1. Introduction. Face modeling and animation is one of the most challenging topics in computer graphics. This is because: first, the geometry of the human face is extremely complex; its surface not only carries countless fine wrinkles but also exhibits subtle variations in color and texture, so building an accurate face model and rendering a realistic face is very difficult; second, facial motion is the combined result of bone, muscle, subcutaneous tissue, and skin, and its mechanics are very complex, so generating realistic facial animation is very difficult; in addition, we humans are innately endowed with an ability to recognize and …

6.
Face modeling and animation is one of the most challenging topics in computer graphics, with wide applications in the computer animation industry, games, teleconferencing, agents and avatars, and many other fields. Based on the MPEG-4 definition of facial feature points, a generic face model and two photographs of a specific person (frontal and profile) are used with Dirichlet free-form deformation to deform the generic face model into the specific face model; the frontal and profile photographs are then used with a texture-blending method to generate a realistic face. The realistic-face technique proposed in the paper has the advantages of being simple, computationally fast, and highly realistic.

7.
王振 《电脑与信息技术》2010,18(5):11-12,37
Depicting facial wrinkles is one of the important factors in improving the realism of facial animation. This article proposes a keyframe-based wrinkle animation method that describes each wrinkle-animation keyframe with a height map, a normal map, and MPEG-4 facial animation parameters, and generates the in-between frames by interpolating the height maps and normal maps. The proposed method places only modest demands on the complexity of the face mesh, and the synthesized wrinkle animation is both highly realistic and real-time.
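A minimal sketch of the in-between-frame step described above, interpolating a height map and a normal map between two keyframes; NumPy is assumed, the maps are random placeholders, and the paper's FAP-driven keyframe selection is not reproduced.

    # Sketch: in-between wrinkle frames by linearly interpolating the height map
    # and the normal map between two keyframes (normals re-normalized after lerp).
    import numpy as np

    def lerp_wrinkle_frame(key_a, key_b, t):
        """key_a/key_b: dicts with 'height' (H,W) and 'normal' (H,W,3) maps; 0 <= t <= 1."""
        height = (1.0 - t) * key_a["height"] + t * key_b["height"]
        normal = (1.0 - t) * key_a["normal"] + t * key_b["normal"]
        normal /= np.linalg.norm(normal, axis=-1, keepdims=True) + 1e-8
        return {"height": height, "normal": normal}

    # Placeholder keyframes (e.g. neutral vs. frowning brow) just to show the call.
    H, W = 256, 256
    neutral = {"height": np.zeros((H, W)),
               "normal": np.tile([0.0, 0.0, 1.0], (H, W, 1))}
    frown   = {"height": np.random.rand(H, W) * 0.01,
               "normal": np.tile([0.0, 0.1, 0.99], (H, W, 1))}
    mid_frame = lerp_wrinkle_frame(neutral, frown, 0.5)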

8.
Research on MPEG-4-Based Facial Expression Image Warping (cited 1 time: 0 self-citations, 1 by others)
To generate natural and realistic facial expressions in real time, a facial expression image warping method based on the MPEG-4 facial animation framework is proposed. The method first extracts 88 feature points from a face photograph with a face alignment tool; on this basis, the standard face mesh is calibrated and deformed to produce a triangular mesh for the specific face; the facial animation parameters (FAPs) then move the corresponding key facial feature points and their nearby associated feature points, while the topology of the face triangle mesh is kept unchanged under the combined action of multiple FAPs; finally, facial texture is filled into all the deformed triangular regions by affine transformation, yielding the facial expression image defined by the FAPs. The input of the method is a neutral face photograph and a set of facial animation parameters, and the output is the corresponding facial expression image. To synthesize subtle expression movements and a virtual talking head, an algorithm for generating eye-gaze expression movements and inner-mouth texture detail is also designed. A subjective evaluation based on the 5-point mean opinion score (MOS) shows that the expression images generated by this warping method score 3.67 for naturalness. Experiments on virtual talking-head synthesis show that the method has good real-time performance, with an average processing speed of 66.67 fps on an ordinary PC, making it suitable for real-time video processing and facial animation generation.
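A minimal sketch of the final texture-filling step, warping the texture of one triangle into its deformed position with an affine transform; it assumes opencv-python, uses a synthetic placeholder image instead of a face photograph, and leaves out the 88-point alignment and FAP propagation stages.

    # Sketch: fill texture into a deformed triangle by an affine warp (OpenCV).
    import cv2
    import numpy as np

    def warp_triangle(src_img, dst_img, tri_src, tri_dst):
        """Copy the texture covered by tri_src in src_img onto tri_dst in dst_img."""
        tri_src = np.float32(tri_src)
        tri_dst = np.float32(tri_dst)
        x1, y1, w1, h1 = cv2.boundingRect(tri_src)
        x2, y2, w2, h2 = cv2.boundingRect(tri_dst)
        src_crop = src_img[y1:y1 + h1, x1:x1 + w1]
        t1 = np.float32(tri_src - [x1, y1])
        t2 = np.float32(tri_dst - [x2, y2])
        M = cv2.getAffineTransform(t1, t2)              # 2x3 affine from 3 point pairs
        warped = cv2.warpAffine(src_crop, M, (w2, h2), flags=cv2.INTER_LINEAR,
                                borderMode=cv2.BORDER_REFLECT_101)
        mask = np.zeros((h2, w2), np.uint8)
        cv2.fillConvexPoly(mask, np.int32(t2), 1)       # rasterize destination triangle
        roi = dst_img[y2:y2 + h2, x2:x2 + w2]
        roi[mask == 1] = warped[mask == 1]

    # Synthetic placeholder image standing in for the neutral face photograph.
    neutral = np.full((120, 120, 3), 200, np.uint8)
    output = neutral.copy()
    warp_triangle(neutral, output,
                  [(10, 10), (60, 12), (35, 55)],       # triangle in the neutral mesh
                  [(10, 10), (62, 14), (33, 60)])       # same triangle after FAP displacement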

9.
Igor S. Pandzic   《Graphical Models》2003,65(6):385-404
We propose a method for automatically copying facial motion from one 3D face model to another, while preserving the compliance of the motion to the MPEG-4 Face and Body Animation (FBA) standard. Despite the enormous progress in the field of Facial Animation, producing a new animatable face from scratch is still a tremendous task for an artist. Although many methods exist to animate a face automatically based on procedural methods, these methods still need to be initialized by defining facial regions or similar, and they lack flexibility because the artist can only obtain the facial motion that a particular algorithm offers. Therefore a very common approach is interpolation between key facial expressions, usually called morph targets, containing either speech elements (visemes) or emotional expressions. Following the same approach, the MPEG-4 Facial Animation specification offers a method for interpolation of facial motion from key positions, called Facial Animation Tables, which are essentially morph targets corresponding to all possible motions specified in MPEG-4. The problem with this approach is that the artist needs to create a new set of morph targets for each new face model. In the case of MPEG-4 there are 86 morph targets, which is a lot of work to create manually. Our method solves this problem by cloning the morph targets, i.e. by automatically copying the motion of vertices, as well as geometry transforms, from the source face to the target face while maintaining the regional correspondences and the correct scale of motion. It requires the user only to identify a subset of the MPEG-4 Feature Points in the source and target faces. The scale of the movement is normalized with respect to MPEG-4 normalization units (FAPUs), meaning that the MPEG-4 FBA compliance of the copied motion is preserved. Our method is therefore suitable not only for cloning of free facial expressions, but also of MPEG-4 compatible facial motion, in particular the Facial Animation Tables. We believe that Facial Motion Cloning offers dramatic time savings to artists producing morph targets for facial animation or MPEG-4 Facial Animation Tables.
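A heavily simplified sketch of the core idea, copying per-vertex morph-target motion from a source face to a target face and rescaling it by a FAPU ratio; it uses a nearest-neighbour correspondence as a stand-in for the paper's regional correspondences and assumes NumPy and SciPy.

    # Sketch of motion cloning: copy per-vertex morph-target displacements from a
    # source face to a target face via nearest-neighbour correspondence, rescaling
    # by the ratio of the two faces' FAPUs (regional correspondences and geometry
    # transforms from the paper are omitted).
    import numpy as np
    from scipy.spatial import cKDTree

    def clone_morph_target(src_neutral, src_target, dst_neutral, fapu_src, fapu_dst):
        """All *_neutral/*_target are (N,3)/(M,3) vertex arrays; fapu_* are scalars."""
        displacement = src_target - src_neutral        # source morph-target motion
        tree = cKDTree(src_neutral)
        _, idx = tree.query(dst_neutral)               # nearest source vertex per target vertex
        scale = fapu_dst / fapu_src                    # normalize motion to the target face size
        return dst_neutral + displacement[idx] * scale

    # Placeholder meshes just to show the call signature.
    rng = np.random.default_rng(1)
    src_neutral = rng.random((500, 3))
    src_smile   = src_neutral + rng.normal(0, 0.01, (500, 3))
    dst_neutral = rng.random((400, 3)) * 1.2
    dst_smile   = clone_morph_target(src_neutral, src_smile, dst_neutral, 0.05, 0.06)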

10.
MPEG-4 body animation parameters (BAP) are used for animation of MPEG-4 compliant virtual human-like characters. Distributed virtual reality applications and networked games on mobile computers require access to locally stored or streamed compressed BAP data. Existing MPEG-4 BAP compression techniques are inefficient for streaming, or storing, BAP data on mobile computers, because: 1) MPEG-4 compressed BAP data entails a significant number of CPU cycles, hence significant, unacceptable power consumption, for the purpose of decompression; 2) the lossy MPEG-4 technique of frame dropping to reduce network throughput during streaming leads to unacceptable animation degradation; and 3) lossy MPEG-4 compression does not exploit structural information in the virtual human model. In this article, we propose two novel algorithms for lossy compression of BAP data, termed BAP-Indexing and BAP-Sparsing. We demonstrate how an efficient combination of the two algorithms results in a lower network bandwidth requirement and reduced power for data decompression at the client end when compared to MPEG-4 compression. The algorithm exploits the structural information in the virtual human model, thus maintaining visually acceptable quality of the resulting animation upon decompression. Consequently, the hybrid algorithm for BAP data compression is ideal for streaming of motion animation data to power- and network-constrained mobile computers.
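The published BAP-Indexing and BAP-Sparsing algorithms are not specified in this abstract, so the following is only an illustrative sketch of the two named ideas, per-joint indexed quantization plus omission of near-static joints, using NumPy and placeholder data.

    # Illustrative sketch only -- not the authors' algorithms: indexed quantization
    # ("indexing") plus dropping near-static joints ("sparsing") for a BAP stream.
    import numpy as np

    def compress_bap(frames, codebook_bits=6, sparse_eps=1e-3):
        """frames: (T, J) BAP values per frame and joint. Returns a compact record."""
        lo, hi = frames.min(axis=0), frames.max(axis=0)
        levels = (1 << codebook_bits) - 1
        # Indexing: store small integer indices into a per-joint uniform codebook.
        idx = np.round((frames - lo) / np.maximum(hi - lo, 1e-9) * levels).astype(np.uint8)
        # Sparsing: joints whose motion range is negligible are stored once, not per frame.
        active = (hi - lo) > sparse_eps
        return {"lo": lo, "hi": hi, "active": active, "idx": idx[:, active]}

    def decompress_bap(rec, codebook_bits=6):
        levels = (1 << codebook_bits) - 1
        T = rec["idx"].shape[0]
        out = np.tile(rec["lo"], (T, 1))               # static joints stay at their base value
        span = rec["hi"][rec["active"]] - rec["lo"][rec["active"]]
        out[:, rec["active"]] = rec["lo"][rec["active"]] + rec["idx"] / levels * span
        return out

    frames = np.random.default_rng(2).normal(size=(100, 186)) * 0.5  # placeholder stream; BAP count illustrative
    approx = decompress_bap(compress_bap(frames))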

11.
Three-dimensional (3D) cartoon facial animation is one step further than the challenging 3D caricaturing which generates 3D still caricatures only. In this paper, a 3D cartoon facial animation system is developed for a subject given only a single frontal face image of a neutral expression. The system is composed of three steps consisting of 3D cartoon face exaggeration, texture processing, and 3D cartoon facial animation. By following caricaturing rules of artists, instead of mathematical formulations, 3D cartoon face exaggeration is accomplished at both global and local levels. As a result, the final exaggeration is capable of depicting the characteristics of an input face while achieving artistic deformations. In the texture processing step, texture coordinates of the vertices of the cartoon face model are obtained by mapping the parameterized grid of the standard face model to a cartoon face template and aligning the input face to the face template. Finally, 3D cartoon facial animation is implemented in the MPEG-4 animation framework. In order to avoid time-consuming construction of a face animation table, we propose to utilize the tables of existing models through model mapping. Experimental results demonstrate the effectiveness and efficiency of our proposed system.

12.
13.
杨璞  易法令  刘王飞  杨远发 《微机发展》2006,16(11):131-133
The human face is an important channel of human communication and the carrier of complex expressions such as joy, anger, sorrow, and delight, as well as of speech. The construction and deformation of realistic 3D face models is therefore a research hotspot in computer graphics, and producing realistic facial expressions and movements on a 3D face model is one of its difficulties. The paper introduces the Dirichlet Free-Form Deformation (DFFD) algorithm, which is based on Delaunay triangulation and the Dirichlet/Voronoi diagram, to solve this problem. The DFFD technique is described in detail, and DFFD is applied to deform a generic face according to the MPEG-4 facial definition parameters. A hierarchical control method is also proposed in which the facial definition parameters (FDPs) and facial animation parameters (FAPs) control the deformation at two levels; this two-level arrangement of control points produces smooth deformation of the 3D face model, so that the full range of facial expressions can be rendered smoothly and accurately.
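A simplified 2D sketch of feature-point-driven deformation of the kind described above: control points (FDP feature points) are displaced (e.g. by FAPs) and each mesh vertex follows a blend of those displacements. True DFFD uses Sibson (natural-neighbour) coordinates from the Dirichlet/Voronoi diagram; here barycentric coordinates from a Delaunay triangulation stand in as a simplified substitute, and all data are placeholders.

    # Simplified stand-in for DFFD: blend control-point displacements onto mesh
    # vertices using Delaunay barycentric coordinates (not true Sibson coordinates).
    import numpy as np
    from scipy.spatial import Delaunay

    def deform(vertices2d, controls, control_disp):
        tri = Delaunay(controls)                      # triangulate the control points
        simplex = tri.find_simplex(vertices2d)
        out = vertices2d.copy()
        for i, s in enumerate(simplex):
            if s == -1:
                continue                              # outside the control hull: unchanged
            T = tri.transform[s]                      # affine map to barycentric coordinates
            b = T[:2] @ (vertices2d[i] - T[2])
            bary = np.append(b, 1.0 - b.sum())        # weights for the 3 triangle corners
            out[i] += bary @ control_disp[tri.simplices[s]]
        return out

    controls = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
    disp     = np.array([[0., 0.], [0., 0.], [0., 0.], [0., 0.], [0.0, 0.08]])  # e.g. a FAP lifting one lip point
    verts    = np.random.default_rng(3).random((200, 2))
    deformed = deform(verts, controls, disp)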

14.
《Graphical Models》2007,69(2):106-118
Facial Motion Cloning (FMC) is the technique employed to transfer the motion of a virtual face (namely the source) to a mesh representing another face (the target), generally having a different geometry and connectivity. In this paper, we describe a novel method based on the combination of Radial Basis Function (RBF) volume morphing with the encoding capabilities of the widely used MPEG-4 Facial and Body Animation (FBA) international standard. First, we find the morphing function G(P) that precisely fits the shape of the source into the shape of the target face. Then, all the MPEG-4 encoded movements of the source face are transformed using the same function G(P) and mapped to the corresponding vertices of the target mesh. By doing this, we obtain, in a straightforward and simple way, the whole set of MPEG-4 encoded facial movements for the target face in a short time. This animatable version of the target face is able to perform generic face animation stored in an MPEG-4 FBA data stream.
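A minimal sketch of the RBF morphing step described above, assuming SciPy >= 1.7; the feature points and meshes are random placeholders, and only the idea of fitting G(P) on feature points and then pushing the encoded movements through it is shown.

    # Sketch: fit G(P) from source feature points to target feature points with a
    # thin-plate-spline RBF, then apply G to the source mesh and to its animated poses.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(4)
    src_feature_pts = rng.random((40, 3))                      # e.g. MPEG-4 feature points on the source
    dst_feature_pts = src_feature_pts * 1.1 + 0.02             # the same points located on the target face

    G = RBFInterpolator(src_feature_pts, dst_feature_pts, kernel="thin_plate_spline")

    src_vertices = rng.random((1000, 3))                       # source mesh (neutral)
    src_animated = src_vertices + rng.normal(0, 0.005, (1000, 3))  # one encoded facial movement

    dst_vertices = G(src_vertices)                             # target-shaped neutral mesh
    dst_animated = G(src_animated)                             # the same movement, cloned to the target
    dst_displacement = dst_animated - dst_vertices             # per-vertex motion for this expression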

15.
16.
X-VRML for advanced virtual reality applications (cited 2 times: 0 self-citations, 2 by others)
Walczak  K. Cellary  W. 《Computer》2003,36(3):89-92

17.
Personalized 3D face modeling has long been a challenging topic in computer graphics. This paper builds a general photo-based system for personalized face modeling. The positions of the feature points on the standard model are determined according to MPEG-4; starting from a frontal and a profile photograph, the feature points are edited to obtain the positions of the key facial feature points; the standard model annotated with the corresponding feature points is then deformed and texture-mapped, finally producing a realistic personalized face model. The system is easy to operate, builds personalized face models quickly, and provides realistic models for 3D facial animation.

18.
Building on the previously implemented "MPEG-4 compatible facial animation system" and the KD2000-based "MPEG-4 compatible speech animation system", a Chinese speech animation system based on SPI5.0 was designed and implemented. The paper describes the system's design ideas and implementation techniques, including defining Chinese visemes, obtaining Chinese visemes, estimating viseme durations, handling expression tags, and synchronizing speech with animation. The speech animation system can produce high-quality, expressive speech animation on an ordinary PC.
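A minimal sketch of the timing/synchronization step, expanding a viseme sequence with estimated durations into per-frame mouth targets so the animation stays aligned with the audio; the viseme names, durations, and frame rate are placeholders, not the paper's inventory.

    # Sketch: expand (viseme, duration) pairs into one mouth target per animation
    # frame, keeping mouth-shape FAP keyframes aligned with the synthesized audio.
    def visemes_to_frames(viseme_track, fps=25):
        """viseme_track: list of (viseme_name, duration_seconds). Returns per-frame labels."""
        frames = []
        t, frame_dt = 0.0, 1.0 / fps
        for name, duration in viseme_track:
            end = t + duration
            while t < end:
                frames.append(name)          # one mouth target per video frame
                t += frame_dt
        return frames

    track = [("sil", 0.20), ("b", 0.08), ("a", 0.24), ("o", 0.16)]   # placeholder syllable
    print(visemes_to_frames(track)[:10])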

19.
Design and Implementation of the Animation Script Description Language SDL/A (cited 7 times: 2 self-citations, 5 by others)
This paper presents an animation description model based on temporal logic, together with the animation script description language SDL/A designed on that model. The language facilitates description at every level of animation design, can express the stepwise refinement of a design, and can describe the various abstract objects in an animation as well as the synchronization of actions between characters; this basic, general-purpose script description language is also easy to integrate into a powerful CASE environment, the XYZ system. The paper focuses on the design ideas and implementation techniques of SDL/A.

20.
Face and gesture recognition: overview (cited 5 times: 0 self-citations, 5 by others)
Computerised recognition of faces and facial expressions would be useful for human-computer interfaces, and provision for facial animation is to be included in the ISO standard MPEG-4 by 1999. This could also be used for face image compression. The technology could be used for personal identification, and would be proof against fraud. Degrees of difference between people are discussed, with particular regard to identical twins. A particularly good feature for personal identification is the texture of the iris. A problem is that there is more difference between images of the same face with, e.g., different expression or illumination, than there sometimes is between images of different faces. Face recognition by the brain is discussed.

