Similar Documents
 18 similar documents found (search time: 140 ms)
1.
3D facial animation is a hot topic in computer graphics. Because faces are difficult for current 3D animation models to simulate convincingly, and in order to simulate facial expression movements both simply and realistically, a fitted abstract muscle model is proposed. Building on the abstract muscle model commonly used in facial animation, it improves the mathematical model of the wide linear muscle, using deformation parameters to control the muscle's shape and simulating facial muscle actions directly. Simulation experiments show that the fitted abstract muscle model reproduces complex mouth movements more faithfully. Compared with the traditional abstract muscle model, it therefore has low computational complexity and more realistic results, giving it broad application prospects.
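The paper's fitted model itself is not reproduced here, but the classic linear-muscle idea it builds on can be sketched: vertices inside a muscle's influence zone are pulled toward the bone attachment point, with a falloff away from the muscle axis. The cosine falloff and parameter values below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def linear_muscle_displace(vertices, head, tail, contraction, radius):
    """Pull mesh vertices toward the muscle head (bone attachment).

    A simplified linear-muscle rule: each vertex within `radius` of the
    muscle axis (between head and tail) moves toward the head, scaled by
    the contraction amount and a cosine falloff with distance from the axis.
    """
    axis = tail - head
    length = np.linalg.norm(axis)
    axis = axis / length
    out = vertices.copy()
    for i, v in enumerate(vertices):
        d = v - head
        t = np.dot(d, axis)               # position along the muscle axis
        if t < 0 or t > length:
            continue                      # outside the muscle span
        r = np.linalg.norm(d - t * axis)  # distance from the axis
        if r > radius:
            continue                      # outside the influence zone
        falloff = np.cos(np.pi / 2 * r / radius)  # 1 on axis, 0 at the edge
        out[i] = v - contraction * falloff * axis * (t / length)
    return out
```

Driving such a displacement with a time-varying contraction value is what produces the mouth animation described above.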

2.
Wu Xiaojun, Ju Guangliang. Acta Electronica Sinica (《电子学报》), 2016, 44(9): 2141-2147
A markerless facial expression capture method is proposed. First, a uniform facial mesh covering 85% of the facial features is generated from ASM (Active Shape Model) feature points. Second, an expression capture method based on this model is presented: optical flow tracks the displacement of the feature points, with a particle filter stabilizing the tracking result; the feature-point displacements drive the mesh as a whole and serve as the initial values for mesh tracking, with a mesh deformation algorithm driving the mesh itself. Finally, the captured expression data drive different face models, using different driving methods according to the model's dimensionality to reproduce the expression animation. Experimental results show that the proposed algorithm captures facial expressions well, and mapping the captured expressions onto both a 2D cartoon face and a 3D virtual face model yields good animation results.
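The particle-filter stabilization step described above can be sketched in isolation: a bootstrap filter random-walks particles between frames, weights them by a Gaussian likelihood around each raw optical-flow measurement, and resamples. The noise parameters and particle count are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_smooth(measurements, n_particles=500,
                           motion_std=1.0, meas_std=2.0):
    """Smooth a noisy 2D feature-point track with a bootstrap particle filter.

    Particles random-walk between frames (motion model), are weighted by a
    Gaussian likelihood around each raw measurement (observation model),
    and are resampled. Returns the weighted-mean trajectory.
    """
    particles = np.tile(measurements[0], (n_particles, 1)).astype(float)
    smoothed = [measurements[0]]
    for z in measurements[1:]:
        particles += rng.normal(0, motion_std, particles.shape)  # predict
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-d2 / (2 * meas_std ** 2))                    # weight
        w /= w.sum()
        smoothed.append(w @ particles)                           # estimate
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        particles = particles[idx]
    return np.array(smoothed)
```

In the pipeline above, the smoothed track (rather than the raw optical-flow output) would supply the feature-point displacements that initialize the mesh deformation.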

3.
This paper presents a multi-line-spectrum representation of eye movements and expressions, together with a method for acquiring it. Based on a 3D muscle-controlled model of the human eye (3D-MCE), eye movements and expressions are represented by a set of curves of normalized muscle contraction over time. A method for acquiring the eye's multi-line spectrum from a single video is given: by tracking the displacement of the eye's control points, the muscle contraction is computed, yielding its time-varying curves. Experimental results show that the acquired curves can simulate eye movements and some expressions fairly realistically.

4.
Computer-based face modeling, facial expression animation, and caricature of facial portraits are popular research topics whose techniques and products have broad application prospects and large economic benefits. This paper studies facial expression features, selects a suitable combination of facial-expression-animation and face-morphing techniques, and designs a system that produces cartoon facial animation effects.

5.
Li Xiaofeng, Zhao Hai, Ge Xin, Cheng Xianyong. Acta Electronica Sinica (《电子学报》), 2010, 38(5): 1167-1171
Because of the uncertainty of the environment and the complexity of the human face, tracking facial expressions and depicting them on a computer is a hard problem. For this problem, a comparatively simple solution is proposed, distinct from traditional approaches such as pattern recognition and sample learning: under video capture, frame images are analyzed; after comparing several edge-detection methods, a facial expression modeling method based on edge-feature extraction is adopted to extract and model the facial feature quantities used for expression depiction, and, combined with curve fitting and model control, cartoon face portraits and 2D expression animations are generated. The system produces a cartoon portrait from the input data and faithfully reproduces the changes in expression.

6.
Dynamic models of muscle and skin in facial animation    Cited by: 1 (self-citations: 1, others: 0)
This paper presents dynamic models of facial muscle and skin motion grounded in facial histology, anatomy, and biomechanics. The model constructs the facial contour from a hierarchically designed regular logical grid, builds a hierarchically simulated tissue model based on the action units of the Facial Action Coding System, and controls the motion of facial tissue through the dynamic and elastic properties of muscle together with the volume-preserving and position-restoring forces of the skin layer, thereby producing animations of the corresponding facial expressions.

7.
A 3D scanner can accurately capture the geometry and texture of a face, but the raw scan data is just one continuous surface; it does not match the actual structure of a face and cannot be used for facial animation. To address this, a face modeling method based on 3D scan data is proposed: a generic face model with a complete structure is first coarsely fitted to the scan data, and detail-reconstruction techniques then recover the specific face's surface detail and skin texture. Experiments show that the 3D face model built this way is highly realistic and structurally complete, and can generate continuous, natural expression animation.

8.
Real-time, realistic facial expression animation is a challenging topic in computer graphics. Addressing the computational complexity and coarse results of existing physically based algorithms, this paper describes a combined morphing animation algorithm, developed on the Direct3D platform, that couples a physical model with an expression-morphing algorithm, along with the process of using it to generate realistic expression animation. Experiments show that this method greatly improves the realism of the generated facial expression animation.

9.
Modeling and animating a specific 3D face is a fascinating area of computer graphics. This paper proposes a new method for building and animating a specific face model from two orthogonal photographs. First, the snake active-contour tracking technique automatically locates the facial feature points; then a local elastic deformation method adapts a generic face model to the specific face, and texture mapping is performed with a high-resolution texture image produced by image mosaicing. The method computes local facial deformation from the displacements of the feature points and the positions of non-feature points relative to them, and can also realize dramatic facial changes and motions. Combined with a muscle model, it performs real-time facial animation quickly and efficiently. Experimental results are presented.

10.
Qiu Yu, Zhao Jieyu, Wang Yanfang. Acta Electronica Sinica (《电子学报》), 2016, 44(6): 1307-1313
The spatiotemporal relations among facial muscles play an important role in facial expression recognition, yet current models cannot efficiently capture the face's complex global spatiotemporal relations and have therefore seen little use. To solve this, this paper proposes a facial expression modeling method based on interval algebra Bayesian networks, which captures not only the spatial relations of the face but also its complex temporal relations, and thus recognizes expressions more effectively. The method uses only tracking-based features and needs no manual labeling of peak frames, which speeds up training and recognition. Experiments on the standard CK+ and MMI databases show that the method effectively improves facial expression recognition accuracy.

11.
The development of artificial muscles has focused on a high energy–weight ratio and soft structures; however, little work has been done on muscle-like contraction behaviors. This unfortunately leads to a lack of comfort and safety, which is especially important for robotic applications like orthotics and exoskeletons. In this paper, we propose a contraction control method for a tendon-sheath artificial muscle that contracts and relaxes like biological muscles. In view of the nonlinear transmission characteristics of the tendon-sheath artificial muscle, a transmission model is established and its accuracy is verified through experiments. A muscle-like contraction control method is then proposed based on the transmission model and the Hill-type muscle model. Through this method, the contraction force of the tendon-sheath artificial muscle can be adjusted according to the output displacement and velocity of the tendon-sheath mechanism estimated by the transmission model. Isometric contraction and quick-release experiments are then conducted. The experimental results demonstrate that this control method allows the tendon-sheath artificial muscle to contract with specific muscle-like force–length and force–velocity properties.
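The Hill-type muscle model referenced above can be sketched as follows: an isometric force scaled by a Gaussian force-length curve, combined with Hill's hyperbolic force-velocity relation for shortening, (F + a)(v + b) = (F0 + a)b. All parameter values here, and the simple linear lengthening branch, are illustrative assumptions rather than the paper's fitted constants.

```python
import math

def hill_force(l, v, f_max=100.0, l_opt=1.0, a=25.0, b=0.25):
    """Hill-type contraction force (illustrative parameters).

    l: normalized fiber length; v: shortening velocity (positive = shortening).
    Combines a Gaussian force-length factor around the optimal length with
    Hill's hyperbolic force-velocity relation F = (F0*b - a*v) / (v + b).
    """
    f_l = math.exp(-((l - l_opt) / 0.45) ** 2)   # force-length factor, 1 at l_opt
    f0 = f_max * f_l                             # isometric force at this length
    if v >= 0:                                   # concentric (shortening)
        return max(0.0, (f0 * b - a * v) / (v + b))
    # eccentric (lengthening): simple linear rise above isometric (assumption)
    return f0 * (1.0 + 0.5 * min(-v, 1.0))
```

Note the characteristic behavior: force equals the isometric maximum at zero velocity, drops hyperbolically as shortening velocity rises, and exceeds the isometric level during lengthening.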

12.
People instinctively recognize facial expression as a key to nonverbal communication, which has been confirmed by many different research projects. A change in intensity or magnitude of even one specific facial expression can cause different interpretations. A systematic method for generating facial expression syntheses, while mimicking realistic facial expressions and intensities, is strongly needed in various applications. Although manually produced animation is typically of high quality, the process is slow and costly, and therefore often impractical for low-polygonal applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygonal applications, by generating a customized face, as well as by cloning facial expressions. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values in a customized 3D face and its embedded muscles, expressions for any target face(s) can be easily cloned by using the same set of parameters. Through experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. Additionally, we discuss the method's overall usability and how it allows us to automatically adjust a customized face with embedded facial muscles while mimicking the user's facial configuration, expression, and intensity.
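The core cloning idea, reusing one set of muscle parameters on any target face that shares the same embedded muscle layout, scaled by an intensity value, can be sketched as below. The muscle names and dictionary layout are hypothetical illustrations, not the paper's data structures.

```python
def clone_expression(expr_params, intensity, target_face):
    """Clone a source expression onto a target face at a given intensity.

    expr_params maps embedded-muscle names to contraction values defined on
    the source face; because the target shares the same muscle layout, the
    same parameter set, scaled by intensity in [0, 1], drives it directly.
    """
    scaled = {m: c * intensity for m, c in expr_params.items()}
    return {m: target_face["rest_pose"][m] + c for m, c in scaled.items()}

# Hypothetical example: a smile at half intensity on a neutral target face.
smile = {"zygomatic_major_l": 0.6, "zygomatic_major_r": 0.6}
target = {"rest_pose": {"zygomatic_major_l": 0.0, "zygomatic_major_r": 0.0}}
half_smile = clone_expression(smile, 0.5, target)
```

Varying `intensity` continuously is what lets the same source parameters reproduce the graded expression magnitudes the abstract emphasizes.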

13.
The main goal of this paper is to illustrate a geometric analysis of 3D facial shapes in the presence of varying facial expressions. This approach consists of the following two main steps: (1) Each facial surface is automatically denoised and preprocessed to result in an indexed collection of facial curves. During this step, one detects the tip of the nose and defines a surface distance function with that tip as the reference point. The level curves of this distance function are the desired facial curves. (2) Comparisons between faces are based on optimal deformations from one to another. This, in turn, is based on optimal deformations of the corresponding facial curves across surfaces under an elastic metric. The experimental results, generated using a subset of the Face Recognition Grand Challenge v2 data set, demonstrate the success of the proposed framework in recognizing people under different facial expressions. The recognition rates obtained here exceed those for a baseline ICP algorithm on the same data set.
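Step (1) above, extracting level curves of a surface distance function rooted at the nose tip, can be approximated on a triangle mesh by running Dijkstra over the edge graph (a stand-in for the true geodesic distance) and grouping vertices into equal-width distance bands. This is a simplified sketch of the idea, not the paper's exact surface distance construction.

```python
import heapq
import math
from collections import defaultdict

def facial_curves(vertices, edges, tip, n_bands=5):
    """Bin mesh vertices into level bands of distance from the nose tip.

    Dijkstra over the edge-length-weighted mesh graph approximates the
    surface distance function; each band of vertices approximates one
    level curve ("facial curve") of that function.
    """
    adj = defaultdict(list)
    for i, j in edges:
        w = math.dist(vertices[i], vertices[j])
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = {tip: 0.0}
    heap = [(0.0, tip)]
    while heap:                                   # standard Dijkstra loop
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    dmax = max(dist.values()) or 1.0
    bands = defaultdict(list)
    for v, d in dist.items():                     # group into equal-width bands
        bands[min(int(n_bands * d / dmax), n_bands - 1)].append(v)
    return dict(bands)
```

The elastic-metric comparison in step (2) would then operate on these per-band curves rather than on the raw surfaces.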

14.
This paper describes a new and efficient method for facial expression generation on cloned synthetic head models. The system uses abstract facial muscles called action units (AUs) based on both anatomical muscles and the facial action coding system. The facial expression generation method has real-time performance, is less computationally expensive than physically based models, and has greater anatomical correspondence than rational free-form deformation or spline-based techniques. Automatic cloning of a real human head is done by adapting a generic facial and head mesh to Cyberware laser scanned data. The conformation of the generic head to the individual data and the fitting of texture onto it are based on a fully automatic feature extraction procedure. Individual facial animation parameters are also automatically estimated during the conformation process. The entire animation system is hierarchical; emotions and visemes (the visual mouth shapes that occur during speech) are defined in terms of the AUs, and higher-level gestures are defined in terms of AUs, emotions, and visemes as well as the temporal relationships between them. The main emphasis of the paper is on the abstract muscle model, along with limited discussion on the automatic cloning process and higher-level animation control aspects.
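The hierarchical control scheme described above, emotions and visemes defined in terms of AUs, and gestures in terms of both, can be sketched as nested definitions that flatten to weighted AU activations. The particular AU numbers, names, and weights below are illustrative assumptions (AU6 + AU12 is the standard FACS pairing for happiness), not the paper's actual tables.

```python
# Leaf controls are action units (AUs); higher levels are defined in terms
# of AUs and of each other, then flattened to AU activations for playback.
AUS = {"AU6": 0.0, "AU12": 0.0, "AU26": 0.0}  # cheek raiser, lip corner puller, jaw drop

DEFINITIONS = {
    "happiness": {"AU6": 0.7, "AU12": 1.0},             # emotion -> AUs
    "viseme_ah": {"AU26": 0.8},                         # mouth shape for /a/
    "laugh":     {"happiness": 1.0, "viseme_ah": 0.6},  # gesture -> emotion + viseme
}

def flatten(name, weight=1.0):
    """Recursively resolve a named expression to weighted AU activations."""
    if name in AUS:
        return {name: weight}
    acts = {}
    for part, w in DEFINITIONS[name].items():
        for au, a in flatten(part, weight * w).items():
            acts[au] = acts.get(au, 0.0) + a
    return acts
```

A timeline layer (omitted here) would additionally schedule these flattened activations to encode the temporal relationships the abstract mentions.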

15.
Facial expressions are an important source of information for human interaction. Therefore, it would be desirable if computers were able to use this information to interact more naturally with the user. However, facial expressions are not always unambiguously interpreted even by competent humans. Consequently, soft computing techniques in which interpretations are given some belief value would seem appropriate. This paper describes how the mass assignment approach to constructing fuzzy sets from probability distributions has been applied to the low-level classification of pixels into facial feature classes based on their colour. It will also describe how similar approaches can be used for the analysis of facial expressions themselves.

16.
17.
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example obtained from target faces, our approach can transfer expressions by motion retargeting to facial sketches. However, in practical applications, the image noise in each frame will reduce the feature extraction accuracy from source faces, and the shape difference between source and target faces will influence the animation quality for representing expressions. To solve these difficulties, we propose a robust neighbor-expression transfer (NET) model, which aims at modeling the spatial relations among sparse facial features. By learning expression behaviors from neighbor face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches. The motion vectors on the source face are adjusted from coarse to fine on the target face. Accordingly, the animation results are generated to replicate source expressions. Experimental results demonstrate that the proposed method can effectively and robustly transfer expressions by noisy animation signals.

18.
The face is the window to the soul. This is what the 19th-century French doctor Duchenne de Boulogne thought. Using electric shocks to stimulate muscular contractions and induce bizarre-looking expressions, he wanted to understand how muscles produce facial expressions and reveal the most hidden human emotions. Two centuries later, this research field remains very active. We see automatic systems for recognizing emotion and facial expression being applied in medicine, security and surveillance systems, advertising and marketing, among others. However, there are still fundamental questions that scientists are trying to answer when analyzing a person’s emotional state from their facial expressions. Is it possible to reliably infer someone’s internal state based only on their facial muscles’ movements? Is there a universal facial setting to express basic emotions such as anger, disgust, fear, happiness, sadness, and surprise? In this research, we seek to address some of these questions through convolutional neural networks. Unlike most studies in the prior art, we are particularly interested in examining whether characteristics learned from one group of people can be generalized to predict another’s emotions successfully. In this sense, we adopt a cross-dataset evaluation protocol to assess the performance of the proposed methods. Our baseline is a custom-tailored model initially used in face recognition to categorize emotion. By applying data visualization techniques, we improve our baseline model, deriving two other methods. The first method aims to direct the network’s attention to regions of the face considered important in the literature but ignored by the baseline model, using patches to hide random parts of the facial image so that the network can learn discriminative characteristics in different regions. 
The second method explores a loss function that generates data representations in high-dimensional spaces so that examples of the same emotion class are close and examples of different classes are distant. Finally, we investigate the complementarity between these two methods, proposing a late-fusion technique that combines their outputs through the multiplication of probabilities. We compare our results to an extensive list of works evaluated in the same adopted datasets. In all of them, when compared to works that followed an intra-dataset protocol, our methods present competitive numbers. Under a cross-dataset protocol, we achieve state-of-the-art results, outperforming even commercial off-the-shelf solutions from well-known tech companies.
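The first method's augmentation, hiding random patches of the face image so the network must learn discriminative cues across many regions, can be sketched in a few lines of numpy. Patch size and count here are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def hide_random_patches(image, n_patches=4, patch=16, rng=None):
    """Zero out random square patches of a face image (H, W[, C]).

    An occlusion augmentation in the spirit of the paper's first method:
    by randomly hiding parts of the face during training, the classifier
    cannot rely on a single dominant region to predict the emotion.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape[:2]
    for _ in range(n_patches):
        y = int(rng.integers(0, max(1, h - patch)))
        x = int(rng.integers(0, max(1, w - patch)))
        out[y:y + patch, x:x + patch] = 0  # occlude one square region
    return out
```

Applied per batch during training, this plays the role the abstract describes of directing the network's attention toward facial regions the baseline model otherwise ignored.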

