18 similar documents found; search took 140 ms
1.
3D facial animation is a hot topic in computer graphics. Current 3D animation models simulate the human face with difficulty and limited realism; to simulate facial expression motions simply and realistically, a fitted abstract muscle model is proposed. Building on the abstract muscle model commonly used in facial animation, it improves the mathematical model of the wide linear muscle, using deformation parameters to control the muscle's shape and simulating facial muscle actions directly. Simulation experiments show that the fitted abstract muscle model reproduces complex mouth movements more faithfully. Compared with the traditional abstract muscle model, the fitted model is computationally inexpensive and produces more realistic results, giving it broad application prospects.
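The wide linear muscle described above can be sketched in the spirit of a Waters-style linear muscle: vertices inside an influence cone are pulled toward the muscle's attachment point, scaled by angular and radial falloffs. The falloff shapes and parameter names below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def linear_muscle_displace(p, head, tail, influence_angle, fall_start, fall_end, k):
    """Displace vertex p toward the muscle head (Waters-style linear muscle).

    head: static attachment point; tail: insertion point defining the pull axis.
    Vertices inside the influence cone move toward the head, scaled by an
    angular falloff and a radial falloff between fall_start and fall_end.
    """
    p, head, tail = map(np.asarray, (p, head, tail))
    axis = tail - head
    v = p - head
    d = np.linalg.norm(v)
    if d == 0.0:
        return p
    # angular falloff: no effect outside the influence cone
    cos_theta = np.clip(np.dot(v, axis) / (d * np.linalg.norm(axis)), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta > influence_angle:
        return p
    ang = np.cos(theta / influence_angle * np.pi / 2)
    # radial falloff: ramps up to fall_start, then down to zero at fall_end
    if d <= fall_start:
        rad = np.cos((1 - d / fall_start) * np.pi / 2)
    elif d <= fall_end:
        rad = np.cos((d - fall_start) / (fall_end - fall_start) * np.pi / 2)
    else:
        return p
    # contract: move the vertex toward the muscle head
    return p + k * ang * rad * (head - p) / d
```

Varying `k` (the contraction strength) over time animates the muscle pull; the fitted model in the paper replaces these fixed falloffs with parameter-controlled shapes.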
2.
A markerless facial expression capture method is proposed. First, a uniform face mesh covering 85% of the facial features is generated from ASM (Active Shape Model) feature points. Second, an expression capture method is built on this face model: optical flow tracks the displacements of the feature points, with particle filtering to stabilize the tracking; the feature-point displacements drive the overall mesh as initial values for mesh tracking, and a mesh deformation algorithm drives the mesh itself. Finally, the captured expression data drive different face models, with the driving method chosen according to the model's dimensionality, to reproduce the expression animation. Experimental results show that the proposed algorithm captures facial expressions well, and mapping the captured expressions onto both 2D cartoon faces and 3D virtual face models yields good animation results.
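The particle-filter stabilization step can be illustrated with a minimal bootstrap filter over a noisy 2D feature-point track; the constant-position motion model, noise levels, and particle count below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def particle_filter_smooth(observations, n_particles=500, proc_std=1.0, obs_std=3.0, seed=0):
    """Bootstrap particle filter smoothing a noisy 2D feature-point track."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(observations, dtype=float)
    # initialize particles around the first observation
    particles = obs[0] + rng.normal(0, obs_std, size=(n_particles, 2))
    estimates = [particles.mean(axis=0)]
    for z in obs[1:]:
        particles += rng.normal(0, proc_std, size=particles.shape)  # predict
        d2 = ((particles - z) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / obs_std**2)                          # Gaussian likelihood
        w /= w.sum()
        estimates.append((particles * w[:, None]).sum(axis=0))      # weighted mean
        idx = rng.choice(n_particles, size=n_particles, p=w)        # resample
        particles = particles[idx]
    return np.array(estimates)
```

The filtered track, rather than the raw optical-flow output, would then supply the feature-point displacements that initialize the mesh deformation.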
4.
Using computers for facial modeling, facial expression animation, and caricature-style exaggeration of portraits is a popular research topic, with broad application prospects and substantial economic potential. This paper studies facial expression features, selects suitable techniques for facial expression animation and face deformation, combines the two, and designs a system that produces cartoon-style facial animation.
8.
Real-time, realistic facial expression animation is a challenging topic in computer graphics. Addressing the computational cost and coarse results of existing physically based algorithms, this paper presents a combined morphing animation algorithm, developed on the Direct3D platform, that couples a physical model with an expression-morphing algorithm, and describes how it is used to generate realistic expression animation. Experiments show that this method greatly enhances the realism of the generated facial expression animation.
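The morphing side of such a combined algorithm amounts to blending expression morph targets against a neutral mesh; a minimal sketch (the blending formula is the standard morph-target one, not necessarily the paper's exact scheme):

```python
import numpy as np

def morph(neutral, targets, weights):
    """Blend expression morph targets: neutral + sum_i w_i * (target_i - neutral)."""
    neutral = np.asarray(neutral, dtype=float)
    out = neutral.copy()
    for t, w in zip(targets, weights):
        out += w * (np.asarray(t, dtype=float) - neutral)
    return out
```

Animating the weights from 0 to 1 over successive frames produces the gradual transition between expressions.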
9.
Modeling and animating a specific 3D face is a field of great interest in computer graphics. This paper presents a new method for building and animating a specific face model from two orthogonal photographs. First, the active contour (snake) tracking technique automatically locates the facial feature points. Then, a local elastic deformation method adapts a generic face model to the specific face, and a high-resolution texture image generated by image mosaicing is used for texture mapping. The method computes local facial deformation from the displacements of the feature points and the positions of non-feature points relative to them, and can also realize drastic facial changes and motions. Combined with a muscle model, it animates the face in real time, and is fast and efficient. Experimental results are given.
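The local elastic deformation idea, displacing non-feature vertices from the feature-point displacements and relative positions, can be approximated with inverse-distance (Shepard) weighting; this is a stand-in for illustration, not the paper's exact formulation.

```python
import numpy as np

def elastic_deform(vertices, feat_src, feat_dst, eps=1e-8, p=2.0):
    """Move each vertex by an inverse-distance-weighted blend of the
    feature-point displacements (Shepard interpolation stand-in for the
    paper's local elastic deformation)."""
    V = np.asarray(vertices, float)
    S = np.asarray(feat_src, float)
    D = np.asarray(feat_dst, float) - S        # feature-point displacements
    out = np.empty_like(V)
    for i, v in enumerate(V):
        d = np.linalg.norm(S - v, axis=1)      # distance to each feature point
        w = 1.0 / (d ** p + eps)
        w /= w.sum()
        out[i] = v + w @ D                     # blended displacement
    return out
```

A vertex coinciding with a feature point follows that point almost exactly, while distant vertices receive a smoothed average of nearby feature motions.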
Mechatronics, 2021
The development of artificial muscles has focused on a high energy–weight ratio and soft structures; however, little work has been done towards muscle-like contraction behaviors. This unfortunately leads to a lack of comfort and safety, which is especially important for robotic applications like orthotics and exoskeletons. In this paper, we propose a contraction control method for a tendon-sheath artificial muscle to contract and relax like biological muscles. In view of the nonlinear transmission characteristics of the tendon-sheath artificial muscle, a transmission model is established and its accuracy is verified through experiments. A muscle-like contraction control method is then proposed based on the transmission model and the Hill-type muscle model. Through this method, the contraction force of the tendon-sheath artificial muscle can be adjusted according to the output displacement and velocity of the tendon-sheath mechanism estimated by the transmission model. Isometric contraction and quick-release experiments are then conducted. The experimental results demonstrate that this control method allows the tendon-sheath artificial muscle to contract with specific muscle-like force–length and force–velocity properties.
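A Hill-type muscle model of the kind referenced above combines activation with force-length and force-velocity relations; a minimal sketch with commonly used curve shapes (the Gaussian width and the hyperbolic shape factor are assumptions, not the paper's values):

```python
import numpy as np

def hill_force(a, l, v, f_max=100.0, l_opt=1.0, v_max=10.0):
    """Hill-type contraction force: activation * force-length * force-velocity.

    Gaussian force-length curve around the optimal length l_opt and a classic
    hyperbolic force-velocity curve for shortening (v >= 0)."""
    f_l = np.exp(-((l - l_opt) / 0.45) ** 2)                      # force-length
    f_v = (v_max - v) / (v_max + 4.0 * v) if v < v_max else 0.0   # force-velocity
    return f_max * a * f_l * max(f_v, 0.0)
```

At the optimal length and zero shortening velocity the muscle produces its maximum active force, and the force falls off as the muscle shortens faster, which is the behavior the isometric and quick-release experiments probe.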
12.
Seongah Chin, Kyoung-Yun Kim. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2009, 39(3): 315-330
People instinctively recognize facial expression as a key to nonverbal communication, which has been confirmed by many different research projects. A change in intensity or magnitude of even one specific facial expression can cause different interpretations. A systematic method for generating facial expression syntheses, while mimicking realistic facial expressions and intensities, is a strong need in various applications. Although manually produced animation is typically of high quality, the process is slow and costly, and therefore often unrealistic for low polygonal applications. In this paper, we present a simple and efficient emotional-intensity-based expression cloning process for low-polygonal-based applications, by generating a customized face, as well as by cloning facial expressions. We define intensity mappings to measure expression intensity. Once a source expression is determined by a set of suitable parameter values in a customized 3D face and its embedded muscles, expressions for any target face(s) can be easily cloned by using the same set of parameters. Through experimental study, including facial expression simulation and cloning with intensity mapping, our research reconfirms traditional psychological findings. Additionally, we discuss the method's overall usability and how it allows us to automatically adjust a customized face with embedded facial muscles while mimicking the user's facial configuration, expression, and intensity.
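The cloning idea, reusing one parameter set across target faces with an intensity mapping, can be sketched as scaling a shared muscle-parameter dictionary; the muscle name used here is illustrative, not the paper's:

```python
def clone_expression(base_params, intensity):
    """Scale a source expression's muscle parameters by an intensity in [0, 1];
    the same scaled set can then be applied to any target face rig that
    shares the muscle parameterization."""
    assert 0.0 <= intensity <= 1.0
    return {muscle: value * intensity for muscle, value in base_params.items()}
```

Because source and target faces share embedded muscles and parameters, cloning an expression at a chosen intensity is just a lookup and a scale, with no per-target retargeting work.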
13.
Boulbaba Ben Amor, Hassen Drira, Lahoucine Ballihi, Anuj Srivastava, Mohamed Daoudi. Annals of Telecommunications, 2009, 64(5-6): 369-379
The main goal of this paper is to illustrate a geometric analysis of 3D facial shapes in the presence of varying facial expressions. This approach consists of the following two main steps: (1) Each facial surface is automatically denoised and preprocessed to result in an indexed collection of facial curves. During this step, one detects the tip of the nose and defines a surface distance function with that tip as the reference point. The level curves of this distance function are the desired facial curves. (2) Comparisons between faces are based on optimal deformations from one to another. This, in turn, is based on optimal deformations of the corresponding facial curves across surfaces under an elastic metric. The experimental results, generated using a subset of the Face Recognition Grand Challenge v2 data set, demonstrate the success of the proposed framework in recognizing people under different facial expressions. The recognition rates obtained here exceed those for a baseline ICP algorithm on the same data set.
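Step (1), extracting level curves of a distance function from the nose tip, can be sketched by banding surface points on their distance to the tip; plain Euclidean distance stands in here for the surface (geodesic) distance the paper uses:

```python
import numpy as np

def facial_curves(points, tip, n_curves=5):
    """Index surface points into level-set bands of the distance to the nose tip.

    Euclidean distance is a simple stand-in for a surface distance function."""
    P = np.asarray(points, float)
    d = np.linalg.norm(P - np.asarray(tip, float), axis=1)
    edges = np.linspace(0.0, d.max() + 1e-9, n_curves + 1)
    labels = np.digitize(d, edges[1:-1])   # band index 0..n_curves-1 per point
    return [P[labels == i] for i in range(n_curves)]
```

Each band approximates one facial curve; step (2) would then compare corresponding curves across faces under an elastic metric.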
14.
This paper describes a new and efficient method for facial expression generation on cloned synthetic head models. The system uses abstract facial muscles called action units (AUs) based on both anatomical muscles and the facial action coding system. The facial expression generation method has real-time performance, is less computationally expensive than physically based models, and has greater anatomical correspondence than rational free-form deformation or spline-based techniques. Automatic cloning of a real human head is done by adapting a generic facial and head mesh to Cyberware laser scanned data. The conformation of the generic head to the individual data and the fitting of texture onto it are based on a fully automatic feature extraction procedure. Individual facial animation parameters are also automatically estimated during the conformation process. The entire animation system is hierarchical; emotions and visemes (the visual mouth shapes that occur during speech) are defined in terms of the AUs, and higher-level gestures are defined in terms of AUs, emotions, and visemes as well as the temporal relationships between them. The main emphasis of the paper is on the abstract muscle model, along with limited discussion on the automatic cloning process and higher-level animation control aspects.
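The hierarchical control described, emotions and visemes defined in terms of AUs, can be sketched as a table that expands a high-level emotion into the AU activations driving the mesh; the AU names and weights below are illustrative assumptions, not the paper's definitions:

```python
# Emotions defined as weighted combinations of abstract action units (AUs).
EMOTIONS = {
    "joy":      {"AU6_cheek_raiser": 0.7, "AU12_lip_corner_puller": 1.0},
    "surprise": {"AU1_inner_brow_raiser": 1.0, "AU26_jaw_drop": 0.8},
}

def emotion_to_aus(emotion, intensity=1.0):
    """Expand a high-level emotion into the AU activations that drive the mesh."""
    return {au: w * intensity for au, w in EMOTIONS[emotion].items()}
```

Higher-level gestures would be defined one layer up, as timed sequences over emotions and visemes expanded the same way.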
15.
Facial expressions are an important source of information for human interaction. Therefore, it would be desirable if computers were able to use this information to interact more naturally with the user. However, facial expressions are not always unambiguously interpreted even by competent humans. Consequently, soft computing techniques in which interpretations are given some belief value would seem appropriate. This paper describes how the mass assignment approach to constructing fuzzy sets from probability distributions has been applied to the low-level classification of pixels into facial feature classes based on their colour. It will also describe how similar approaches can be used for the analysis of facial expressions themselves.
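Constructing fuzzy memberships from a colour distribution can be sketched with a simple probability-to-possibility normalization; note this is a stand-in for illustration, not the full mass-assignment construction the paper actually uses:

```python
def memberships_from_histogram(hist):
    """Turn a colour histogram (an empirical probability distribution) into
    fuzzy membership values by normalizing against the modal count.

    A simple probability-to-possibility normalization, not the paper's
    mass-assignment method."""
    peak = max(hist.values())
    return {colour: count / peak for colour, count in hist.items()}
```

A pixel's colour then gets a graded membership in each facial-feature class rather than a hard label, which is the belief-value behavior the abstract motivates.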
Yang Yang, Nanning Zheng, Yuehu Liu, Shaoyi Du, Yuanqi Su, Yoshifumi Nishio. Signal Processing, 2011, 91(11): 2465-2477
This paper presents a hierarchical animation method for transferring facial expressions extracted from a performance video to different facial sketches. Without any expression example obtained from target faces, our approach can transfer expressions by motion retargeting to facial sketches. However, in practical applications, the image noise in each frame will reduce the feature extraction accuracy from source faces. Moreover, the shape difference between source and target faces will influence the animation quality for representing expressions. To solve these difficulties, we propose a robust neighbor-expression transfer (NET) model, which aims at modeling the spatial relations among sparse facial features. By learning expression behaviors from neighbor face examples, the NET model can reconstruct facial expressions from noisy signals. Based on the NET model, we present a hierarchical method to animate facial sketches. The motion vectors on the source face are adjusted from coarse to fine on the target face. Accordingly, the animation results are generated to replicate source expressions. Experimental results demonstrate that the proposed method can effectively and robustly transfer expressions by noisy animation signals.
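The NET model's core idea, reconstructing noisy features from neighbor expression examples, can be sketched as a least-squares projection onto the span of the neighbors; this is a simplification of the paper's model, for illustration only:

```python
import numpy as np

def net_reconstruct(noisy, neighbors):
    """Reconstruct a noisy feature vector as the least-squares combination of
    neighbor expression examples (the rows of `neighbors`)."""
    B = np.asarray(neighbors, float).T          # features x examples
    y = np.asarray(noisy, float)
    w, *_ = np.linalg.lstsq(B, y, rcond=None)   # combination weights
    return B @ w                                # denoised reconstruction
```

Noise components outside the span of plausible neighbor expressions are projected away, which is how learning from neighbor examples stabilizes the transferred motion.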
18.
The face is the window to the soul. This is what the 19th-century French doctor Duchenne de Boulogne thought. Using electric shocks to stimulate muscular contractions and induce bizarre-looking expressions, he wanted to understand how muscles produce facial expressions and reveal the most hidden human emotions. Two centuries later, this research field remains very active. We see automatic systems for recognizing emotion and facial expression being applied in medicine, security and surveillance systems, advertising and marketing, among others. However, there are still fundamental questions that scientists are trying to answer when analyzing a person’s emotional state from their facial expressions. Is it possible to reliably infer someone’s internal state based only on their facial muscles’ movements? Is there a universal facial setting to express basic emotions such as anger, disgust, fear, happiness, sadness, and surprise? In this research, we seek to address some of these questions through convolutional neural networks. Unlike most studies in the prior art, we are particularly interested in examining whether characteristics learned from one group of people can be generalized to predict another’s emotions successfully. In this sense, we adopt a cross-dataset evaluation protocol to assess the performance of the proposed methods. Our baseline is a custom-tailored model initially used in face recognition to categorize emotion. By applying data visualization techniques, we improve our baseline model, deriving two other methods. The first method aims to direct the network’s attention to regions of the face considered important in the literature but ignored by the baseline model, using patches to hide random parts of the facial image so that the network can learn discriminative characteristics in different regions. 
The second method explores a loss function that generates data representations in high-dimensional spaces so that examples of the same emotion class are close and examples of different classes are distant. Finally, we investigate the complementarity between these two methods, proposing a late-fusion technique that combines their outputs through the multiplication of probabilities. We compare our results to an extensive list of works evaluated in the same adopted datasets. In all of them, when compared to works that followed an intra-dataset protocol, our methods present competitive numbers. Under a cross-dataset protocol, we achieve state-of-the-art results, outperforming even commercial off-the-shelf solutions from well-known tech companies.
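The late-fusion step, combining the two methods' outputs through the multiplication of probabilities, can be sketched directly:

```python
import numpy as np

def fuse_probs(p1, p2):
    """Late fusion of two classifiers by element-wise multiplication of their
    class-probability vectors, renormalized to sum to 1."""
    p = np.asarray(p1, float) * np.asarray(p2, float)
    return p / p.sum()
```

Multiplicative fusion rewards classes on which both models agree: a class must receive reasonable probability from both branches to survive the product.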